The purpose of MassTransit request/response
I'm just getting started implementing event messages in microservices with MassTransit. Publishing messages makes a lot of sense to me and clearly benefits the responsiveness of a microservice. However, request/response seems comparable to a synchronous API call: the response has to return before the overall logic can complete. I just don't get why and when I should implement this pattern. Isn't an API call much simpler than placing a request in a queue and waiting until it is processed to get the response?
Massive questions here!
Isn't an API call much simpler than placing the request in a queue and waiting until it is processed to get the response?
In simple terms, no. What if this was a customer completing an order, and there was a network timeout, a database lock, or some other common failure? The order is lost.
What about setting up DNS, load balancers, multiple instances of the API, Swagger docs, validation, good error responses, and then monitoring: Datadog, APM, and logging? See you in a few months. :)
With messaging, you create a consumer, start it, get reliable delivery, and are processing in a few hours, not months.
Load balancing, service discovery, retry, and error logging are built in, not to mention excellent operability with APM tracing.
request and response does seems comparable with the API
It's not. It's better, but it's a completely different way of looking at the problem. You have to think in terms of events: what happens, what data changes, and what you need to care about.
Look into event-driven architecture, choreography, orchestration, and sagas.
This can't be fully explained within a Stack Overflow post; you have to change your mindset a little.
But in simple terms, a traditional API completes a whole process, say buying a PS5. When the API call finishes, the order is complete. This can take 5 to 30 seconds. During this time a server thread is locked, the customer is waiting, and the chance of something going wrong is high.
If you raise events, the call completes instantly. The customer returns to a holding page and the server is free to process other orders.
When a Consumer processes the order, an event is raised and the customer can see the order was successful.
It's also possible to return data from an event. However, I have found that keeping the data minimal is best. Allow the API to raise events, consumers to process them, and something like sockets/SignalR/Blazor to communicate with the customer/UI.
I have a video on sagas that you might find useful; I think it will help you understand how data can move between events.
https://youtu.be/Vwfngk0YhLs
You described the advantages of the event-based approach, but if I understand correctly, the question was not about that. It was about the request-reply pattern over messaging: the specific case where the first service sends a request message to the second, then blocks the thread and waits for a response message. How is that approach better than just making an API call?
Load balancing, strong contracts, no DNS, private services (no public APIs), easy A/B and blue/green deployments. Just a simple consumer, waiting to do its job. :)
One clear distinction for me is that request/response over messaging sidesteps the authentication/authorization a sender would otherwise need when calling the other microservice directly.
There are small differences between a traditional HTTP API call and the MassTransit request/response pattern, and in some ways they are similar:
- In both, the client is blocked while waiting.
- An HTTP call is simpler and more performant.
- MassTransit request/response gives you less coupling, at a small cost in performance overhead.
So, if you are already using MassTransit in your environment and your calls complete in under 30 seconds, you can benefit from MassTransit's looser coupling. If not, you can take full advantage of MassTransit's publish/subscribe patterns to make your implementation scalable.
If you don't already have message queue infrastructure in your environment, and your microservices are simple, short calls with little need for future scaling, stay with traditional HTTP API calls. It will make your life simpler.
Watch this for further information:
https://youtu.be/NjsoykEOkrk?si=GtD2B_sHIul6mcK6
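To make the tradeoff discussed above concrete, here is a minimal, hypothetical sketch (plain Java standard library only, not MassTransit's actual API) of the request/response-over-messaging idea: the client publishes a request to a queue and blocks on a reply queue with a timeout, while a consumer processes requests independently.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class RequestResponseSketch {
    // Request and reply travel over queues instead of a direct HTTP connection.
    static final BlockingQueue<String> requests = new ArrayBlockingQueue<>(16);
    static final BlockingQueue<String> replies = new ArrayBlockingQueue<>(16);

    // The "consumer service": takes a request off the queue, does the work,
    // and publishes a reply message.
    static void startConsumer() {
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String req = requests.take();
                    replies.put("processed:" + req);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.setDaemon(true);
        consumer.start();
    }

    // The "client": publish a request, then block on the reply queue with a
    // timeout. This is the part that resembles a synchronous API call.
    static String request(String payload, long timeoutMs) {
        try {
            requests.put(payload);
            String reply = replies.poll(timeoutMs, TimeUnit.MILLISECONDS);
            if (reply == null) {
                throw new IllegalStateException("request timed out");
            }
            return reply;
        } catch (InterruptedException e) {
            throw new IllegalStateException("interrupted", e);
        }
    }

    public static void main(String[] args) {
        startConsumer();
        System.out.println(request("order-42", 2000));
    }
}
```

Unlike a direct HTTP call, the queue decouples the two sides: the consumer can live anywhere the broker can reach, and multiple consumer instances can compete for requests, which is where the load-balancing and no-DNS arguments above come from.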
---
<?php
/**
 * Erdiko Authorize Test Suite
 *
 * Be sure to name your test cases MyClassTest.php,
 * where MyClass is the functionality being tested.
 */
namespace tests;

class AllTests
{
    public static function suite()
    {
        $suite = new \PHPUnit_Framework_TestSuite('ErdikoTests');
        $testFiles = AllTests::_getTestFiles();
        foreach ($testFiles as $file) {
            $suite->addTestFile($file);
        }
        return $suite;
    }

    private static function _getTestFiles()
    {
        $folders = glob('./phpunit/*');
        $tests = glob('./phpunit/*Test.php'); // Get top-level tests
        foreach ($folders as $folder) {
            $newTests = glob("{$folder}/*Test.php");
            $tests = array_merge($tests, $newTests);
        }
        return $tests;
    }
}
---
I'm in the process of setting up a new laptop computer for my wife.
The first thing I've done is to create a recovery DVD, in the event that Windows and all the device drivers need to be reinstalled. So far the disk creation and verification have taken over 2 hours (and it's not finished yet!).
Why is this waste of my time necessary? As far as I can see, the only benefit is to the OS publishers and device suppliers. They can save a few measly pence by not having to supply me with the CDs containing their software.
Packard Bell, from PC World.
It's a good idea, but most laptops come with a restore partition (as with my Acer): there is a function in the BIOS to restore the machine to its original state should things go wrong. It shouldn't take that long to create backup DVDs, though. Mine currently takes 6 DVDs to back up everything; I can then choose (from the BIOS) to restore from the DVDs I have created (all my data) or from the restore partition on the HDD.
This was led by Microsoft, who decided that to cut down on piracy they would stop major OEMs from supplying Windows discs with their PCs.
Some manufacturers rebelled and carried on, others supply the OS disc at additional cost.
Most manufacturers now supply a recovery disc that either has the OS & drivers on the disc as an image file, or has a small application that wipes the C: partition and then ghosts the OS back from a hidden partition. Neither is a perfect solution, but it does save users from having to manually reinstall the OS and drivers, which is not always an easy process, though it is much easier than it used to be under Win 95 and Win 98.
I remember when I worked in a service centre: the Ghost image server we used for Windows 98 machines died and couldn't be recovered, so we had to manually load the OS and drivers. We had about 8 or 9 different motherboards in our PCs, and they all needed different drivers; figuring out which combination of drivers went with which model was a nightmare.
I understand your frustration. When I first got my laptop, it had so much bloatware on it that it took longer to start up than the 5-year-old PC it replaced (which was built for me, and thus had a clean Windows install). Even if you uninstall it all, it still leaves old files and registry entries. Thankfully I got a Vista Express upgrade (as I bought just after Christmas), so I now have a Vista disc I can reinstall fresh from with no problems. Even so, many drivers that new computers come with are already out of date; it would be much better to just boot up on generic Windows drivers and then provide links to download sites, as this would ensure up-to-date drivers and would also provide a place to easily get future updates.
>>most laptops come with a restore partition (As with my Acer) there is a function in the BIOS to restore the machine to it's original state should things go wrong<<
Yes, crosstrainer, my wife's new laptop has that feature. But it's not much use if your HDD goes belly up (or, more accurately, head down)!
---
Cloud computing enables your nonprofit to automate all of your marketing messaging and target donors at every stage of the funnel. More than 90% of nonprofits are using cloud computing today, with half of them running their CRM system on a public cloud service.
Types of Microsoft Cloud Deployment
Microsoft Cloud is a cloud service platform used to build, deploy, and manage services and applications. The computing platform enables you to use the technology or software your business needs for a fee.
The tech giant offers three distinct cloud service models tailored to specific business needs:
- Software as a Service (SaaS): Microsoft’s Software as a Service allows you to use the software or the app you need. Microsoft manages and maintains the servers, middleware, networking, virtualization, operating system, runtime, and data, so there’s less for you to worry about.
- Platform as a Service (PaaS): With Platform as a Service, you rent everything except the app itself. You manage your data and apps while Microsoft handles the operating system, networking, storage, servers, runtime, etc. This gives you more control than the SaaS model.
- Infrastructure as a Service (IaaS): With Infrastructure as a Service, you can license and maintain the tools and the hardware. You can also manage the apps, data, middleware, runtime, and operating system, while Microsoft manages the virtualization, storage, servers, and networking.
Benefits of Public Cloud Service with Microsoft
Cost Effectiveness - Cloud solutions eliminate costly, inefficient and outdated information technology environments, which slow down the pace and drive up the cost of change. Instead of establishing and maintaining in-house infrastructure, including physical data centers and servers, cloud computing provides access to efficient technology services for every functional area and produces cost savings in hardware investment, ongoing maintenance and equipment.
Donor Outreach and Fundraising – Microsoft's low-cost, scalable infrastructure, along with various grant programs for nonprofits across the globe, helps nonprofits manage donor outreach and fundraising more efficiently while paying only for what they use. Less money spent on IT means more money to focus on social impact.
Cloud Security – Ultimately, one of the most important factors for any industry. Nonprofits face increasing compliance and privacy requirements, so security is always top of mind. Multi-level security configurations are available in the Microsoft Cloud, including application-level settings such as those in the Microsoft Dynamics 365 for Nonprofits module. Consumers are provided with security configurations, backups and disaster recovery, advanced user management, advanced network security, API authentication, and more.
Cloud Mobility – A recent study concluded that more than 42% of nonprofits agree that remote access is becoming a competitive advantage in the industry. These organisations have people working everywhere: constant off-site projects, remote writers and participants, and board members who are always on the road. With cloud computing, it's easy to access data from any location at any time. Better yet, the collaboration features of Microsoft Dynamics 365 for Nonprofits make it easy to collaborate and share data with users regardless of location, without compromising security.
---
Technically Speaking: How Continuous Fuzzing Secures Software While Increasing Developer Productivity
Motional’s Technically Speaking series takes a deep dive into how our top team of engineers and scientists are making driverless vehicles a safe, reliable, and accessible reality.
In Part 1, we introduced our approach to machine learning and how our Continuous Learning Framework allows us to train our autonomous vehicles faster. Part 2 discussed how we are building a world-class offline perception system to automatically label data sets. Part 3 announced Motional’s release of nuPlan, the world’s largest public dataset for prediction and planning. Part 4 explains how Motional uses multimodal prediction models to help reduce the unpredictability of human drivers. Part 5 looks at how closed-loop testing will strengthen Motional's planning function, making the ride safer and more comfortable for passengers. Part 6 explains how Motional is using Transformer Neural Networks to improve AV perception performance. In Part 7, Motional cybersecurity engineer Zhen Yu Ding explains how we use continuous fuzzing to save time.
Successful companies must minimize risk. Tabletop exercises are a common risk mitigation technique, where companies gameplay a problematic scenario that stress tests their internal policies and procedures. It allows the company to uncover gaps and weaknesses in a controlled environment versus the public domain.
Developing secure software for a public product has a similar goal. We want to make sure the version used by the public is as safe and secure as possible before deploying it – or if there is a glitch, that the system can handle it gracefully. At Motional, we employ a technique known as fuzz testing, or fuzzing, which stress tests our autonomous vehicle software through the use of randomized, arbitrary, or unexpected inputs.
By throwing curveballs and changeups at our software, we’re able to uncover defects and edge cases that could cause errors or corrupt the software. Motional believes that regular fuzz testing, with non-brittle tests, enhances productivity, and results in more robust software and a safer, more secure AV.
Fuzzing in a Nutshell
Fuzzing exercises software through the use of randomized, arbitrary, or unexpected inputs. For example, the software may expect dates formatted a certain way. Fuzzing finds out what happens if that formatting assumption is broken. It also lets us see the consequences of these errors, which could range from an error message to a full system crash. When we’re designing software that impacts how a vehicle operates, we want to be sure that the system can handle errors gracefully.
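As a toy illustration of that date-formatting example, the following sketch (plain Java; an assumption of ours, not anything Motional actually runs) throws random bytes at a strict date parser and counts how often the parser rejects them gracefully rather than failing in an unexpected way:

```java
import java.time.LocalDate;
import java.time.format.DateTimeParseException;
import java.util.Random;

public class DateFuzzSketch {
    // Feed `iterations` random 10-byte strings to LocalDate.parse and count
    // how many are rejected with the expected, graceful exception type.
    public static int fuzzDateParser(long seed, int iterations) {
        Random rng = new Random(seed);
        int gracefulRejections = 0;
        for (int i = 0; i < iterations; i++) {
            byte[] buf = new byte[10];
            rng.nextBytes(buf);
            String input = new String(buf); // arbitrary, almost always malformed
            try {
                LocalDate.parse(input);     // expects ISO-8601, e.g. 2024-01-31
            } catch (DateTimeParseException expected) {
                gracefulRejections++;       // graceful rejection: desired behavior
            }
            // Any other exception type would escape here and flag a real defect.
        }
        return gracefulRejections;
    }

    public static void main(String[] args) {
        System.out.println(fuzzDateParser(42L, 1000) + " malformed inputs rejected gracefully");
    }
}
```

A real fuzzer (libFuzzer, Jazzer, AFL) is far smarter than this, using coverage feedback to mutate inputs toward interesting code paths, but the contract is the same: random inputs in, and either graceful handling or a reported defect out.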
We can fuzz an individual function in code, a binary that reads from a file or stream, an API that accesses a service, or a port (physical or logical) on a device, to name a few. For practical examples on how to create fuzz tests, we suggest browsing through Google’s fuzzing tutorials on GitHub.
When stress testing software that’s written in memory-unsafe languages, such as C++, fuzzing is a powerful technique for finding memory corruption vulnerabilities and undefined behavior. Attackers may use such vulnerabilities to trigger software crashes that disrupt service, leak sensitive information, or even take control of the software system. Dynamic analysis tools such as the AddressSanitizer, MemorySanitizer, UndefinedBehaviorSanitizer, and Valgrind memcheck often complement fuzzing as these tools can detect more such vulnerabilities.
We can see evidence of fuzzing's effectiveness in Google's OSS-Fuzz project, where Google fuzz tests open source software. OSS-Fuzz has found tens of thousands of defects and security vulnerabilities in open source software.
National and international automotive cybersecurity standards, such as ISO 21434 and Singapore’s TR 68, recommend or require fuzz testing because of its ability to test rare edge cases and find defects. Motional’s first-in-the-industry AVCDL also requires fuzz testing.
Continuous Testing in the Face of Software Change
One-off fuzzing is useful for identifying weaknesses in a particular software version. However, due to our Continuous Learning Framework, Motional’s software changes daily, requiring constant testing and verification.
Continuous fuzzing is the fuzzing analogue of continuous testing, where tests are triggered periodically to catch new issues in code, ideally before integrating code changes into production. Continuous testing improves developer productivity by giving rapid feedback on code changes while still being built rather than uncovering issues at the end of a build. The actual frequency of testing is up to the developer. It could happen as soon as a line of code changes; at Motional, we often test on a daily cadence. Continuous fuzzing augments this battery of continuous tests.
The challenge is to set limits when fuzz testing. Unlike other testing techniques that use fixed inputs, fuzz testing by its nature draws random inputs from a nearly boundless space of scenarios. It is simply not possible to test every scenario; at some point, a test suite must decide within a reasonable amount of time whether a code change is acceptable. Recent research found that running fuzz tests for as little as 15 minutes can still be effective at detecting important defects and stopping them from reaching production.
An even faster approach is to repurpose fuzz tests as regression tests that execute on saved inputs generated from prior fuzzing. Many fuzz testing tools preserve inputs that have triggered previously unobserved software behavior. We can preserve a corpus of such inputs and create a new regression test by replaying the inputs on future fuzz tests. This enables fuzz tests to provide developers with rapid feedback on software correctness.
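A minimal sketch of that corpus-replay idea might look like this (plain Java; the corpus values and the date-parsing target are purely illustrative assumptions, not Motional's code):

```java
import java.time.LocalDate;
import java.time.format.DateTimeParseException;
import java.util.List;

public class CorpusReplaySketch {
    // Hypothetical corpus: inputs saved from earlier fuzzing runs because they
    // once triggered previously unobserved behavior (values are illustrative).
    static final List<String> CORPUS = List.of("", "not-a-date", "2024-13-99", "9999-12-31");

    // Replay the corpus through the target. Return true if every input is
    // either handled or rejected gracefully; false signals a regression.
    public static boolean replay() {
        for (String input : CORPUS) {
            try {
                LocalDate.parse(input);
            } catch (DateTimeParseException graceful) {
                // Graceful rejection of malformed input is acceptable behavior.
            } catch (RuntimeException defect) {
                return false; // worse than graceful rejection: a regression
            }
        }
        return true;
    }
}
```

Because the corpus is fixed, this replay runs in milliseconds and fits naturally into a per-commit test suite, while the slower randomized fuzzing continues to grow the corpus in the background.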
Keeping Fuzz Tests Maintainable
Continuous testing has taught software engineers the importance of keeping a test suite's maintenance cost low. Maintenance costs, which can include time lost adjusting tests as well as monetary investment, are a burden on developer productivity. And if continuous testing proves costly, that could jeopardize acceptance of continuous fuzzing, especially if fuzzing is new to developers.
A particularly dangerous maintenance problem is brittle tests.
A brittle test can fail even under correctly implemented changes that preserve existing behavior (e.g., refactorings, adding new features). When brittle tests raise these false alarms, developers must pay a maintenance cost to rectify the tests, which diminishes the value of tests as a source of rapid feedback on software correctness. Brittle tests risk increasing developer frustration and reducing confidence in the value of testing altogether. Avoiding brittleness is therefore important for developer acceptance of continuous fuzzing.
Testing-only code is also a potential source of brittleness and reduced developer productivity. While some testing-only code is unavoidable, such code incurs a maintenance cost on development, and testing-only code should provide sufficient value to compensate for its maintenance cost. The practice of using conditional compilation to create a fuzzing-friendly build and creating test doubles for fuzzing produces testing-only code that developers need to maintain in sync with production code. Neglecting to maintain testing-only code can mean that the software under test increasingly diverges from production software, resulting in test results that increasingly do not reflect actual software behavior.
For a deeper discussion of brittle tests and their impact on developer productivity, we recommend the Unit Testing and Test Doubles chapters from Software Engineering at Google.
We may find opportunities for easier or more performant fuzzing that come at the cost of brittleness. As previously mentioned, testing only via a public software interface is preferable to depending on implementation details, as tests that depend on implementation details are liable to break under correct modifying changes. Although a software component’s public interface might not be amenable to fuzzing, the public interface might use private helper functions that are easier to fuzz.
Consider the following example:
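The example code itself did not survive in this copy of the article, so below is a hypothetical reconstruction, sketched in Java rather than the article's C++ context, using the names the prose refers to (processRequestFromNetwork, processRequest, isWellFormed):

```java
// Hypothetical reconstruction of the example described in the text;
// the original code block is missing, so details are assumptions.
public class RequestHandler {
    // Public interface: reads raw bytes from a network connection, then
    // delegates to the private helper.
    public String processRequestFromNetwork(byte[] raw) {
        String request = new String(raw); // stands in for network decoding
        return processRequest(request);
    }

    // Private helper: easy to fuzz directly, but doing so couples the fuzz
    // test to an implementation detail, including the isWellFormed check.
    private String processRequest(String request) {
        if (!isWellFormed(request)) {
            return "error: malformed request";
        }
        return "ok: " + request;
    }

    private boolean isWellFormed(String request) {
        return request != null && !request.isEmpty()
                && request.chars().allMatch(c -> c < 128);
    }
}
```

The tension the article describes: a fuzz test can pass random strings straight into processRequest cheaply, but the moment a refactoring moves the isWellFormed check into the callers, that fuzz test starts rejecting inputs for the wrong reason and must be rewritten.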
The public method processRequestFromNetwork accepts inputs from a network connection. Fuzzing the network connection with a network fuzzer is both more difficult and incurs a performance penalty. The private method processRequest appears much more amenable to fuzzing, as a fuzz test can pass randomized requests directly into the function. However, fuzzing the private processRequest makes the fuzz test dependent on the implementation of processRequestFromNetwork. Suppose a future code change refactors the isWellFormed check out of processRequest and into its callers, including processRequestFromNetwork. The responsibility of checking the request's well-formedness moves out of the private helper function and into its callers. As a result, the refactoring also requires modifying the fuzz test targeting processRequest to pass only well-formed requests, creating a maintenance cost for the developer implementing the refactoring. Is faster fuzzing without a network fuzzer worth the added brittleness of depending on implementation details? We do not prescribe a solution, but we encourage those implementing fuzz tests to consider the tradeoff.
Suppose instead we decide to use a test double to simulate the network I/O. Perhaps a test double already exists for regression testing, but it is implemented with a heavyweight mocking framework whose features, while useful for regression testing, only serve to needlessly consume CPU cycles when fuzzing. We could implement a new, lightweight test double for the fuzz test to simulate network I/O. A new test double, however, creates additional testing-only code that developers must keep in sync with the actual software the double is mimicking, which again adds maintenance cost.
If performing a one-off fuzzing campaign, we don't need to worry about maintaining our fuzz tests in the face of future code changes. But when continuously fuzzing, we should consider how the fuzz tests will help or hinder development. We urge those implementing continuous fuzzing to consider the dimension of test maintainability and strive to improve developer productivity.
The more software that’s introduced into a vehicle, especially a vehicle that relies heavily on data collection, the more risk is introduced into the overall system. The risk could come from vulnerable code, faulty sensors, or a cybersecurity attack.
Reducing risk through continuous testing is critical. It’s far better to uncover weaknesses in a controlled environment than through a public incident.
Testing doesn’t have to be a drag on development. When done well, security testing practices such as continuous fuzzing serve as a net gain on developer productivity while making software more safe, secure, and reliable.
Laboni Sarker, a Ph.D. student at the University of California, Santa Barbara, contributed to this research.
Klooster, Thijs; Turkmen, Fatih; Broenink, Gerben; Hove, Ruben; Böhme, Marcel (2022). Effectiveness and Scalability of Fuzzing Techniques in CI/CD Pipelines. doi:10.48550/arXiv.2205.14964.
Vyukov, Dmitry; Baugue, Romain. Why Go Fuzzing?
Pats, Yevgeny. Why (Continuous) Fuzzing?
The Importance of Maintainability, Software Engineering at Google.
Unit Testing, Software Engineering at Google.
Test Doubles, Software Engineering at Google.
---
So here's what I used:
- Lynda.com (www.lynda.com) - a paid, subscription-based site that is essentially a library of videos on a large variety of subjects, not just programming. Depending on your subscription, you can access extra resources such as source files to follow along with the videos.
- Code School (www.codeschool.com) - a freemium site that specializes in programming languages. A lot of their introductory courses are free, and they operate on an achievement/gaming model: you earn badges for completing courses and get discounts on intermediate courses as you finish others.
- Codecademy (www.codecademy.com) - a currently free site that, like Code School, awards badges for completing courses. They're still building out their site.
- Coursera (www.coursera.org) - a free site that hosts classes from multiple universities online.
We'll start with Lynda, the first site that I used. Lynda reminded me a lot of some of my college lectures: some of the courses were a mix of PowerPoint presentation recordings and demos of the product/code in action. I found myself taking notes on the side and pausing the video at multiple points to digest what I had just learned. But I didn't quite connect with the material; rewinding and rewatching portions of the video turned out to be a much more common task than I expected (or perhaps I'm just not the brightest bulb in the box...). Getting around the site itself was fine, but the big problem with Lynda for me was that I just wasn't engaged by the videos.
Finally, there's Coursera. Like Lynda, it relied heavily on videos, but I felt like I was actually back in school - the course I took was HCI from Stanford. It was a great refresher course, and I definitely learned a lot of new things. What was interesting was keeping up with the homework and quizzes, especially the peer grading. It forced the students to go through an exercise where they learned what a good and bad homework sample looked like so that when they went back to review their own assignment, they would be (in theory) honest about their self evaluations and fair in evaluating others.
Exploring these sites was a great exercise in user experience, but it also taught me about myself. My learning style was refined along the way, and I found that some approaches worked much better for me than others. There are many different interpretations of online education, and it's a field that's still evolving as technology keeps growing and classes become less about traditional desks and chairs and more about having a decent Internet connection. In the end, the site that worked best for me was Codecademy: the interactive exercises are what did it for me, along with the site's clean design and easy learning curve. I highly recommend all four sites, though what worked for me may not work for you.
---
What does bundle mean in Android?
I am new to android application development, and can't understand what does bundle actually do for us.
Can anyone explain it for me?
I am new to android application development and can't understand what
does bundle actually do for us. Can anyone explain it for me?
In my own words: you can imagine it like a Map that stores primitive data types and objects as key-value pairs.
A Bundle is most often used for passing data between Activities. It provides put<Type>() and get<Type>() methods for storing and retrieving data.
The Bundle parameter of the Activity's onCreate() lifecycle method can also be used to restore data when the device orientation changes (in that case the activity is destroyed and recreated, and onCreate() receives a non-null Bundle).
You can read more about Bundle and its methods in the reference at developer.android.com; start there, then build some demo applications to get experience.
Demonstration examples of usage:
Passing primitive data types between Activities:
Intent i = new Intent(this, TargetActivity.class);
Bundle dataMap = new Bundle();
dataMap.putString("stringKey", "value");
dataMap.putInt("intKey", 1); // use distinct keys; a second put with the same key overwrites the first
i.putExtras(dataMap);
startActivity(i);
Passing a list of values between Activities:
Intent i = new Intent(this, TargetActivity.class);
Bundle dataMap = new Bundle();
ArrayList<String> s = new ArrayList<String>();
s.add("Hello");
dataMap.putStringArrayList("stringListKey", s); // also works for Integer and CharSequence lists
i.putExtras(dataMap);
startActivity(i);
Passing Serialized objects through Activities:
public class Foo implements Serializable {
    private static final long serialVersionUID = 1L;
    private ArrayList<FooObject> foos;

    public Foo(ArrayList<FooObject> foos) {
        this.foos = foos;
    }

    public ArrayList<FooObject> getFoos() {
        return this.foos;
    }
}

public class FooObject implements Serializable {
    private static final long serialVersionUID = 1L;
    private int id;

    public FooObject(int id) {
        this.id = id;
    }

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }
}
Then:
Intent i = new Intent(this, TargetActivity.class);
Bundle dataMap = new Bundle();
ArrayList<FooObject> foos = new ArrayList<FooObject>();
foos.add(new FooObject(1));
dataMap.putSerializable("serializableKey", new Foo(foos));
i.putExtras(dataMap);
startActivity(i);
Pass Parcelable objects through Activities:
There is much more code so here is article how to do it:
Parcel data to pass between Activities using Parcelable classes
How to retrieve data in target Activity:
There is one key method, getIntent(), which returns the Intent that started the current Activity (including any extended data it carries).
So:
Bundle dataFromIntent = getIntent().getExtras();
if (dataFromIntent != null) {
    String stringValue = dataFromIntent.getString("stringKey");
    int intValue = dataFromIntent.getInt("intKey");
    // getSerializable() returns Serializable, so we need to cast to the appropriate type.
    Foo fooObject = (Foo) dataFromIntent.getSerializable("serializableKey");
    ArrayList<String> stringArray = dataFromIntent.getStringArrayList("stringListKey");
}
Usage of Bundle as parameter of onCreate() method:
You store data in the onSaveInstanceState() method as below:
@Override
public void onSaveInstanceState(Bundle map) {
    super.onSaveInstanceState(map);
    map.putString("stringKey", "value");
    map.putInt("intKey", 1);
}
And restore it in the onCreate() method (in this case the Bundle parameter is not null) as below:
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    if (savedInstanceState != null) {
        String stringValue = savedInstanceState.getString("stringKey");
        int intValue = savedInstanceState.getInt("intKey"); // getInt(), not getString(), for an int value
    }
    ...
}
Note: You can also restore data in the onRestoreInstanceState() method, but that is less common (it is called after onStart(), whereas onCreate() is called before).
In general English, a bundle is "a collection of things, or a quantity of material, tied or wrapped up together."
In the same way, in Android a Bundle is a collection of keys and values used to store data.
A Bundle is generally used for passing data between various components. The Bundle can be retrieved from an Intent via the getExtras() method.
You can also add data directly via the overloaded putExtra() methods of the Intent object. Extras are key/value pairs; the key is always of type String, and as the value you can use the primitive data types.
The receiving component can retrieve the Intent that started it via the getIntent() method, read its action and data URI via getAction() and getData(), and call getIntent().getExtras() to get the extra data.
MainActivity
Intent intent = new Intent(MainActivity.this, SecondActivity.class);
Bundle bundle = new Bundle();
bundle.putString("key", myValue);
intent.putExtras(bundle);
startActivity(intent);
SecondActivity
Bundle bundle = getIntent().getExtras();
String myValue = bundle.getString("key"); // key names are case-sensitive, so they must match exactly
"A collection of things, or a quantity of material, tied or wrapped up together" - that is the dictionary meaning. In the same way, a Bundle is a collection of data, which may be of any type (String, int, float, boolean, or any Serializable data). We can share and save the data of one Activity to another using a Bundle.
Consider it as a Bundle of data, used while passing data from one Activity to another.
The documentation defines it as
"A mapping from String values to various Parcelable types."
You can put data inside the Bundle and then pass the Bundle across several activities. This is handy because you don't need to pass values individually: you put all the data in the Bundle and then just pass the Bundle.
A mapping from String values to various Parcelable types.
Example:
Intent mIntent = new Intent(this, Example.class);
Bundle mBundle = new Bundle();
mBundle.putString(key, value);
mIntent.putExtras(mBundle);
Send value from one activity to another activity.
Thanks for the code sample... very useful
It's literally a bundle of things (information): you put stuff in there (Strings, Integers, etc.) and pass it all as a single parameter (the bundle) when using an intent, for instance.
Then your target (activity) can get them out again and read the ID, mode, setting etc.
Thanks for your answer Nanne. So they are just like objects? We pass them while starting an activity and receive their values on another activity... correct me if I am wrong.
It is an object (it's a certain class) where you can put other objects. You have functions to add Integers, Strings etc with a name, and retrieve them again.
Now I have clear idea thank you so much.
So I see a link on reddit today to an article entitled No, your code is not so great that it doesn’t need comments. The article site was down so I started reading the comments and was very disheartened to see an actual debate over whether pro developers put comments in or left them out.
For example, we have this from “ckwop”:
I find that the number of comments a developer puts in to their code is inversely proportional to the amount of experience they have.
I believe the reason for this is two fold.
Firstly, as you become more experienced you become fluent in reading code. You’re not expending mental energy trying to grok the syntax and the flow of the program. It just comes naturally.
Secondly, as you become more experienced, you learn how to layout you code in a more readable way. You name classes and variables such that the code is, for the most part, self-documenting.
I don’t quite know how to describe the absolute FAIL in this particular statement. While he does go on to say:
The only comments left in programs written by experienced programmers are comments which document decisions. Why one approach was taken over another, for example.
He still doesn’t redeem himself. Comments like his betray a staggering amount of arrogance that poisons the minds of developers and turns otherwise good programmers into spaghetti-spewing monsters that take on government jobs so it’ll take longer for them to get fired.
This person (and sadly many others) assert that well written code is intuitive. These people are the same people that try and tell me Python is perfect for everything and I shouldn’t work in any other language unless I’m working in the embedded space, in which case I should write myself a Python-To-Assembly script prefixed with the cute little “py”.
Comments are meant to provide a quick overview of code flow and reasoning behind particular decisions. Comments should be clear and concise and allow a developer to spend 5 minutes figuring out where in the code base they need to be to fix the rounding issue rather than spending 40 minutes tracing program flow.
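As a hypothetical illustration (the `Money` class, its rounding policy, and the bug it guards against are all invented for this example), here is the kind of "why" comment that saves those 40 minutes:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

final class Money {
    // WHY we round through BigDecimal: a double cannot represent an
    // amount like 2.675 exactly (it is stored as 2.67499...), so
    // rounding via double silently loses a cent on such totals.
    // Policy here is round-half-up on the decimal string.
    static long toCents(String amount) {
        return new BigDecimal(amount)
                .setScale(2, RoundingMode.HALF_UP)
                .movePointRight(2)
                .longValueExact();
    }
}
```

The code alone shows only the mechanics; the comment records the decision, which is exactly the part a future reader cannot reconstruct from the code.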
Comments are meant to help developers who either have limited experience in a language, who are new to a language, or who haven’t used a particular language in quite some time. Your Haskell code may be clear and simple to you, but I can’t read a single damned line of it: comment it!
Comments are meant for the developer who puts down a project to focus on something else for a few months, only to pick the original code back up. The sign of a mature developer is accepting the fact that when he puts down code for a few days, he forgets the “why”. No developer is immune to this, and anyone who tries to tell you they are different is not worth the time you wasted listening to them.
So, to “ckwop” and all the other developers out there that think code should speak for itself I have a few things to say:
- If code could speak for itself, it wouldn’t be called code
- You will not remember your design decisions, and in some cases even code flow, after a month
- Comments make it much easier for me to debug when your company hires me to fix your failures both as a programmer and as a human being.
In closing: The debate has long been over, comment your fucking code.
An Undergrad's Guide To Salary Negotiation
Note: the information below is based on my personal experiences and those of some friends I have talked to recently. This post is focussed on Graduate Software Engineer or SDE-1 roles.
Eventually there comes a point in your career where you will have to face questions like:
- "How much salary do you expect for this role?"
- "How much salary was/is your past/current employer paying you?"
- "How much did your friends get in their PPOs?"
To be able to answer these questions you need to be able to:
- Be aware of the salary trends across different companies for the role that you are applying for.
- Have a little confidence in yourself; if they are looking to hire you, there is always some room to negotiate a higher salary.
Firstly you need to do some research as to how much various companies pay engineers fresh out of college. I mostly use Glassdoor to look these up, but you should also try LinkedIn's salary insights and Paysa for companies based abroad. Most of these listed salaries can be averages or outdated, so treat the numbers as approximations. You will see roughly 4 base salary brackets:
- 4-8 lakhs (these will be mostly mass recruiters like Infosys, Dell, I remember seeing one DirectI posting with a 6 lakh salary which was shocking)
- 8-12 lakhs (this space would be mostly filled by young Indian startups that have acquired some initial funding)
- 15-20 lakhs (established Indian startups, open source startups, and Giants like Apple, Microsoft, Amazon; got these numbers from Linkedin salary insights)
- 20+ lakhs (mostly unicorn startups, or established remote open source companies that can pay you in $$$)
Category 1 companies are mostly evil so salary negotiation is out of the question. Since I only have experience of working/interviewing at companies belonging to the other 3 categories I will be talking about them instead. Category 2 companies will probably not be able to give you a higher salary than what they have already offered, since they are usually struggling to survive and trying to burn as little as possible of the money left in their account. You can always try negotiating, but there is little hope in this case.
Categories 3 and 4 are where you have some room for haggling. Most of these companies will have solid VC funding or a steady stream of revenue or both, and the team size will be small compared to the scale at which they operate (anywhere from 20-300 people in the org), so they try to hire the best engineers and are willing to pay competitive salaries. The salary structure at these companies consists of:
- Base salary (what I have listed above): This is the most important component since it will be your steady source of income. You might be asked questions like the ones listed at the start of this post. If you have been interning at such a company and they want to convert you to full time, or if you have interviewed with them and they are interested in offering you a job, then you know that you are worth 15+ or 20+ lakhs and can ask for that much. If you have been interning there, try asking current employees their salaries (most of them won't tell you, but still try) if you are friends with them, or ask interns who might have been offered the job; this will help you make sure that the offer you are about to receive is the same as your peers'. If it isn't, and you were offered a lower salary than your peers without any explanation, this can be a red flag, since the company is not transparent about its salary policy. So try to negotiate and ask them if the 15 can be turned into 16-17 lakhs.
- Health insurance: You can simply buy your own health insurance and it won't cost you much, but companies usually offer good plans that can cover you and your family and are worth around 3-5 lakhs. So it's not really that important, but if you are ever stuck between similar offers (assuming that you are interested in the work being done by both companies) you can take insurance as the second most important factor in making the decision.
- Equity: If your company hasn't gone public yet then this is of zero value. You may never know if your company will go public or be acquired by some other larger company, so you will probably never be able to turn these shares into cash. In case you are working for a large conglomerate like Amazon or Google, which are already public, the equity that you will be offered will be worth a lot but will have certain restrictions, e.g. you will only be able to cash it out if you work at the company for X number of years. So if you don't see yourself at this company long term then don't really worry about equity.
- Other benefits like reimbursement for books, courses, gym, etc. With the kind of money that you will be making these won't really matter, treat them like free goodies.
Note: Large conglomerates publicize their compensation packages including all of these benefits and stock options along with the base salary. So in case someone tells you they got a 40 lakh package at Microsoft, that actually translates to 18-20 lakhs of base salary + the rest in stock options and insurance (got to know this from a friend of mine who got that exact offer). The same goes for all the other 50 lakh and 1 crore packages that you keep hearing about in the news.
So that's it, you should know what you are worth, keep tracking the salary trends in the market and in your network of developers and don't be shy/afraid of negotiating for a better package.
How far do you need to go when lending a hand to a neighbor?
A reader we’re calling Monty who lives in New England wrote that he regularly likes to help out his elderly neighbors. Sometimes the help involves shoveling their sidewalks and steps after a snowstorm. Occasionally, he will help carry bags of groceries when he sees a neighbor unloading the car. Monty wrote that he likes to help out someone who needs the help and he also does it because he would like to think his neighbors would do the same for him.
Even when the help has gotten a bit more involved and included use of tools to put together a piece of furniture or to saw up a fallen tree branch after a storm, Monty has stepped in.
But Monty wrote that while he enjoys helping out, he doesn’t like to linger and “chit chat” after the work is done. And this is where Monty’s question about the right thing to do comes into play.
“One of my neighbors always insists that I sit and talk once the chore is done,” wrote Monty. “He’s a bit older and can’t do some of the things he used to do himself, so he gives me a call and I go over to help. I’m glad to help him out.”
Monty indicated that as the work is being done, he and the neighbor engage in long discussions about everything from the neighborhood and sports to politics and personal finances.
But whenever Monty tries to pack up and go home after the work is done, this neighbor insists he stay and talk for a while more. Occasionally, when Monty says he has to go home, the neighbor will respond with something like: “So now you’re too good to sit and talk?”
“I don’t want to insult him and I don’t want to feel bad about leaving,” wrote Monty. He just doesn’t enjoy sitting around chatting when he could be doing other stuff. “Is it wrong for me to tell him that I’m glad to help, but I don’t want to hang out and talk after the work is done?”
There could be all sorts of reasons Monty’s neighbor wants to continue talking. He may be a genuinely gregarious person. He may also be lonely and crave company. But Monty has no obligation to stick around and talk if he doesn’t want to. That he regularly responds to requests for help, seems to enjoy helping out, and talks with his neighbor while the work is being done is a good thing and suggests Monty is a good neighbor.
The right thing for Monty to do is to thank his neighbor for the invitation to sit and chat, but to decline the offer if he really would prefer not to. Monty should feel no guilt or remorse about doing this. And the right thing for Monty’s neighbor is to refrain from the comments that suggest Monty is doing something wrong by not wanting to sit around and talk. That Monty took the time to help should be more than enough to suggest to his neighbor that he cares about him enough to want to help.
Jeffrey L. Seglin, author of The Simple Art of Business Etiquette: How to Rise to the Top by Playing Nice, is a senior lecturer in public policy, emeritus, at Harvard's Kennedy School. He is also the administrator of www.jeffreyseglin.com, a blog focused on ethical issues.
Do you have ethical questions that you need to have answered? Send them to email@example.com.
Follow him on Twitter @jseglin
Writing this blog today reminds me of how different a world I'm in. Back home, I'm sure images of the towers falling pervade every channel, and the front cover of the New York Times has every story dedicated to the tragedy. Here we are so completely removed. No news, no TV, and only ten Americans.
But, I'm getting ahead of myself...
We began in the JFK airport, where we met up and boarded an eight hour flight for Zürich, Switzerland. The plane service was fantastic! I watched two movies and had a delicious dinner. When we arrived we stayed at the airport for about two hours, and then boarded another eight hour flight for Nairobi. Spending all that time on airplanes was pretty disorienting, and it would've been close to intolerable if the plane service wasn't so excellent.
Then, we stayed at a hotel in the suburbs of Nairobi and ate dinner. It was dark, so we couldn't see very much of Nairobi, but what we did see was incredible. The highways were totally packed with people (driving on the wrong side of the road!), and the driving was so crazy that all of the stop lights were deactivated, because no one would pay attention to them.
We flew to Lodwar the following day, which was quite a different experience. We went through security (who didn't make us pour out all of our water) and got on a small propeller plane.
The two hour flight felt like a roller coaster at times, but we managed to land safely. The weather getting on the plane must have been high 60s, and rainy. The weather getting off was about 100 degrees, dry, and sandy. The culture difference was really too much to describe. Everyone was living in grass huts, and brightly dressed women were walking around carrying things on their heads. People gathered around us trying to sell us things, and goats were wandering around the town at will.
The drive from Lodwar to TBI was about an hour, on (very!) bumpy dirt roads. The level of vegetation and human settlement gradually diminished to nearly nothing, as we drove further and further into what I can only describe as the middle of nowhere. Eventually, we came upon the compound of buildings that is to be our home for the next ten weeks. It's situated near a big muddy river, which provides water for all of the surrounding trees.
Here's our dorm:
We were immediately served a delicious lunch and shown to our rooms. We each get our own room, our clothes are washed for us, and our beds are made for us. So it basically feels like living in a resort. The heat is intense, but so far it's manageable. One just cannot be lax about drinking water, and sunscreen is a must. We slept outside on the veranda in mosquito nets, and I heard people chanting and beating drums in the distance as I dozed off.
My computer gave out last night, which explains why I'm posting this a day late. It seems to be working alright now. My plan is to post something every Sunday that I'm here.
This is #1!
UPDATE: Scroll down this thread for a discussion, but the gist of the findings are that the Scorpion V2 is driven high enough on Max to trip the PTC safety circuits of most primary CR123A cells, when run over a sustained period. Although external cooling helps delay this effect, it is still not something you routinely want to do.
The second point is that there is a marked difference in the behaviour of made-in-the-USA Duracell/Panasonic cells compared to made-in-China Titanium Innovation cells. The Duracell/Panasonic cells appear to heat up far more quickly than the Titanium Innovations cells in this situation (and thus trip their PTCs earlier). Again, scroll down for a greater discussion, with comparative data from my testing and HKJs.
In some of my high-output lights tested on 4xCR123A, I have noticed an interesting step-down pattern at various points into the run. For example, see the Thrunite Catapult V2 SST-50 and Olight M31 SST-50 runs in this review:
Most lights don't show this behavior, and I always thought it was some sort of thermal shut-down sensor in the circuit in those lights (i.e. responding to increased temperature due to the resistance of CR123As).
It seems the Thrunite Scorpion V2 is showing the same behavior, but it is quite variable depending on the battery brand. I note that this light is not reported as having a thermal sensor.
Here is a comparison in the Scorpion V2 on Max using my standard Titanium Innovations CR123As and Panasonic made-in-USA CR123A, with and without external cooling (i.e. a small fan near the light). Note the Scorpion V2's LED should be driven ~2.5A on max (in its continuously-variable mode).
The Titanium Innovation cells perform consistently for a lot longer - with cooling, there was no sign of a step-down (and even without cooling, it only happened near the end of the run).
Cooling also didn't make much of a difference on the Panasonic runs - except I got barely ~5 mins to step down without it, and ~6 mins with it. The only real difference cooling seems to make is in the rate of the recovery phase.
So it would appear that the Panasonic cells can't handle the current load and heat up faster than the Titanium Innovations cells (or their PTC circuits kick in faster?). This in turn either triggers some sort of stepdown circuit/resistor, or causes some sort of battery "hiccup".
Thrunite informs me that they use a resistor which varies with temperature, so that when the temperature is high, the resistor will drive the current lower. As the temp drops in the CR123As (i.e. while being driven less hard), output gradually recovers. That makes sense to me on the recovery side, but I don't quite see why the drop-off would be so rapid.
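The mechanism Thrunite describes can be sketched as a toy feedback loop. Every constant below is invented for illustration, not measured from the Scorpion V2: hotter cells raise the resistance, which pulls the drive current down, which lets the cells cool, so output partially recovers toward a lower steady state.

```java
final class ThermalStepdown {
    // Simulate drive current over time in one-minute steps.
    static double[] simulateCurrent(int minutes) {
        double ambient = 25.0;            // room temperature, deg C
        double temp = ambient;            // cell temperature, deg C
        double[] current = new double[minutes];
        for (int t = 0; t < minutes; t++) {
            // PTC-like behaviour: resistance grows with cell temperature.
            double r = 1.0 + 0.05 * (temp - ambient);
            current[t] = 2.5 / r;         // drive current, amps
            // I^2 heating versus cooling back toward ambient.
            temp += 4.0 * current[t] * current[t] - 0.5 * (temp - ambient);
        }
        return current;
    }
}
```

In this sketch the current starts at the full 2.5 A, drops sharply as the cells heat, then settles around a lower equilibrium; a real PTC safety circuit trips far more abruptly, which may explain why the measured drop-off is so rapid compared to the gradual recovery.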
Any ideas as to what is going on here? I'm also curious as to the rather large difference between the two brands.
EDIT: I should add that the ambient room temperature rose during the course of the day as I was doing these runtimes, which may be a contributing factor between the brands (although I would think a small one). It was ~22C room temp for the cooled Titanium Innovation cells, ~23C for the uncooled TI cells, ~24C for the uncooled Panasonic, and ~25C for the cooled Pannies.
Functionality of a horseshoe orbit around a gas giant in a binary star system
So I'm starting a book and this idea popped into my head. Horseshoe orbits are really cool and all, but what if you throw in another star? Of course it did have a reason: I want the main civilization that I'm focusing on to have astrology deeply ingrained in its culture and people.
As such I want to fill the sky with as much as possible. But I haven't been able to find anything on this. The binary stars might not even affect anything, but I wanted to make sure before going further. Again I haven't found anything saying that the formation is impossible, but I also haven't found anything saying otherwise.
A horseshoe orbit only appears to be a horseshoe from the point of view of another body which shares a similar orbit with it. Both bodies are in perfectly normal elliptical orbits around the star. If such elliptical orbits are possible in your system, then "horseshoe" orbits are too.
A horseshoe orbit is a co-orbital configuration, so if a planet is in a horseshoe orbit relative to a gas giant, both the smaller planet and the gas giant are actually orbiting the parent star.
Basically there are two ways to get a binary-star system with stable planetary orbits: the two stars are very close (and planets orbit them as a single source of gravity) or one star has the planets and the other is comparatively very distant.
In general, multiple systems tend to be "divisible" into two-body systems.
For example, Alpha Centauri is two close stars, A and B, orbited by a vastly more distant star, Proxima, which has a planet.
the A/B system is largely unaffected by Proxima
Proxima orbits A+B as essentially a single gravity source
Proxima's planet orbits Proxima largely uninfluenced by A/B
If you make the second star very distant, it wouldn't be visible as a "sun disk", but would be an incredibly bright point of light. (A Sun-like star at 1,000 AU would be 1,000,000 times dimmer than the Sun as seen from Earth - between the brightness of a full moon and a half moon, but concentrated into a single point.)
[The Sun is about 400,000 times brighter than the full Moon, as seen from Earth, and (counter-intuitively) the full Moon is much more than twice as bright as the half Moon.]
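The brightness figure above follows directly from the inverse-square law; the check below uses standard reference magnitudes (Sun ≈ -26.7, full Moon ≈ -12.7 as seen from Earth):

```java
final class CompanionBrightness {
    // Flux relative to the same star seen from 1 AU (inverse-square law).
    static double relativeFlux(double distanceAu) {
        return 1.0 / (distanceAu * distanceAu);
    }

    // Magnitude difference for a flux ratio: dm = -2.5 * log10(ratio).
    static double deltaMagnitude(double fluxRatio) {
        return -2.5 * Math.log10(fluxRatio);
    }

    public static void main(String[] args) {
        double sunMag = -26.7; // Sun's apparent magnitude from Earth
        double companionMag = sunMag + deltaMagnitude(relativeFlux(1000.0));
        // 1/1000^2 = 1e-6, so dm = 15 and the companion shines at about
        // magnitude -11.7: dimmer than the full Moon (~ -12.7), but far
        // brighter than the half Moon, all from a single point of light.
        System.out.println(companionMag);
    }
}
```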
Not directly related, perhaps, but these sorts of configurations may not be stable over really long timespans (various asteroids seem to wander in and out of co-orbital configurations with Earth), so if a habitable planet is now in a co-orbital configuration with a gas giant, it may end up somewhere else 'soon', possibly outside the habitable zone...
I am rather sure that the only way to make it work is to have a binary stellar system where the two stars are not closer to each other than the gas giant is to the main star.
If the second star orbits the first one from far away, it might have negligible influence on perturbing the orbital motion of the horseshoe orbiting body.
If the second star orbits the first one closely, I have the impression that the gravitational perturbation won't let the horseshoe orbit exist for very long time.
Horseshoe orbits are normal elliptical orbits, with the "horseshoe" only occurring from the perspective of the two objects. Binary star systems can often be treated like normal solitary stars, by using their common center of mass (barycenter) in the same way you would use a single stars' center of mass.
The only exception would be if the planets were orbiting one star rather than both (i.e. an "S-type" orbit); since you want a habitable planet, this would probably only happen if one star was much larger than the other, or the two were (relatively) distant. Even in this case, the planets could have stable elliptical orbits and thus be in a horseshoe orbit with regards to one another.
The horseshoe movement is purely subjective to the companion planet, but yes it would make for GREAT mythology. The time between meetings is great, many years to a few centuries. It is regular, and predictable, and has a cyclic nature longer than human lifespan, possibly longer than accurate oral traditions.
"My son, in the time of ENkwa, when my grandfather's grandfather walked the plains, the Blue Star did show in the evening. And it blessed the rains, and the Tribe flourished. But then did the evil T'Ka'Wa insult ENkwa, and he withdrew from us, fading away in the twilight.
Yet LO! The Blue Star approaches again, but now it appears in the morning. Does ENkwa return to bless us again, or does He bring death, and destruction, and the wailing of maidens, and bad breath, and the sour corncob, for He approaches not from his usual place, but from {gasp} The Other Side?"
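The "time between meetings" mentioned above scales like a synodic period. A minimal sketch (periods in years; the 1.00/1.01 values are hypothetical):

```java
final class Synodic {
    // How often two bodies on nearly identical orbits would lap each
    // other if they moved independently. In a true horseshoe
    // configuration the close approach reverses rather than completing,
    // but this still sets the timescale between encounters.
    static double synodicPeriodYears(double t1, double t2) {
        return Math.abs(1.0 / (1.0 / t1 - 1.0 / t2));
    }

    public static void main(String[] args) {
        // Periods of 1.00 and 1.01 years give an encounter roughly once
        // a century: comfortably longer than a human lifespan, as the
        // mythology requires.
        System.out.println(synodicPeriodYears(1.00, 1.01));
    }
}
```

The closer the two periods, the rarer the meetings, so tuning the period ratio lets you dial the "Blue Star" cycle anywhere from years to centuries.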
Ok, great setting. Now, can we make this work in a Binary solar system?
As others have mentioned, there are three scenarios.
The second star is way far out of the solar system. The orbital periods of the planets are many, many times shorter than the orbital period of the binary star system.
In this case, the second star is a very slowly migrating beacon in the sky, and has no short or long term deleterious effect on the planetary orbits, including the somewhat delicate Horseshoe orbits.
The second star is a very close binary, and the planets orbit around the common barycentre of the two central stars.
In this case, you have reasonably normal day/night cycles, with two suns visually close to each other. Colorful double shadows may prevail.
The two stars orbit each other with a period much shorter than planetary orbital times, thus again they have negligible effect on the planets, even such delicate things as horseshoe orbits.
The stars orbit each other with a period somewhat similar to planetary orbits.
This will disrupt delicate orbits like the horseshoe orbits.
It will also disrupt robust orbits, like the moon circling Earth.
Actually, it is better named not a "Solar System" but an "interstellar bowling alley": planets with orbital periods of the same magnitude as the stars' mutual orbit will, in the short term of astronomical time, be flung out of the system entirely. Three-body gravitational systems are not stable if two or more of the bodies have comparable masses. Only when you have a tiny body orbiting a medium one, which in turn orbits a huge one, is the setup reasonably stable.
So really:
If your system is stable enough for a normal planetary orbit to exist, and is not a special case with the planet stuck in a Lagrange point or in resonance with the stars, then it is stable enough for an additional horseshoe orbit intersecting that planet's path.
Dr. Melissa Haendel
Keynote Title: “Not everyone can become a great artist; but a great artist can come from anywhere”: Envisioning a world where everyone helps solve disease
Elucidating disease and dysfunction requires understanding how genotypic variation relates to phenotypic outcomes. Researchers produce data that are collated to generate hypotheses and novel discoveries, which feed into the clinic, further driving basic research. It is a beautiful cycle, but not all data is created equal along the way and this process can take years and years. Data integration is a key challenge as phenotype data is largely unstructured and is encoded in a variety of formats. Since we only know the functional consequences of mutation for less than 40% of the human coding genome, maximizing use of these data is critical. Semantically structured phenotype data can be used algorithmically to shed new light on how biological systems function across time and scale. The use of cross-species anatomy and phenotype ontologies can be combined with genomic analysis to support disease diagnosis. Who is in charge of assisting with the creation of such an infrastructure? The answer is all of us – the clinician, the basic researcher, the author, the biocurator, the informaticist, the publisher, the patient, and more. Here, we explore the bucket brigade that it takes to support a semantic disease discovery framework.
Bio: Dr. Haendel is an Associate Professor in the Library and the Department of Medical Informatics & Clinical Epidemiology at the Oregon Health and Science University (OHSU), where she directs the Ontology Development Group. She is the principal investigator of the Monarch Initiative and is an active researcher in ontologies and data standards. Melissa is known for her work on biomedical resource discovery, such as in the eagle-i discovery system and the Resource Identification Initiative, and for her work on anatomy, cell, and phenotype ontologies such as Uberon and the Human Phenotype Ontology. She holds a Ph.D. Neuroscience from the University of Wisconsin and completed postdoctoral training at the University of Oregon and Oregon State University.
Dr. Bijan Parsia
Keynote Title: Representing All Clinical Knowledge
Clinical knowledge is voluminous, heterogeneous, intertwined, rapidly evolving, and, if we can get it to the right place in the right way at the right time, saves lives. However, in spite of noble and useful efforts at standardisation (think SNOMED-CT), our formal (in the sense of machine-processable) representations of medical knowledge remain, if we are charitable, extremely rudimentary. Most useful representations are bespoke, ad hoc, of questionable correctness, and of dubious utility (and these are the useful ones!). Even among experts in narrow areas, synthesising current research into practice, or into mere practice-oriented recommendations, is challenging and extremely labour intensive. Disseminating that synthesis widely is backbreaking and surprisingly ineffective. Building medical information systems that reflect, embed, and mobilise our best medical knowledge remains largely a fantasy. Knowledge representation is supposed to help! We have no shortage of possibly helpful technology, from RDF and Linked Data, to OWL ontologies, to executable guidelines, to Bayesian networks, to standardised messaging frameworks and beyond. And yet, none of these technologies (or their sum) seem poised to crack the nut. In this talk, I discuss the very idea of representing all clinical knowledge, what it would mean, what the challenges are of doing it, and what benefits, if any, we might hope to achieve.
Bio: Dr. Bijan Parsia is a Reader in the School of Computer Science at the University of Manchester. His research focuses on the design and use of logic based knowledge representation, especially those involving description logic based ontologies. He was involved in the OWL 2 Working Group and co-edited several of the specifications. He led the design and development of the popular OWL reasoner Pellet and the SWOOP editor. He has made notable contributions to reasoner optimisation, extensions to OWL, explanation services, modularity, and empirical ontology engineering. He cofounded the OWL: Experiences and Directions workshop series which provided the impetus and design for OWL 2. He also cofounded the OWL Reasoner Evaluation competition. Currently, he is collaborating with Cerner HS on next generation EMR systems and with Elsevier on question generation for postgraduate medical education.
A little over a decade ago, Oracle followed what seemed like every other tech company and filed suit against Google. While it was the popular thing to do at the time, Oracle's suit was different from everyone else's. Oracle claimed that, in Google's implementation of Oracle's Java programming language for Android, they duplicated protected API interfaces. Google argued that the API surfaces, or the outer structure that defines the methods and arguments for those methods, were publicly available and, therefore, not protectable.
This is an unbelievably complicated topic that requires further explanation. In particular, let's discuss some of the terms which have been at the center of this court battle. The first and most important term is API. An API, or Application Programming Interface, is a collection of code with publicly exposed methods for a system's operation. Generally, when we discuss APIs, we talk about web APIs also known as web services. A lot of web services are intended for public consumption, and so their structure is publicly known.
In the Oracle and Google case, the API is very low level, including basic math functions. The idea of creating a method to find the maximum of two values is far from unique. However, creating a class called Math in a namespace named java.lang, with a method named max that takes two arguments, both doubles, is unique. The structure was designed by the Java language team, which Google duplicated in order for Java to be used to develop for Android. That's because Oracle does not make the implementation of Java available to the world. But, because of the nature of APIs, the surface is publicly documented.
The programming surface is the outermost aspect of the API. It is the part that is exposed to programmers. It includes the namespaces, the classes, the methods, and the arguments. Even with fully compiled and signed code, the programming surface is exposed. There is no way for an API to be useful if you cannot discover, one way or another, the structure of the surface. Everything under the surface, however, is protected. That means that the implementation is private, and the code behind the implementation is copyrightable. The case, however, is all about whether or not the surface itself is also copyrightable.
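The surface/implementation split can be made concrete with a hypothetical miniature of such an API (`TinyMath` is invented for illustration; only the real `java.lang.Math` was at issue in the case):

```java
// Hypothetical miniature of an API: the *surface* is everything a
// caller can see, the *implementation* is everything they cannot.
final class TinyMath {
    // Surface: the declaration "static double max(double a, double b)"
    // is discoverable even from compiled, signed code.
    static double max(double a, double b) {
        // Implementation: the body is private detail. An independent
        // implementation could compute the same result differently --
        // e.g. (a + b + Math.abs(a - b)) / 2 -- while presenting an
        // identical surface to callers.
        return (a >= b) ? a : b;
    }
}
```

Google's Android libraries, on this framing, reproduced declarations like the one above while writing their own bodies; the question before the Court is whether that reproduced surface is itself copyrightable.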
Over the past ten years, this case has seemingly landed in every court possible. This week finalized that legal tour, with Google and Oracle arguing their cases in front of the final word in US law - the United States Supreme Court. The court, which currently has 8 sitting Justices, heard the case presented by both sides and will rule before the end of their current term. Under normal circumstances, there are two likely outcomes, but under these current circumstances, there are three.
The first likely outcome is that Oracle wins outright, setting a national precedent in favor of code protection. This would mean that developers can protect their general program code, as well as their interfaces. The second likely outcome is that Google wins outright, setting a national precedent in favor of code ubiquity. This could mean that not only would programming interfaces lose protection, but code as a whole could as well. The third possibility, which is created by the even number of Justices, is a tie. In this fairly unusual scenario, the ruling would default to the previous court's ruling, but would not set a national precedent, meaning that different jurisdictions could relitigate the case. The previous ruling was in favor of Oracle.
In addition to the national precedent, which is important to the developers of the country, Oracle is looking for licensing fees owed to its subsidiary Sun, which is responsible for the Java ecosystem. Sun licenses the Java language and ecosystem as its primary business model. A decade's worth of overdue licensing fees for billions of instances of Android across the world could cost Google billions or tens of billions of dollars.
With a case this important, being argued in front of the most important court in the country, with potential repercussions beyond what we currently understand, you would expect that both sides would bring their A-game. However, Google appears to have gone a different direction. In fact, legal scholar James Grimmelmann of Cornell University said of Google's attorney Thomas Goldstein,
He did an abysmal job. At the level of nuance he was willing to get into, his case was a loser. The only way to make it stick is to be nuanced about what it means to declare code.
Nuance was the key to this case. How do you argue that some parts of a software system are protectable but other parts are not? You have to describe in nearly excruciating detail just how different these two aspects of software are. But Goldstein's primary argument was that an API method like Math.max was a "method of operation" because programmers "operate" the Java language through invocations of these interfaces. This seems like an important distinction because Section 102(b) of the Copyright Act, the centerpiece of the case, states that an "idea, procedure, process, system, method of operation, concept, principle, or discovery" cannot be copyrighted.
Google's argument falls apart immediately, however, because any function, including private functions, can be described the same way. That would mean that no software could ever be copyrighted, meaning that software clones (which is where someone steals the code of an app and publishes it under their own name) would be entirely legal. It also means that the entire law, which is intended to allow developers to protect their work, has zero value. Justice Samuel Alito summed this up well, saying,
I'm concerned that under your argument, all computer code is at risk of losing protection under 102(b). How do you square your position with Congress' express intent to provide protection for computer codes?
Goldstein was not able to articulate a position in which the two types of code were different, essentially undermining his own case. Justice Brett Kavanaugh, the last to ask questions because of his position on the court, brought up that Oracle's lawyer said,
...declaring code is a method of operation only in the same sense that computer programs as a whole are methods of operation, and that therefore your method of operation argument would swallow the protection for computer programs.
He followed up by asking Goldstein,
You're not allowed to copy a song just because it's the only way to copy a song. Why is that principle not at play here?
The Court has not yet ruled but will need to by the end of their current session. There is no clear indication which way the court is leaning, though at least some of the Justices seemed unconvinced by Google's arguments.
|
OPCFW_CODE
|
1. A normal (non root) user has which type of access to the /etc/passwd file?
- Read only
- Execute, read, and write
- Read and write
- Write only
2. The /etc/passwd file can be modified by which of the following?
- Remote user
- Group account
- Any user
- Root user
3. A normal user, joe has the following record in the /etc/passwd file. What does the x indicate?
- Password for user joe is locked
- Password for user joe is x
- joe has not yet set his password
- The encrypted password has been stored in the /etc/shadow file
4. Refer to following record in the passwd file:
UID and GID for test_user are, respectively:
- 520 and 521
- 21 and 20
- 521 and 520
- 521 and 1041
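Questions 3 to 5 all hinge on the passwd field layout, name:password:UID:GID:GECOS:home:shell. A quick sketch using cut on a made-up record (the actual record for question 4 is not shown above, so test_user's values here are purely illustrative):

```shell
# Hypothetical /etc/passwd record; the seven colon-separated fields are
# name:password:UID:GID:GECOS:home:shell
record='test_user:x:521:520:Test User:/home/test_user:/bin/bash'

uid=$(echo "$record" | cut -d: -f3)    # 3rd field: UID
gid=$(echo "$record" | cut -d: -f4)    # 4th field: GID
shell=$(echo "$record" | cut -d: -f7)  # 7th field: login shell

echo "UID=$uid GID=$gid shell=$shell"
```

The x in field 2 is the point of question 3: it is only a placeholder indicating that the encrypted password lives in /etc/shadow.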
5. Which of the following information about users is not present in a passwd file record?
- The user’s shell
- The user’s account name
- The user’s home directory
- The user’s password
6. Regular users can modify their GECOS information using which of the following commands?
7. Which of the following commands CANNOT be used to update user information in the /etc/passwd file?
8. The root user can convert the password of any user into readable text form. True or False?
9. Which of the following information is not stored in the /etc/shadow file?
- Default home directory
- Encrypted password
- Days since last password change
- Maximum number of days remaining for the password to expire
10. A regular user wants a password that never expires. The Maximum field value in the /etc/shadow file should be set to:
- No value
11. The system administrator wants to create a temporary user account to be used for only five days. Immediately after five days, the account should become unavailable for login. The following values should be set for the temporary user in the shadow file:
- Expire date=today+5
12. In the /etc/shadow file, a user record has !! as the first two characters in the encrypted password field. What does this signify?
- The password has !! as the first two characters
- The password is about to expire
- The account is locked
- No significance, it is encrypted text
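Question 12's !! marker can be checked mechanically: locking an account (for example with passwd -l or usermod -L) prepends ! characters to the second field of the /etc/shadow record, making the stored hash invalid. A sketch with a made-up record:

```shell
# Hypothetical /etc/shadow record; '!' or '!!' at the start of field 2 means locked
record='joe:!!:16157:0:99999:7:::'

pwfield=$(echo "$record" | cut -d: -f2)
case "$pwfield" in
  '!'*) echo "account locked" ;;
  '')   echo "empty password" ;;
  *)    echo "password set" ;;
esac
```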
13. The system administrator wants to make the password empty for user Don. He should execute the passwd command with which of the following options?
- -a Don
- -d Don
- -l Don
- -u Don
14. A Linux administrator modifies the values of UID_MAX and UID_MIN parameters in the /etc/login.defs file. This in effect will allow her to:
- Assign UID and GID within the specified range for all users
- Assign GID within the specified range for new users
- Change password expiration policy for existing users
- Assign UID within the specified range for new users
15. The research team for ABC Corp is being increased from one to five members. What is the best way to add new team members to the server?
- Create a group named research, with each member given an account belonging to the group.
- Create a new login account named research that all new members share
- Create individual users and assign them root privileges
- Add new users with blank passwords
16. A user is a member of three different secondary groups. Which file will contain information regarding his membership in these secondary groups?
17. The groups command displays your current primary group as the first group, while the getent command will always display the default primary group. True or False?
18. An account on a Linux system has a UID of 50. Which type of account is this?
- Temporary account
- System account
- Regular user account
- Group account
19. Which option, when used with the useradd command, will display the default options that are used when creating a new user?
20. A new user was created using the command useradd steve. The command grep steve /etc/shadow will show the password field for this account as:
21. Running the command useradd ben on a system with User Private Groups will create:
- A user ben and a group ben both are created
- Only a new user ben is created
- A user ben and login directory is created
- Only a group ben is created
22. Changes in the /etc/skel directory will apply to:
- All existing users
- None – the /etc/skel directory has no effect on user accounts
- New users having their home directory in /etc/
- New users created using useradd command
23. The command passwd -S test_user produces the following output:
test_user NP 03/28/2014 0 99999 7 -1
What information does this convey regarding the password status of the test_user account?
- There is no password
- Locked password
- New password stored in /etc/shadow
- Set but non-printable password
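The second field of passwd -S output is a status flag, which is all question 23 asks about. Parsing the sample line from the question (flag meanings per the passwd man page: PS = password set, NP = no password, LK = locked):

```shell
# Sample `passwd -S` output from question 23
status_line='test_user NP 03/28/2014 0 99999 7 -1'

flag=$(echo "$status_line" | awk '{print $2}')  # 2nd field: status flag
case "$flag" in
  PS) echo "usable password set" ;;
  NP) echo "no password" ;;
  LK) echo "password locked" ;;
esac
```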
24. Using the command passwd -e test_user, the root user can force a password change at the next login attempt. True or False?
25. The command chage CANNOT be used to:
- Delete a user
- Update information related to password expiry
- View password aging policies
- Enforce a password changing and expiry policy for specific user accounts
26. Which of the following commands will set the grace login period of 10 days after the password has expired?
- chage -m 5 temp_user
- chage -I 10 temp_user
- chage -E "2014-06-01" temp_user
- chage -W 10 temp_user
27. User lori reported that her account is locked. Which of the following commands will unlock her account?
- usermod -L lori
- usermod lori -e 2014-04-30
- usermod -U lori
- usermod lori -g sales
28. The system administrator wants to delete the user account joe. However, joe is already logged in to the system. Which of the following commands will allow the administrator to delete joe?
- userdel -r joe_user
- usermod -d -f joe_user
- userdel joe_user
- userdel -f joe_user
29. Which of the following commands will allow the root user to create a new group for the sales department and assign this new group the GID of 1250?
- Specific GID cannot be assigned
- groupadd sales_group 1250
- groupadd sales_group -g 1250
- groupadd sales_group -n 1250
30. The system administrator notices that a file shows a numeric group id 1508. What does this signify?
- Group name corresponding to GID 1508 is deleted
- The file belongs to a system group id
- Group name is 1508
- The system has been hacked
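Question 30's orphaned-GID situation can be verified directly: a file showing a numeric group id with no matching entry in /etc/group means the group record was deleted. A small sketch (GID 0 should resolve on any Linux system; 1508 is the question's presumably deleted GID):

```shell
# Print the group name for a numeric GID from /etc/group; exit 1 if absent
gid_to_name() {
  awk -F: -v gid="$1" '$3 == gid { print $1; found = 1 } END { exit !found }' /etc/group
}

gid_to_name 0 || echo "GID 0 not found"
gid_to_name 1508 || echo "no group with GID 1508 (record deleted or never created)"
```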
|
OPCFW_CODE
|
feat(chip): filter chip trailing icon can now be customized
I made a preliminary commit just to show the course it's taking.
A slot was inserted directly into the renderRemoveButton method so everything works as intended with the keydown events and focus logic.
Another advantage is that md-input-chip will also inherit this change so users can use a different remove trailing icon if they want.
I think it's going well,
I added a getter to determine if the user inserted a customized trailing icon, and I use this flag to block the default remove event. Rather than blocking, we should probably dispatch another custom event instead, trailing-icon-click for instance.
I will wait for your feedback, now.
Also let's check in on the motivation for this change. Do we want to replace the remove button's icon with something else, like a trash can? Or do we want to replace remove functionality with something new, such as a dropdown icon?
@asyncLiz It seems that the more general the better in this case, I see some cases where one developer would want to do something more than just removing or showing a dropdown icon (for instance showing an "info" icon that opens a small dialog to give more info about a special type of filter)
I agree! Let's tackle that separately then, and scope this to just customizing the remove button's icon :)
I'll update the pr title to reflect
@asyncLiz Ok so if I understand you want to give the ability to change remove icon and later add another slot so users can insert anything at the end?
That means this code
<md-filter-chip
removable
>
<md-icon-button slot="trailing-icon">...</md-icon-button>
</md-filter-chip>
will end up showing two different trailing elements, is that ok?
will end up showing two different trailing elements
I'm thinking that removable will not have an effect if a user provides a new trailing action.
<!-- Only the new icon button shows, not a second one. `removable` has no effect -->
<md-filter-chip removable>
<md-icon-button slot="trailing-icon">...</md-icon-button>
</md-filter-chip>
<!-- Remove button shows with different icon -->
<md-filter-chip removable>
<md-icon slot="remove-trailing-icon">trash</md-icon>
</md-filter-chip>
<!-- Only the new icon button shows, custom remove trailing icon does not. -->
<md-filter-chip removable>
<md-icon slot="remove-trailing-icon">trash</md-icon>
<md-icon-button slot="trailing-icon">...</md-icon-button>
</md-filter-chip>
Just to make sure we are on the same page, the modifications I provide allow this writing:
<!-- show "remove" trailing icon button by default -->
<md-filter-chip
has-trailing
@remove=${/* executed when remove icon button clicked */}
></md-filter-chip>
<!-- user can change the default -->
<md-filter-chip
has-trailing
@remove=${/* won't get executed */}
>
<md-icon slot="trailing-icon">arrow_drop_down</md-icon>
</md-filter-chip>
That is correct, the remove event will not fire with an entirely new custom trailing-icon slot. It would fire with a remove-trailing-icon slot.
@asyncLiz Last attempt. I don't want to be a bother; I just studied the code thoroughly and tried different solutions, and I want to make sure you fully understand this approach.
I renamed removable to has-trailing. has-trailing basically serves as a switch to show the trailing icon, which defaults to the "remove" icon. The user can then use trailing-icon to change the appearance of the icon.
get hasSlottedTrailingIcon() is used to determine whether slot content was provided, but only during the click event, which won't compromise SSR. But ideally, why not just dispatch a global event instead? Then we could end up with a more intuitive syntax, e.g.
<md-filter-chip
has-trailing
@trailing-click=${this.#openChipMenu}
>
<md-icon slot="trailing-icon">arrow_drop_down</md-icon>
<!-- default slot -->
<md-menu>...</md-menu>
</md-filter-chip>
Another advantage of this syntax is that all the focus-ring navigation logic you wrote still acts as intended and does not need additional code.
We use has-* attributes specifically for SSR workarounds, so it'd be nice to keep that consistent rather than adding functionality with one of them. In general, most users shouldn't need to add has-* attributes unless they start seeing a FOUC. One day we'll remove them entirely once CSS adds :has-slotted().
I'd prefer to keep removable so it's more explicit to users what feature they're adding. It's also a breaking change if we remove it, and we want to stay on and support 1.x for a while.
I think an event like @trailing-click and attribute like trailing-icon-action or a slot="trailing-icon-button" is likely the right approach, but I'd like some more planning before we add it. For now, let's scope this PR to just customizing the existing remove button's icon with a slot="remove-trailing-icon" so we can get it in faster.
@asyncLiz Ok it's clear now, thanks for the heads-up. I'll complete this PR and probably leave the trailing-icon slot part to you because I'm not comfortable with the focus and aria logic yet.
Something worth bringing to your attention: when using a custom remove icon with the close icon value to emulate the default appearance, the results look different:
(top: default, bottom: using close icon)
Can you file a separate issue for that?
|
GITHUB_ARCHIVE
|
CPU resets execution of custom C kernel when accessing global variable
Hope you are doing well during confinement.
As a brief introduction: I write this question not as Tretorn, but as a member of ASOC, an association of computer science students from Spain. We are trying to learn how kernels work by developing one all by ourselves. We have all the code (related to this question) published at https://github.com/TretornESP/ASOC2
As we are around 30 students, we have split the job into: Filesystem, Memory, User-space software (merely a shell), Process management and Bootloader.
Now, one month into development, we have run into some real trouble with this last part, so here I am asking you for some help on the subject.
The steps we follow since the moment BIOS sets us up on 0x7c00 are:
Set CS by performing a far jump (Thanks Mr. Petch!)
Save main disk
Set 16 bits stack
Load 8 sectors from disk (2nd stage)
Load 32-Bits GDT
Load IDT
Enable protected mode (CR0)
Perform a far jump to flush pipeline and set CS
Set all segments to the same (Flat segmentation)
Set new stack to 0x90000
Detect LM presence via CPUID
Disable old paging (We didn't set up paging in PM)
Set up page tables
Activate PAE
Activate LM
Enable paging
Load GDT (Same as in 32 bits, maybe in compat mode?)
Far jump for same reasons as above
Update segments
Reset stack
Call the C kmain via `extern kmain` / `call kmain`
Our linker script is:
OUTPUT_FORMAT("binary");
ENTRY(boot);
SECTIONS
{
. = 0x7C00;
.text : {
*(.boot);
*(.text);
}
.rodata : SUBALIGN(4) {
*(.rodata);
}
.data : SUBALIGN(4) {
*(.data);
}
.bss : SUBALIGN(4) {
*(.bss)
}
/DISCARD/ : {
*(.eh_frame);
*(.comment);
*(.note.gnu.build-id);
}
}
we compile with:
nasm -f elf64 boot.asm -o boot.o
gcc -ffreestanding -nostdlib -mno-red-zone -fno-exceptions -m64 kernel.c boot.o -o kernel.bin -T link.ld
and test with:
qemu-system-x86_64 -fda kernel.bin
Our kmain kinda writes to VGA if we set the function like this:
void kmain() {
char * vga_ptr = (char*)0xb8000;
*(vga_ptr++) = 0x0f;
*(vga_ptr++) = 'H';
while (1){}
}
But if instead I declare vga_ptr outside of kmain, the CPU resets.
The assumption I'm making here is that the CPU is triple faulting, given that the IDT is incomplete (any help with this would be appreciated too). The question is more about what triggers the exception that causes the double fault, triple fault, reset sled in the first place. I'm just guessing, but maybe it has to do with incorrect paging or with the build process (our friend link.ld gave us a lot of trouble in the past).
Any light shed on this problem, or any mess you can point out in our code, would be greatly appreciated. And I'm sorry if the question was too long or inexact. Remember that we are students indeed (;
Have a great day and stay safe!
P.S. The full code of the bootloader is linked above on GitHub, in the BootTest branch. I tried to keep the question as short as possible, but on request I can edit the text to include any code fragment you need.
[EDIT] Another failure, maybe unrelated: when I disassemble the .bin file, it shows everything under the .data section.
Try to make a [mcve]; there are a lot of files in that repo (some ambiguous: boot.asm vs boot2.asm, and not reflecting the code you posted), and a lot of spaghetti coding. Your CPU is clearly triple faulting; bochs is better than qemu at debugging these issues because it can output a very verbose log that includes what led to the TF. If you blindly copied the code from OSDev, it probably won't work. I'd double-check the PML4 setup; particularly, what mapping are you using? And why are you reloading the same exact GDT as used in PM (i.e. with no L bit set) and then executing 64-bit code?
It is also probably better if you break down your goal: 1) enter PM from boot and check that everything is OK (32-bit code executes, segments are set). 2) enable PAE and check that each page is correctly accessible (access an expected address for each page). 3) enable LM and check that 64-bit assembly code runs correctly (including accessing the pages from above). 4) compile the kernel, check that the linker script is working and the accessed addresses are good. Check the kernel is loaded at its right address and execute it.
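To follow the bochs suggestion, a minimal bochsrc along these lines (file names and memory size are placeholders) will capture the exception chain leading to the triple fault in a log instead of silently resetting:

```
# boot the flat binary from a floppy image, as with the qemu -fda test
megs: 32
floppya: 1_44=kernel.bin, status=inserted
boot: floppy
# send everything, including CPU exception traces, to a file
log: bochsout.txt
# stop and ask on panic instead of resetting, so the state can be inspected
panic: action=ask
error: action=report
info: action=report
```

Run it with `bochs -f bochsrc` and look at the tail of bochsout.txt; the last exceptions before the reset usually reveal the faulting access (for a global variable, typically a page fault at the variable's linked address).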
Thank you so much @MargaretBloom. Give me some time to sort out the mess with the branch (you are absolutely right about it) and to follow the advice in your second comment; I'll edit the question. Have a nice day.
|
STACK_EXCHANGE
|
CruiseControl.NET: Getting "Unused node detected" for username
I'm having a bit of a conundrum here.
I'm working with CruiseControl.NET Server 1.6.7981.1.
We've moved offices, and at the same time our SVN repository moved from a generic stack to our client's system that uses LDAP.
Since the change I've been unable to get CruiseControl working with SVN.
Here's what I have in my config:
<project ...
<webURL>...
<workingDirectory>...
...
<sourcecontrol type="svn-change-detection">
<svn-exe>C:\Program Files\Subversion\bin\svn.exe</svn-exe>
<code-root-url>http://blah.blah.blah/svn/project/branches/product-lines/4.3.0</code-root-url>
<tag-root-url>http://blah.blah.blah/svn/project/tag/4.3.0</tag-root-url>
<build-interval>86400</build-interval>
<username>LDAP_DOMAIN\USER</username>
<password>AwesomePassword</password>
</sourcecontrol>
The error I'm getting is:
[CCNet Server:ERROR] Exception: Unused node detected: <username>LDAP_DOMAIN\USER</username>
From the examples I've seen, that should be perfectly legal... or am I missing something?
Any help would rock.
Your configuration is all wrong. You should review the documentation for the SVN block. Try this.
<sourcecontrol type="svn">
<executable>C:\Program Files\Subversion\bin\svn.exe</executable>
<trunkUrl>http://blah.blah.blah/svn/project/branches/product-lines/4.3.0</trunkUrl>
<username>LDAP_DOMAIN\USER</username>
<password>AwesomePassword</password>
</sourcecontrol>
build-interval: that's part of your project config, not source control.
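Since build-interval has no home in the svn block, a likely place for that 86400-second cadence is an intervalTrigger in the project's triggers section; a sketch (attribute values assumed from the original config):

```xml
<triggers>
  <!-- poll roughly once a day, building only when SVN has new modifications -->
  <intervalTrigger seconds="86400" buildCondition="IfModificationExists" />
</triggers>
```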
These were the config settings used in CruiseControl.NET 1.4, and they worked perfectly fine with the version of SVN we were using. Now that we've upgraded SVN, moved to new servers, and upgraded CruiseControl, you're right, I might have to look into it. I was hoping it would be backwards compatible.
Those tags were in place for a custom CruiseControlNET C# application that was called through NANT. It seems CCNET can do all this with default configurations without the need for those external tasks. Thanks for the help.
That makes sense, a lot of the CCNET plugins don't work well (or at all) when you upgrade. Glad it worked out for you.
|
STACK_EXCHANGE
|
New Directions IT Staffing
Are you a Data Scientist with a demonstrated background in actuarial modeling who is looking to make a significant career move? If so, we’d like to introduce you to a Fortune 1000 company that is top-ranked by Forbes Magazine and AM Best.
We have been engaged in a national search for an Actuarial Data Scientist who will work with data professionals and actuaries who are building and evaluating insurance pricing and other enterprise data models. In this role, the Actuarial Data Scientist will:
- Lead predictive modeling projects that involve new rating structures in a SAS, R and Python analytical programming environment.
- Apply predictive modeling techniques to pricing and rating structure data science initiatives that include logistic regression, time series, neural networks, random forest, boosting, text mining, clustering, deep learning, optimization, and others.
- Manipulate large amounts of structured and unstructured data using SQL.
- Communicate findings and trends to key stakeholders across the company.
The appropriate individual will have demonstrated experience in the following:
- Three to five years in an actuarial data science capacity that includes a combination of pricing and rating structures that provides actionable insight to enhance business outcomes.
- Data Science programming and modeling in a SAS, R, Python or other similar environments.
- Predictive modeling that includes pricing and rating structures.
- Predictive modeling techniques that include a combination of logistic regression, time series, neural networks, random forest, boosting, text mining, clustering, deep learning, optimization, etc.
- Manipulation of structured and unstructured data in a SQL environment.
- Full data science lifecycle from problem definition to data exploration, data wrangling, modeling, analysis, and deployment into production.
- Strong communications skill sets that include storytelling, presentations and methods to provide insights to non-technical stakeholders
- Bachelor’s or Master’s degree in Data Science, computer science, operations research, statistics, applied mathematics, or a related quantitative field. PhD or insurance designations such as ACAS desirable.
This is a 100% remote career opportunity.
Are you Ready to move your Career in a New Direction?
Please forward a copy of your resume for us to schedule a time to speak or contact us at (866) 999-8600. We look forward to meeting you.
About Us – New Directions, It’s Right in Our Name
We are an Information Technology & Digital Talent Solutions firm that furnishes its clients with a range of recruiting and staffing services while providing career coaching and job search guidance to the candidates and consultants we work with.
Interested in hearing how we’ve made the hiring and job search process simple?
Contact us at: https://www.newdirectionsstaffing.com/contact-us/.
|
OPCFW_CODE
|
Novel–The Cursed Prince–The Cursed Prince
Chapter 371 – Gewen Is Conflicted
Ahh.. why does Harlow take after him in so many ways? Not only did they look alike, they also shared the same unfortunate fate.
“Oh…” Louis let go of his mother’s hand and went to Harlow’s side. He looked at the sleeping child dotingly and sighed. Then he looked up at his father and asked, “So, Harlow’s mommy is still not coming back?”
He was certain Ellena was innocent and she would prove herself prior to when the crown prince. So, Gewen was not worried.
Mars clenched his fists as he listened to Lily’s last words. He understood what must have been going through Emmelyn’s mind.
Gewen was positive that Ellena wouldn’t have the heart to kill Queen Elara.
Athos pulled Lily into his arms and hugged her dotingly. He patted Lily’s back and tried to console her. “It’s okay.. Harlow is safe now. Mars is here. He will protect Harlow.”
Nonetheless, he tried to keep quiet and didn’t say anything. As Mars said, they needed to hear all sides of the story.
She continued, “I am not sure whether they did meet. But when she gave birth to Harlow, Emmelyn told me she feared for her life. She knew the only reason she was spared from execution was that she was still pregnant with Harlow. After Harlow was born, she believed the Prestons would try to kill her.”
That said, he knew Ellena loved Mars deeply and, knowing Ellena, he didn’t think it was beneath her to try to use her adopted father’s influence to finish Emmelyn off soon after Harlow was born.
Mars realized that Harlow might be his only child and she would grow up lonely, just like him.
“Yes.. I know, His Highness is here… but… but Emmelyn…” Lily buried her face in her husband’s chest and continued sobbing. Her oldest son looked worried and came to hold her hand.
Gewen tried to convince himself.
Whether Emmelyn had killed his mother or not, she would have died anyway, because Mars was not here to protect her.
“She said she had a plan and it would involve Ellena,” said Lily. “I sent the message discreetly as Emmelyn requested. I told my servant to pay a child in the market to deliver the message to Lady Ellena at her parents’ house.”
Lily wiped her tears and shook her head, faking a smile. “No, I am okay, sweetie. I am just sad about what happened to Harlow’s mother.”
The prince touched Louis’s shoulder and gave the boy a smile. “Thank you for caring about Harlow, Louis. You are a good boy.”
It’s just that Emmelyn died before that happened. So, they couldn’t kill her.
Judging from the king’s emotional state and how much he disliked Emmelyn, it wouldn’t be surprising if King Jared ordered her execution the moment Harlow was born.
Gosh.. he really wanted to stab himself with his sword for leaving her. No words could describe the amount of remorse he was feeling now.
If his wife were still alive, Mars could count on having more children with her to keep Harlow company. But now, it was just a distant dream.
But… if Ellena did frame Emmelyn to get Mars for herself, he wouldn’t be able to look at his friend the same way again.
“Emmelyn said… if something happened to her, I must take Harlow in and protect her until you return,” said Lily. She then began sobbing uncontrollably. “After she spoke to me… she went to sleep. I went home to change and see my children. By the time I returned to the Grey Tower, she was already dead.”
|
OPCFW_CODE
|
It was quite an intense Google journey to find out all the details on how to emulate Junos. Since I intend to learn Juniper, I needed a platform to work on. After two days of research and work, I managed to get results.
There are various things required to make this work. I will list them here so they are easy to find: VMware Player, GNS3, a Cisco router IOS image, and a VMware Olive image (Google is your friend). Once you have all these, you are ready to start!
Running the VMware machine will be an easy task, but connecting the VMware Olive to Cisco in GNS3 is the part requiring some work. With my guide, though, it should be as easy as 1, 2, 3.
After installing VMware Player, check for adapter settings in windows.
By default VMware player will install to VMware virtual Ethernet adapters, i’m not sure what are their numbers. but for my case, they were vmnet1 and vmnet8. These are significant to know how to connect VMware machine to Cisco router in GNS3.
Open the .vmx file in notepad. Here we can edit the fields in order to make VMware Olive machine use the virtual Ethernet adapters in windows.
The Olive VMware has three network interfaces, two are bridged and first one is in “costum” we change the adapter to the one to fit the Ethernet adapter in our network devices (from the first figure). I already highlighted it in red. Ethernet 0 will be reflected as interface em0 in Junos. ethernet1, and ethernet2 will be bridge on the virtual Interface, so you can connect to other Olive Machines to ethernet1, ethernet2 (em1, em2). my assumption is, if you want to connect say Olive1 and Olive 3 using em2 then you change ethernet2 in vmx file of both olive 1 and 3 to a bridge mode with a common adapter.
That is the topology i created for the simulation. basic two Juniper routers connected to a Cisco router. and the two Juniper routers are connected as well (virtually in VMware). It was tested, and pings were working.
In GNS3, add the VMware as a Cloud, of course the cloud will be not associated with the VMware Olive till you select the adapter that you set up in the vmx file. In a screen shot, you will see that i have Chosen Vmnet1 for this particular Olive.
Last step, would be to do the appropriate configurations in the Olives, and Cisco Router, here is the screenshot of the sample configuration i used to ping.
Don’t forget to add the following before you can commit any configuration into Juniper Router
set system root-authentication plain-text-password
The Cisco configuration is as simple as:
ip address 192.168.1.2 255.255.255.252
ip address 192.168.1.10 255.255.255.252
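For completeness, here is roughly what the full configuration might look like. Note this is a hedged sketch: the interface names (FastEthernet0/0 and FastEthernet0/1 on the Cisco side, em1 on the Olive side) and the 192.168.1.1/30 peer address are my assumptions, since only the two ip address lines survive from the original screenshot; adjust them to your own topology.

```
! Cisco side (sketch; interface names assumed)
interface FastEthernet0/0
 ip address 192.168.1.2 255.255.255.252
 no shutdown
!
interface FastEthernet0/1
 ip address 192.168.1.10 255.255.255.252
 no shutdown

# Junos side on one Olive (sketch; interface and address assumed)
set system root-authentication plain-text-password
set interfaces em1 unit 0 family inet address 192.168.1.1/30
commit
```

With this in place, a ping from the Olive to 192.168.1.2 should succeed, provided the adapter mapping in the .vmx file is correct.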
|
OPCFW_CODE
|
/*
* Copyright (c) 2014-2015, Kathy Feller.
*/
#include "calendar.h"

/* private variables */

/* declare private functions */
TimeDay getNumDaysForCurrentYear(Time time);

/* private functions */

// get number of days in this year
TimeDay getNumDaysForCurrentYear(Time time) {
    return getNumDays(getStartOfDay(time) - getStartOfYear(time));
}

/* public functions */

// get calendar year
Year getYear(Time time) {
    if (time < EPOCH)
        return UNDEFINED;
    return (Year)(getNumYears(time) + 1);
}

// get calendar day
Day getDay(Time time) {
    if (time < EPOCH)
        return UNDEFINED;
    return (Day)(getNumDays(time) + 1);
}

// get calendar day of year
Day getDayOfYear(Time time) {
    if (time < EPOCH)
        return UNDEFINED;
    return (Day)(getNumDaysForCurrentYear(time) + 1);
}

// get (and initialize) new date
CalendarDate getDate(Time time) {
    CalendarDate date;
    if (time < EPOCH) {
        date.time = UNDEFINED;
        return date;
    }
    date.time = time;
    date.d = getDay(time);
    date.y = getYear(time);
    date.moy = 0;    // TBD
    date.dom = 0;    // TBD
    date.doy = getDayOfYear(time);
    date.numY = getNumYears(time);
    date.numM = 0;   // TBD
    date.numD = getNumDays(time);
    date.numMpy = 0; // TBD
    date.numDpy = getNumDaysForCurrentYear(time);
    date.numDpm = 0; // TBD
    return date;
}

// get (and initialize) new calendar year, given the time
CalendarYear getCalendarYear(Time time) {
    CalendarYear calYear;
    if (time < EPOCH) {
        calYear.year = UNDEFINED;
        return calYear;
    }
    calYear.year = getYear(time);
    calYear.spring = getSpringOfYear(time);
    calYear.first = getDay(getStartOfYear(time));
    calYear.last = getDay(getStartOfDay(getSpringOfYear(time) + getTropicalYear() - 1));
    calYear.length = (TimeDay)(calYear.last - calYear.first) + 1;
    return calYear;
}

// get (and initialize) new calendar year, given the calendar year
CalendarYear getCalendarYearForYear(Year year) {
    if (year < FIRST_VALID_YEAR) {
        CalendarYear calYear;
        calYear.year = UNDEFINED;
        return calYear;
    }
    // since each year begins at spring
    Time time = (year - 1) * getTropicalYear();
    return getCalendarYear(time);
}
|
STACK_EDU
|
import { fromEvent, merge } from 'rxjs';
import { tap } from 'rxjs/operators';
import { StorageUtil } from './lib/storage';
import { OptionStorageModel } from './models/options-storage.model';
export class CgPopup {
  upOpenLinkElement = document.querySelector<HTMLInputElement>('#upopenlink');
  forceOverIframe = document.querySelector<HTMLInputElement>('#forceoveriframe');

  async run() {
    await this.checkForStorageOptions();
    merge(fromEvent(this.upOpenLinkElement, 'click'), fromEvent(this.forceOverIframe, 'click'))
      .pipe(
        tap((tapitem) => {
          console.log(tapitem);
        })
      )
      .subscribe((ev) => {
        console.log('Combined checkbox event', {
          UpOpenLink: this.upOpenLinkElement.checked,
          ForceOverIFrame: this.forceOverIframe.checked,
        });
        chrome.storage.sync.set({
          UpOpenLink: this.upOpenLinkElement.checked,
          ForceOverIFrame: this.forceOverIframe.checked,
        } as OptionStorageModel);
      });
  }

  async checkForStorageOptions() {
    // Read the stored value into a local first: assigning undefined to
    // `.checked` coerces it to false, so the element itself cannot be used
    // to detect an unset option.
    const upOpenLink = await StorageUtil.getSyncValue<boolean>('UpOpenLink');
    this.upOpenLinkElement.checked = upOpenLink ?? true;
    if (upOpenLink === undefined) {
      // persist the default value
      chrome.storage.sync.set({ UpOpenLink: true });
    }

    const forceOverIframe = await StorageUtil.getSyncValue<boolean>('ForceOverIFrame');
    this.forceOverIframe.checked = forceOverIframe ?? true;
    if (forceOverIframe === undefined) {
      // persist the default value under its own key
      chrome.storage.sync.set({ ForceOverIFrame: true });
    }

    console.log('this is what i get UpOpenLink', this.upOpenLinkElement.checked);
    console.log('this is what i get ForceOverIFrame', this.forceOverIframe.checked);
  }
}

let options = new CgPopup();
options
  .run()
  .then(() => {
    console.log('Popup run completed');
  })
  .catch((err) => {
    console.log('Popup run with error:', err);
  });
|
STACK_EDU
|
The influence of the SDK for iOS on the SAP and iOS community
On March 16th, we celebrated our annual VNSG developers day, a day fully dedicated to developers and developer topics. We had magnificent content, both from the heart of SAP as well as from the community, mostly accompanied by nice demos showcasing SAP’s latest and greatest.
One of the nicer demos was presented by Twan, who took a photo of the audience and used Machine Learning mechanisms to recognise faces and perform an age, sentiment, and gender analysis on the recognised faces.
The results struck me in a way. Of course we already knew that the majority is male, sentiment on a day like this is positive, and the age analysis looked quite flattering as well. But when I did a personal analysis on the faces without machine learning, it struck me that the age of developers in the (Dutch) SAP community is steadily increasing.
The day before the VNSG Developer day, I attended a lunch organised by the Appsterdam meetup group. I was really curious about this meetup group, their topics and the people behind this community. They happened to be able to pass me a free ticket to the AppDevCon conference that would happen that same Friday, just after the VNSG Developer Day. The AppDevCon is a conference specifically for app developers on both the iOS as well as Android platform.
While I felt like one of the youngest when I was on the VNSG Developer day, I felt like one of the eldest when I was at AppDevCon.
When I was at the AppDevCon, I took the opportunity to talk to some of the other visitors to see if they had ever built applications on top of enterprise back-ends such as ERP or CRM systems and whether they had ever heard of SAP.
I absolutely hit a pain point there. Most of the response had to do with headaches around message brokers and integration platforms. OData and Gateway are technologies they had never ever heard of.
When I tried to explain to them what it was and how SAP currently does integration, the discussion quite quickly went into the direction on how these things could be done using the shiny popular frameworks such as React and Angular or their native counterparts such as React Native and Nativescript. When it came down to OData and integration protocols, I was quickly pointed in the direction of GraphQL or got the counter question “what’s wrong with regular REST?”
It became clear to me that the intention of the participants of the conference isn’t really the integration to the back-end. That part is definitely not sexy. What’s more important is what the app looks like and how to build it with as little effort as possible.
I tried to explain the Fiori design language to some of the attendees and also explained how it is integrated into the SAP Cloud Platform SDK for iOS, and that its objective is to increase developer productivity and contribute to a consistent user interface. Unfortunately this didn’t hit a sweet spot either.
Although the people I talked to could imagine that this would be beneficial if you were to launch an array of products in a large company, this was definitely not their focus area. Most of the time they were hired for an external-facing, mostly consumer-type app or a one-off app, and in those cases they just found the Fiori design language too rigid.
Especially from a design point of view, they wanted more freedom to adjust the app to the overall experience of the user in that particular app, without caring too much about the world outside of that app. It also seems that the design language they were talking about had more to do with animations than with consistency. They believe that with those little extras they would be better able to capture the user’s emotion than with a consistent user interface.
SAP Mobile Services
When I spoke about the mobile services on the SAP Cloud Platform and was explaining about the features of the cloud platform in combination with mobile services, at a certain point they understood what I was trying to say and they started comparing the SAP Cloud Platform and mobile services with Google’s Firebase, which admittedly has quite an overlap in functionality.
I was explaining that the emphasis would of course be on back-end connectivity and various business services such as analysis, integration with IoT, enterprise social and others. They mentioned that in most cases, they weren’t tasked to do these kinds of things; these things were taken care of by developers in the back-end teams. Although none of the folks I spoke to knew about the SAP Cloud Platform, they would consider looking at it when they were asked to take care of similar back-end tasks. There! Mission accomplished, finally something I could at least trigger their interest with, although not to the extent that I hoped for.
SAP Cloud Platform SDK for iOS
During various sessions we held at the VNSG special interest group meetings, but also at the developer day, we noticed a vast interest in the SAP Cloud Platform SDK for iOS. One of the reasons that I was at the AppDevCon was because I would be interested to know whether (non-SAP) iOS app developers would be interested in the SDK as well.
I’m afraid that the SAP Cloud Platform’s Mobile Services is not on their radar screen, and it is of course too early to ask about the mobile SDK (thanks for highlighting this, Pierre). However, there is good interest in Google’s Firebase, and I believe that the SAP Cloud Platform SDK for iOS could be the Firebase for enterprise mobile developers. There is quite an overlap in functionality, and consumer-type services such as ad-provisioning are replaced with analytics and integration services, making the SAP Cloud Platform a good contender to take the place of mobile platform of choice for mobile enterprise app developers.
It seems that Firebase is setting a de-facto standard in this area though, and it would be wise for product teams behind the SAP Cloud Platform SDK to keep a close eye on what is happening in this field. Especially the continuous client synchronisation is praised by many, leading to an always synchronised offline version of the database, without any additional developer effort.
Another part the SAP developers may have to take a peek at is the number of CocoaPods the SDK consists of. Currently the SAP Cloud Platform SDK for iOS has three SDK files, while the Firebase SDK can be included in an app at a much more granular level, which may lead to less bloated apps. Note: Swift apps are currently a bit on the heavy side anyway, and the SDK doesn’t make that much of a difference, but it is my expectation that Apple will be able to improve this as well. To make sure that the SDK doesn’t become the “weakest link”, it would be nice to be able to select the SDK parts that are going to be included in the app more granularly.
The Firebase SDK allows developers to configure the framework using plist files. The SAP Cloud Platform SDK for iOS only allows developers to configure the framework in code. It is my personal opinion that the configuration through plist is quite elegant and it would be nice to offer this as a configuration option as well.
SAP developers vs iOS developers
One of the questions I have been asking myself is who will pick up the SAP Cloud Platform SDK for iOS. Will it be the SAP developers learning Swift and Xcode, or will it be iOS developers trying to understand how SAP, the SAP Cloud Platform and the iOS SDK fit into their tool-chain?
On the other hand, it may also be just a luxury problem when there are so many modern development environments and languages to choose from. And the front-runners will always be looking beyond what they know, and will be happy to keep up and learn new things. I do expect the SDK for iOS to be picked up by the same people that were also the first to pick-up XSA and UI5. For these polyglots, another language will be just another challenge, if at all. And SAP has done a tremendous job to make this experience as painless as possible. Developers already familiar with the UI5 tooling will feel quite at home when they understand that the Assistant is very similar to the WebIDE templates, that the SAP Fiori for iOS Mentor app is quite similar to UI5 Explored, that the SDK is quite similar to the KAPSEL SDK, and that the SDK is very well documented, just like the UI5 SDK.
The question remains whether iOS developers will see the value of the SAP Cloud Platform in combination with the SDK for iOS to simplify the connection to the back-end and will they accept that the SDK will be working slightly different from what they are currently used to (e.g. Firebase)?
Ultimately, it would be great if iOS developers would see the SAP Cloud Platform and SDK for iOS as part of their tool chain. Perhaps the SDK could be for enterprise what Firebase currently is for consumer-type applications. The SDK may need to offer a slightly more similar feel to Firebase though, which means it would have to break with the patterns that it has in common with the KAPSEL SDK.
There’s a massive difference in the iOS and SAP community. Not only in the tool chains and frameworks of choice, but even in demographics. It is a question whether these communities will grow together organically, and the SAP community could certainly use some young blood.
To get iOS developers on board, it may be necessary to provide an enterprise toolchain that is similar to the tools they are already using. Equally important are the education and continuous developer relationship building within this community.
Looking at the interest in the SDK for iOS at past events, the SAP community is probably going to embrace the SDK for iOS without much effort. People that were early adopters of technology such as XSA and UI5 are likely to be the early adopters of the SDK for iOS again.
There is one thing that the iOS and SAP community have in common. They are both spoiled with choice in the area of modern development environments, frameworks and languages. With the SDK for iOS in place, there will even be more choice for enterprise app development. It will be very interesting to see how these communities adopt this new technology, evolve, and hopefully grow towards each other.
Let me know what you think!
|
OPCFW_CODE
|
//
// XCTestCase+Utilities.swift
//
import Foundation
import XCTest
public extension XCTestCase {
/// Performs `activity` with `continueAfterFailure = false`, then restores the
/// prior value of `continueAfterFailure`.
///
/// This exists to let individual tests--and individual blocks therein--express
/// their own preference vis-a-vis continue-after-failure; without this, either
/// (a) each test-case subclass sets a common policy for all tests, (b) each
/// method contains its own set-and-reset boilerplate, (c) you override setup/teardown
/// to reset the state to the common policy for that subclass, or (d) you risk
/// unexpected behavior due to individual tests manipulating this shared state.
///
/// In the long run a richer `XCTestCase` design that derived "continue after
/// failure" by examining a stack of, say, `XCTestExecutionPreference` might
/// be preferable, but for now this is a reasonable compromise.
///
/// - seealso: `continuingOnError(_:)`, which has the opposite semantic.
///
@nonobjc
@inlinable
func haltingOnFirstError(_ activity: () -> Void) {
self.with(
continueAfterFailure: false,
activity: activity
)
}
/// Performs `activity` with `continueAfterFailure = true`, then restores the
/// prior value of `continueAfterFailure`.
///
/// This exists to let individual tests--and individual blocks therein--express
/// their own preference vis-a-vis continue-after-failure; without this, either
/// (a) each test-case subclass sets a common policy for all tests, (b) each
/// method contains its own set-and-reset boilerplate, (c) you override setup/teardown
/// to reset the state to the common policy for that subclass, or (d) you risk
/// unexpected behavior due to individual tests manipulating this shared state.
///
/// In the long run a richer `XCTestCase` design that derived "continue after
/// failure" by examining a stack of, say, `XCTestExecutionPreference` might
/// be preferable, but for now this is a reasonable compromise.
///
/// - seealso: `haltingOnFirstError(_:)`, which has the opposite semantic.
///
@nonobjc
@inlinable
func continuingOnError(_ activity: () -> Void) {
self.with(
continueAfterFailure: true,
activity: activity
)
}
/// Helper method for `continuingOnError(_:)` and `haltingOnFirstError(_:)`.
@nonobjc
@inlinable
internal func with(continueAfterFailure: Bool, activity: () -> Void) {
let previous = self.continueAfterFailure
defer { self.continueAfterFailure = previous }
self.continueAfterFailure = continueAfterFailure
activity()
}
}
|
STACK_EDU
|
CREATE OR REPLACE PROCEDURE display_call_stack AS
BEGIN
  DBMS_OUTPUT.put_line('***** Call Stack Start *****');
  DBMS_OUTPUT.put_line(DBMS_UTILITY.format_call_stack);
  DBMS_OUTPUT.put_line('***** Call Stack End *****');
END;
/

Prior to Oracle Database 10g, one could obtain the line number on which an error was raised only by allowing the exception to go unhandled. This has been the cause of many a frustration for developers.

The function DBMS_UTILITY.FORMAT_ERROR_BACKTRACE, introduced in 10g, is a great improvement to PL/SQL and adds much-needed functionality. It returns a formatted string that displays a stack of programs and line numbers leading back to the line on which the error was originally raised, even when it is called from an exception handler in an outer scope. The "ORA-06512" error is not included in the string, but it is implied, because this is a backtrace message. Now that we have the line number, we can zoom right in on the problem code and fix it. This is quite useful when troubleshooting.

For example, with three nested procedures p1, p2 and p3:

SQL> CREATE OR REPLACE PROCEDURE p3
  2  IS
  3  BEGIN
  4    DBMS_OUTPUT.put_line ('in p3, calling p2');
  5    p2;
  6  END;
  7  /
Procedure created.

SQL> SET SERVEROUTPUT ON
SQL> BEGIN
  2    DBMS_OUTPUT.put_line ('calling p3');
  3    p3;
  4  END;
  5  /
calling p3
in p3, calling p2
in p2, calling p1
in p1, raising error
Error stack from p1:
ORA-06512: at "HR.P1", ...

Notice the unhandled VALUE_ERROR exception raised in p1. Reading the stack from top to bottom, note that the exact points at which the exceptions were encountered are preserved. With the exception of some minor formatting issues, this output is fine and will probably be OK for most situations. The output from DBMS_UTILITY.FORMAT_CALL_STACK, on the other hand, is rather ugly, and we have no control over it other than to parse it manually; there is also very little you can do with the backtrace itself, other than reordering it.

Exceptions are often handled by exception handlers and re-raised. By capturing the backtrace in the innermost handler, you have (and can log) that critical line number even if the exception is re-raised further up the stack. The developer of the application might even like to display that critical information to the users, so that they can immediately and accurately report the problem to the support staff. In a real-world application the error backtrace can be very long, so you may want to parse out just the line number. (As one moderator put it, though, the first question should be why you use a bare "WHEN OTHERS THEN DBMS_OUTPUT.put_line(SQLERRM)" at all.)

From Oracle Database 12c onward, the UTL_CALL_STACK package contains APIs to display the contents of the call stack, the error stack and the backtrace programmatically:

BACKTRACE_UNIT : subprogram name associated with the current call.
BACKTRACE_LINE : line number in the subprogram of the current call.
ERROR_DEPTH : the number of errors on the error stack.
CURRENT_EDITION : the edition of the subprogram associated with the current call.

SET SERVEROUTPUT ON
EXEC test_pkg.proc_1;
***** Error Stack Start *****
Depth  Lexical  Line  Name
-----  -------  ----  --------------------
    5        0     1  __anonymous_block
    4        1     5  TEST.TEST_PKG.PROC_1
    3        1    10  TEST.TEST_PKG.PROC_2
    2        1    15  TEST.TEST_PKG.PROC_3
***** Error Stack End *****

You now have programmatic control to interrogate and display the call stack if you need to.

Finally, the predefined inquiry directives $$PLSQL_LINE and $$PLSQL_UNIT are worth knowing about: $$PLSQL_LINE is a PLS_INTEGER literal whose value is the line number on which it appears, so it provides the number without the need for any extraction or string parsing. One caveat: the line numbers provided by the PL/SQL parser can occasionally be a little misleading when you try to look them up, so treat them as a starting point. I don't use these techniques everywhere, just in spots where it would be even more tedious to track down bugs without them.
|
OPCFW_CODE
|
Novel: The Legend of Futian
Chapter 2421: Apostle
Boom… The four great elders stepped forward at the same time. A terrifying dominion of the Starry Great Path appeared around them. Celestial stars surrounded them, blocked out the sky and the sun, and intercepted the Sword Will of Light from Blind Chen.
Then, Blind Chen stood up and said, “Chen Yi, get in.”
But at the same time, Blind Chen turned. His back was facing where Chen Yi said he was. A blazing will of light erupted from his body, blinding everyone who looked at it directly. The light bombarded the space and blocked Chen Yi off from him. Formless pulses erupted through the void as the will of light collided with the will of the sword from Patriarch Lin.
But within this light, they saw a pair of eyes that made their hearts pound. Those eyes contained endless light; they were Blind Chen’s eyes.
Everything before them proved that the legends were all true. The Place of Light was indeed where the Bright Temple was.
A Divine Sun Diagram appeared behind the Great Elder of the Yu Clan and launched toward Blind Chen, clashing into his Sword of Light. It took a coordinated attack from the four strongest cultivators at the same time to stop the might of Blind Chen’s Path.
Blind Chen walked forward, supporting himself with the cane in his hand. He came before the remains of the Bright Temple and knelt on the ground once again. He kowtowed to the temple with great piety, as if he were the most pious believer of the Bright Temple, which made everyone all the more suspicious of his true identity. Perhaps Blind Chen was connected to the Bright Temple.
Blind Chen’s tattered clothes fluttered in the air as he stood atop the rubble with an unyielding expression. The cane in his hand had morphed into a scepter of light, just like the scepters in the hands of the guardians who stood before the Bright Temple.
Hum! Just then, several extremely powerful auras erupted within the space; the cultivators from all four of the great forces intervened, and the four elders were the first to attack.
Hum! A furious roar surged through the void as a formless sword pierced through space, stabbing toward their eyes in a fraction of a second.
The cultivators began walking forward one after another. The eyes of the cultivators from all of the forces began to heat up as their gazes slowly filled with greed and desire. For years, they had stood guard over the Place of Light. Now, they finally saw the divine relic.
“Stop him,” Patriarch Lin said in an ice-cold voice. Instantly, cultivators from all four great forces attacked. They had already paid a heavy price to get here and had suffered great losses, including the deaths of several of their own clan members. Now that they had finally reached the divine palace, how could they let Chen Yi enjoy the fruits of their sacrifices alone?
Blind Chen opened his eyes!
Ye Futian looked ahead. The divine temple was incredibly grandiose and impressive. It looked like an enormous castle, stretching into the sky and shining down an infinite array of light from high up in the air.
Even though Blind Chen couldn’t see, the four cultivators’ every move appeared in his perception. An even more magnificent light erupted from him. Instantly, a dominion of light appeared and engulfed the sky. Within this dominion, the four elders squinted as if they could no longer see. Here, there was only light; it was exactly like what they had encountered inside the Divine Matrix of Light.
He opened his eyes with light!
Perhaps all the secrets lay inside the Temple of Light.
“Sword of Light.” The expressions of the four top cultivators changed. In just an instant, many of their cultivators died, all killed by Blind Chen. Several were Renhuang-level cultivators. This made the rest of them hesitate, not daring to move forward.
“Yes.” Chen Yi stepped forward and walked toward the divine palace.
They hadn’t thought Blind Chen’s prophecy would come true simply by walking through the Murderous Light Matrix. No one believed that it could be so easy to break through the murderous matrix. Perhaps it was because they knew nothing about light, but Ye Futian could see right through it.
At this moment, Blind Chen finally unleashed his breathtaking power. He was a cultivator who had overcome the Divine Tribulation of the Great Path as well. His power level was definitely on par with the four great elders.
|
OPCFW_CODE
|
WhyCanInotThinkOfAName? 2021-03-16 19:02 (Edited)
hey! I'm new. I just found this programming language and I really like it. Mainly, because you can post your own programs. i made an account and i just wanted to say hello. If you have any beginner advice on programming, i would aprecciate it :)
nathanielbabiak 2021-03-16 22:53 (Edited)
If you're new to programming in general, try to focus on "sequential program flow" (one line, one instruction, and lines just execute one-at-a-time). You can write simple programs with CLS; use these essential IO commands to test out the code itself.
After that, you can add program flow control with loops and IF THEN ELSE (I'd stay away from RETURN.) Many of the text adventure games available on this site don't need much else, actually.
Once you're comfortable there, the sky's the limit - enjoy!
All of these COMMANDS are available in the manual, here:
WhyCanInotThinkOfAName? 2021-03-16 23:50 (Edited)
i actually know some basic. so i'm familiar with print, input, dim, while-wend, for-next, cls, if-then-else, etc... but not the graphics commands. Thank you anyway!
WhyCanInotThinkOfAName? 2021-03-16 23:53
i actually like gosub just because of not having to write GLOBAL all the time lol
was8bit 2021-03-17 05:56
was8bit 2021-03-17 05:59
Graphics on lowres NX are designed in a "tile-based" format, with each tile being 8x8 pixels in size... lowres calls these CHARACTERS... think of a font set containing the images of each letter... unlike modern fonts, which are scalable, lowres graphic blocks are fixed at 8x8 in size..
was8bit 2021-03-17 06:03 (Edited)
When looking at your code, open up the editor menu at the top right... and then select GFX Designer... (try this with an existing game you have downloaded)
You are now looking at the heart of graphics in lowres... now notice the 3 tabs on the lower right, it is already selected on Graphics, then it has Palette colors, and then Background file editor... for now, lets look at the graphics...
was8bit 2021-03-17 06:09 (Edited)
You will notice at the bottom of the screen it shows "1/4"... this means you are viewing page 1 out of 4 pages of graphics... use the up/down arrow buttons to switch the pages...
Now goto page one, use your finger to touch a small graphic.... you will notice 2 things...
1) to the left of the "1/4" you will see a #001 or similar... this is the reference# of the small graphic you touched... officially this is called the Character#
2) in the upper left you will see a big version of the small graphic... this is the edit screen where you can edit by touch each pixel in the 8x8 sized character graphic...
The file buttons on the left save edits... exit w/o saving to keep the original
was8bit 2021-03-17 06:17 (Edited)
To use your graphics, you have two options....
1) place characters onto one of the two backgrounds... directly with code is..
CELL X,Y,Char#... example... CELL 3,3,1 will place character #1 into the background cell at coordinates 3,3... coordinates 0,0 are at the very top left of the screen... the screen is a grid of 20x16 cells, each cell being exactly 8x8 pixels in size...
2) create a sprite, and give it a character for its graphics.... sprites float above both backgrounds, and move at the pixel level.. with code....
SPRITE Spr#,X,Y,Char#... example: SPRITE 0,20,20,1 will assign character graphic #1 to sprite #0 at pixel location 20,20... pixel coordinate 0,0 is at the top left of the screen, and the screen is 20x8 by 16x8 pixels (160x128)
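Putting the two options above together, here is a minimal hedged sketch in LowRes NX BASIC (the character number and coordinates are just illustrative):

```basic
REM draw character #1 as a background cell at cell coords 3,3
CELL 3,3,1
REM also show character #1 as sprite #0 at pixel coords 20,20
SPRITE 0,20,20,1
```

The same character can appear in both places at once; the sprite floats above the background layers.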
was8bit 2021-03-17 06:21 (Edited)
The two backgrounds are layered... with BG 1 being below BG 0 (kind of backwards for me, but BG 0 is on top of BG 1)
All sprites are above backgrounds, normally by default (Priority settings can change that, but are tricky) ... and Sprite#0 is always layered above Sprite#1, etc. with Sprite#63 layered below all other sprites
was8bit 2021-03-17 06:24
PRINT kinda automatically takes charge of the screen, and acts like the old typewriters, automatically printing stuff on the screen...
To take charge of HOW your info is placed on the screen, there are two main ways...
1) TEXT 3,3,"HELLO" places the "H" in cell 3,3, the "E" in cell 4,3, etc...
2) NUMBER 3,3,SCORE,4 will place the value of SCORE, lets say SCORE=24..
Starting at cell 3,3... you will see 0024 .....
was8bit 2021-03-17 06:28
Do not hesitate to ask any questions... or even attach your code for help... there are plenty of people here that are happy to help :)
was8bit 2021-03-17 06:35
One more thought.... the graphic images (characters) are designed with generic colors... color #0 (clear, blank, or see thru), and colors #1,2,3...
You ASSIGN, or PAINT by number, using PALETTES... palettes #0 to #7 each let you define a set of 3 colors... and at any time you may change a palette set... so any character may be assigned any one palette set at any time...
So, while a graphic can only have 3 colors at a time, you may change which palette set to display it with...
was8bit 2021-03-17 06:37
For characters on the background...
... places char#12 at cell coordinates 3,3, and paints it with color set (palette) #1
SPRITE 0 PAL 1
Assigns char#12 to sprite#0
sprite#0 is painted with palette#1
was8bit 2021-03-17 06:40
You may reassign different characters to different (or same) places on the background cells, and reassign different characters to different (or same) sprites , and repaint any of these at anytime.... in real time during the game....
was8bit 2021-03-17 06:42 (Edited)
One recommendation for timing your games is this... use one main DO LOOP like this..
... game code....
This will set your game to lowres standard of 60 graphic frames per second...
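The code in the tip above lost its formatting; a minimal sketch of that main loop, assuming LowRes NX's standard WAIT VBL command, would be:

```basic
DO
  REM ... game code ...
  WAIT VBL
LOOP
```

WAIT VBL pauses until the next vertical blank, which is what locks the loop to 60 frames per second.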
was8bit 2021-03-17 06:53
A tip... to see a variable's value in real time w/o messing up your screen graphics, use TRACE variable_name, then when running your game, tap the upper right corner and select DEBUG mode on... now TRACE will let you see your variable value in real game time without messing up any graphics :)
WhyCanInotThinkOfAName? 2021-03-17 17:47 (Edited)
Wow! thanks for all the info! i can't believe you took the time to write that all out! thank you so much! I'll check out everything you said. by the way, is there any way to change the size of text in the output?
CubicleHead 2021-03-17 18:01
Hiiiiiiiiiiiiiiiiiiiiiii! (≧∇≦) I'm excited to see new people here :D
Idk much about programming in here, I'm used to old lowres, but I think you can do that somehow by changing the font in gfx designer? (・・?)
was8bit 2021-03-17 18:18
Glad to be helpful... and as far as using commands like TEXT and PRINT, they are restricted to a fixed 8x8 size... you can draw your own larger text and place it manually...
Try this... goto page 4 of the characters... select FILE button, then FONT button, then select NORMAL... now to save, select FILE, SAVE
Now you can actually design your own font set :) ... just be sure to save it... your new font set will work with TEXT and PRINT :D
was8bit 2021-03-17 18:25
You can even change font sets, in mid game... here is my example...
It uses fonts by desbyc
was8bit 2021-03-17 18:31
Page 4 is usually left blank, as is file #0... so NX will automatically load a default font set, secretly, so you can easily use PRINT and TEXT right away... I say secretly because NX doesn't actually "show" you the default font, it just quietly uses it...
... but as soon as you draw on page 4, you will quickly learn that this affects your text....
was8bit 2021-03-17 18:33 (Edited)
I have a big text font game, but placing them is done manually, you cannot use PRINT or TEXT with double sized fonts...
BTW, I let people "remix" anything of mine, where you can edit or add stuff and then can post and share... just add "REMIX" to the title, so everyone knows you remixed one of my stuff, but also added or edited some of your stuff....
So, if you would like to try editing a new design onto my big font set, just to experiment, go ahead and create a new font for it and post it... the game might be mine but the font would be yours, so just add "REMIX" to the title and let others know that the font is yours :)
... remixing can be a fun way to learn the language :)
|
OPCFW_CODE
|
Java 1.8 ASM ClassReader failed to parse class file - probably due to a new Java class file version that isn't supported yet
My web application runs fine on JDK 1.7 but crashes on 1.8 with the following exception (during application server startup with Jetty 8). I am using Spring version: 3.2.5.RELEASE.
Exception:
org.springframework.core.NestedIOException: ASM ClassReader failed to parse class file - probably due to a new Java class file version that isn't supported yet
I assume that problem occurs because of spring and "asm.jar" library on which it depends.
How do I resolve this?
Are you compiling your webapp as Java 8 or Java 7? If 8, it should be possible to compile your classes targeting Java 7 but still run it under Java 8.
If you want to target Java8 you'll need Spring 4
It compiles to 1.7 but there is no support for Java 8 features, so using JDK 8 in this case doesn't make sense.
As @prunge and @Pablo Lozano stated, you need Spring 4 if you want compile code to Java 8 (--target 1.8), but you can still run apps on Java 8 compiled to Java 7 if you run on Spring 3.2.X.
Check out
http://docs.spring.io/spring/docs/current/spring-framework-reference/html/new-in-4.0.html
Note that the Java 8 bytecode level (-target 1.8, as required by -source 1.8) is only fully supported as of Spring Framework 4.0. In particular, Spring 3.2 based applications need to be compiled with a maximum of Java 7 as the target, even if they happen to be deployed onto a Java 8 runtime. Please upgrade to Spring 4 for Java 8 based applications.
It worked, thanks! For some reason i've missed spring 4 release :-)
This happens to me too, even though the code is still compiled to target 1.7, I only changed the runtime to be java 8. Any ideas?
See ItayK's answer; there is a bug in Spring 3.2.8 and below that won't use the right ASM version. It's fixed in 3.2.9.
There was another bug fixed in 3.2.10, so I'd recommend going with 3.2.16 or whatever is the latest. Here are the key Spring bugs that were fixed: "Metadata reading should never use ASM for java.* and javax.* types (in particular on JDK 8)" and "Java 8: ASM5 visitors required for parsing INVOKESPECIAL/STATIC on interfaces".
Thanks, it worked by keeping

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.7</source>
        <target>1.7</target>
      </configuration>
    </plugin>

instead of

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.8</source>
        <target>1.8</target>
      </configuration>
    </plugin>
If you encounter this error even if you compile with -target 1.7, please note that this is because of a bug in Spring Framework which causes ASM classreader to load jdk classes (java.* or javax.*), which are, of course, compiled with -target 1.8.
This, combined with the old ASM version in spring 3.2.8 and below, which does not support parsing of 1.8 class files, can also lead to this error.
More info about the issue can be found here: https://jira.spring.io/browse/SPR-11719
This should be fixed in Spring Framework version 3.2.9, which is due to be released soon.
Of course, upgrading to Spring Framework 4 will also resolve the issue, as it already contains a newer version of ASM.
However, if for some reason you can't upgrade to version 4 yet, it's good to know there's an alternative (soon).
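For Maven users, the upgrade is just a version bump once 3.2.9 is available. A hedged sketch using the standard Spring coordinates (your project may depend on different Spring artifacts, and every Spring artifact's version must be bumped together):

```xml
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-context</artifactId>
  <version>3.2.9.RELEASE</version>
</dependency>
```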
Upgrading to Spring 3.2.9 helped me.
This should be the accepted answer because it clearly explains what the issue is
Had this problem with Spring 3.2.5, changed to 3.2.9 and problem solved. Perfect answer.
Phew, glad I didn't need to upgrade to Spring 4. Great Answer!
I had the same problem,
1.Go to:
maven -> executive maven goal -> mvn clean
It helps :)
2.Invalid caches..
That won't work if you are using the wrong version of Spring.
I had the same problem and solved it. I am using Spring 3.x with Java 8. If the above solutions are not working, check whether your jars are compatible with the Java version you are using. Spring 3.x is not compatible with Java 8.
Basically ... this is what the above answers are saying. So this answer is duplicative.
This problem might be because of a wrong choice of execution environment. I fixed it by changing the JRE to Java SE 1.8, which is the Java version installed.
Project >> Right click >> Properties >> Java Build Path >> Libraries >> Double-click JRE System Library >> set Execution Environment to the Java version installed.
Spring 4 can be used for java 8 to resolve this issue. I just tested it and it works.
This issue is fixed since Spring 3.2.9-RELEASE version.
Duplicative answer. See https://stackoverflow.com/a/23352356/139985
If you use Java 8 or a later version, you need to upgrade your Spring version; the Spring version should be 4.x.
Duplicative answer. See https://stackoverflow.com/a/23352356/139985
|
STACK_EXCHANGE
|
MD Resubmits from 4/23 and 4/24
We had a series of delivery failures to MD over the weekend.
The following failed to their secondary feed:
0c1c7213-ccf7-4a73-9854-072ba282e85f
f869d495-fb2c-47e1-88b6-d1f7fd0b82a6
a6fdafcd-e4a2-4bd5-8172-858849cb8d6f
The following failed to their primary feed:
40e1a20d-afb9-4dff-968f-bc0a551921bb
7b91b1de-90b1-4413-acf5-28873d44dbcc
b2a7f072-0c11-4b0a-adb3-b442176fdb0b
adb5241b-8ea9-4694-9237-3cfb0e60dd20
7cef17ce-84e0-4814-bd2f-0c9bfeeacef3
509205a4-b91c-4a8c-8da6-6c2ab142b04e
51dc782f-92cb-4a63-bb54-4441affc535d
015ac80b-48bd-4fe1-aa51-51bacb277be5
2aaebee6-5215-4da2-8806-8e81ac57af96
We need to reach out to MD and verify that they didn't receive the reports, and that they want them resubmitted.
@Adrian-Brewster @anshulkumar-usds update: All files have been resubmitted to MD. (05/16/2022)
Primary feed:
covid-19-7b91b1de-90b1-4413-acf5-28873d44dbcc-20220423211501
covid-19-40e1a20d-afb9-4dff-968f-bc0a551921bb-20220423191501
covid-19-b2a7f072-0c11-4b0a-adb3-b442176fdb0b-20220423231513
Secondary feed:
covid-19-20220423191502-0c1c7213-ccf7-4a73-9854-072ba282e85f-secondary
covid-19-20220423211502-f869d495-fb2c-47e1-88b6-d1f7fd0b82a6-secondary
covid-19-20220423231514-a6fdafcd-e4a2-4bd5-8172-858849cb8d6f-secondary
covid-19-20220424011531-adb5241b-8ea9-4694-9237-3cfb0e60dd20-secondary
covid-19-20220424031551-7cef17ce-84e0-4814-bd2f-0c9bfeeacef3-secondary
covid-19-20220424051544-509205a4-b91c-4a8c-8da6-6c2ab142b04e-secondary
covid-19-20220424071500-51dc782f-92cb-4a63-bb54-4441affc535d-secondary
covid-19-20220424091500-015ac80b-48bd-4fe1-aa51-51bacb277be5-secondary
covid-19-20220424111539-2aaebee6-5215-4da2-8806-8e81ac57af96-secondary
|
GITHUB_ARCHIVE
|
The Power of Copy and Paste
A reasonably large "Enterprise" Java web application ends up being multi-tiered, multi-layered, and multi-moduled, in that order, with at least a few million dependencies. The task of putting together such an application from scratch is truly daunting.
So how do you go about creating an application from scratch? I have a few favorite techniques that I have faithfully followed over the years:
- Copy an existing working application; start trimming it until I get to the point where I have an "archetype" that I can start back from.
- Start from one of the archetypes provided by maven. Most of the standard maven archetypes (obtained using mvn archetype:generate) are very lightweight, but could be a reasonable starting point.
- Another variation of the above 2 points is to generate your own archetypes – strip down an existing application and create a maven archetype for yourself.
- Use Appfuse as a starter application. Appfuse, though, is starting to get a little dated now, with no updates in the last year.
- Use the sample applications provided by the Spring folks as a starting point – say, the spring-mvc-showcase application.
All of these approaches follow the common theme of “Copy and Paste” to create an application.
Now, copy and paste helps you kick-start an application, once you have started an application how do you keep adding functionality, after all most of the code structure would look similar – In Java you would add a new entity, a corresponding DAO to manage the persistence of the entity, a service tier, a controller, a view and so on.
The Rails folks did a wonderful job understanding that providing a simple way to kick-start, add functionality and pull in dependencies, is the quickest way to developer nirvana. Well, people who work with Rails are lucky, we Java guys don’t have that option, until now that is (Yes there is Grails, Lift, but these have not been an option in the enterprise shops that I have worked in).
Roo is turning out to be a wonderful alternative. With a few commands, you have a fairly green-looking application up and running with all dependencies nicely pulled in. Adding functionality is equally easy; the round-trip experience from creating an entity to viewing the entity in a web page almost has the feel of Rails-based development. Well, Roo is still maturing, so it is not an option where I work yet; however, that still leaves "Copy and Paste" as a viable alternative for now! A simple way that I follow to add functionality to an application I am working on is to simply start the Roo console, create the entities in Roo, run the push-in refactor, and then creatively "copy and paste".
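To make that Roo round trip concrete, here is a hedged sketch of a typical Roo shell session. The command names are from Roo 1.2-era documentation (older releases used `persistence setup` instead of `jpa setup`), and the package, entity, and field names are made up for illustration:

```
project --topLevelPackage com.example.notes
jpa setup --provider HIBERNATE --database HYPERSONIC_IN_MEMORY
entity jpa --class ~.domain.Note
field string --fieldName title
web mvc setup
web mvc all --package ~.web
```

Six lines, and Roo generates the entity, repository plumbing, controller, and views that you would otherwise copy and paste by hand.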
"Copy and Paste" the most under-rated way of expanding code.
Opinions expressed by DZone contributors are their own.
|
OPCFW_CODE
|
[Linux] XBMCbuntu desktop is empty - Printable Version
+- XBMC Community Forum (http://forum.xbmc.org)
+-- Forum: Help and Support (/forumdisplay.php?fid=33)
+--- Forum: XBMC General Help and Support (/forumdisplay.php?fid=111)
+---- Forum: Linux and Live support (/forumdisplay.php?fid=52)
+---- Thread: [Linux] XBMCbuntu desktop is empty (/showthread.php?tid=138670)
XBMCbuntu desktop is empty - evanb - 2012-08-20 09:33
I just installed XBMCbuntu 11.0 on my Acer Revo RL100 U20P (dual-boot with Win7), configuring it for automatic login. First time I booted XBMC came up as expected. Once I discovered that "Exit" in the menu under XBMC's power-icon button got me to a login screen, I switched over to XBMCbuntu and found...nothing. The desktop is completely blank; no icons, no panels. I can right-click on the desktop to get a very short menu that offers a terminal window, a small configuration dialog for selecting the desktop font (which doesn't appear to change anything) and a couple of other choices like 'reconfigure' and 'exit'. All choices but the terminal and configuration dialog serve only to make whatever displays the menu stop working altogether. Worse, now that I've shut down from XBMCbuntu, that's where my automatic login takes me every time.
Is a blank desktop the intended default condition on the Ubuntu side?
Can someone suggest some packages to install to get a panel with at least a logout button?
RE: XBMCbuntu desktop is empty - neil.j1983 - 2012-08-20 10:56
are you sure you chose "XBMCBuntu" and not "openbox"?
xbmcbuntu looks like http://www.byopvr.com/wp-content/uploads/2012/05/xbmcbuntu.png
RE: XBMCbuntu desktop is empty - flonews - 2012-08-20 16:34
I got the same thing but finally figured out the trick after 2 reboots... It doesn't come from XBMCBuntu but from the display screen (my Samsung TV). The TV was set by default to the 16:9 mode and cropped pixels all around the picture (a kind of zoom). So I got only the blue background screen with the logo without seeing any menu. Switching the TV to scan mode restores the missing pixels on the display.
RE: XBMCbuntu desktop is empty - evanb - 2012-08-20 21:02
neil.j1983, it's funny you say that, because one of the messages from one of the menu selections said something about openbox. (The text is too small to read, which is the problem I'm trying to solve.) What I see looks just like the image at your link, though, minus the panel on the bottom.
flonews, I suspect yours is the right answer. I recall now that I had a similar problem with Win7, but its task bar is thick enough that part of it was still visible on the edge of the screen. I fished around off the edges of XBMCbuntu's desktop but did not find anything interesting. With neil.j1983's screen shot as a guide I'll have another go tonight.
Thanks for your help.
RE: XBMCbuntu desktop is empty - neil.j1983 - 2012-08-21 10:58
if the text is too small you need to change your DPI in your xorg.conf.
RE: XBMCbuntu desktop is empty - evanb - 2012-08-21 20:49
The missing panels were, of course, there, but off the screen. Thanks again, flonews, for pointing that out. I also managed to locate the Openbox configuration dialog and set larger fonts, but that seems to affect only the fonts used by the desktop context menu itself; the start menu and other dialogs continue to use unreadably small fonts. neil.j1983, thanks for the additional information. My HTPC has an Nvidia card, so the link is especially pertinent.
As I was sitting on the floor in front of my TV with a keyboard in my lap and a mouse an awkward reach away, it occurred to me that I'd be better off setting up remote access on the HTPC so I can sit at my regular computer in a comfortable chair when I need to do configuration or other computer-like things to it. Once I get a minimally-functional desktop configured to work with the TV I'll install xrdp and likely never visit it again.
RE: XBMCbuntu desktop is empty - flonews - 2012-08-22 10:58
Very happy to help you evanb !
On the XBMCBuntu desktop, I have the opposite trouble: the fonts are too big ! So I will follow the link of neil.j1983.
|
OPCFW_CODE
|
Can ommiting a rebase onto master before merging MR with squash lead to any problems?
We tend to rebase every branch after every MR is merged. It leads to a lot of unnecessary pipelines and so on. But we squash every MR into one commit.
So given there are no conflicts in merge requests, can omitting rebase onto master before merge lead to any problems?
What kind of problems are you worried about? If you ask me, you already have a problem: in my book, squash merges are not the most clever thing to do. Squashing feature branches erases valuable history and the explanation of how the project evolved.
No. Rebase + squash leads to the same result as just squash.
I am not worried, my colleague is forcing that upon me. We have clear guidelines for feature branches to be small enough (and tasks themselves) to fit in one commit with JIRA task number and description
This is a great question. You have 2 processes that pretty much conflict with each other, and understanding the nuances of why is interesting. (IMHO...)
You can probably skip the pre-merge rebase.
I can think of 3 reasons one might rebase the source branch onto the target branch before completing a MR (or PR):
There are conflicts and you must update the branch somehow. (Not applicable in your case.)
If you complete the PR with merge --no-ff then the resulting history would be cleaner, resulting in nice little merge bubbles. (Since you squash, this also is not applicable in your case.)
You wish to test the latest and greatest code before merging into the target. Even though it's generally rare for bugs to sneak in without merge conflicts, it's still possible, especially when multiple people are working on the same portions of the code. If you have good unit test coverage perhaps it's quick and easy to re-run them against the latest code prior to completing the MR, just in case.
Obviously #1 and #2 aren't applicable in your case, but if I were in your shoes, and I had good test coverage, #3 would probably be a good enough reason for me to blindly rebase and re-run the tests just in case. If I didn't have good coverage and didn't have an easy way to retest everything, I'd probably take a quick peek and see if I thought it was needed, and if not, I'd skip it.
Side Notes:
Since you're squashing, for both #1 and #3 you can accomplish the same thing by merging instead of rebasing. Choose whichever is easier for you. For example, if you're resolving conflicts and you have many commits, merging is oftentimes faster than rebase since it's one and done, compared to the worst case scenario of resolving conflicts for each commit during a rebase.
"But we squash every MR into one commit." Just a word of caution here. It's perfectly fine to squash feature branches into a target branch, but make sure you don't squash shared branches into each other. That just causes a mess for everyone. If you ever create MR's to merge one shared branch into another, only use a regular merge there.
While squashing may be convenient for minor changes, and for those who make many WIP commits without fixing them up, IMHO when a developer goes out of their way to make multiple good, meaningful commits, then I would prefer having all of those commits in the permanent history, since that's what the developer wanted.
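As a sanity check of the earlier claim that, absent conflicts, a squash merge produces the same result with or without a preceding rebase, here is a hedged sketch that builds a throwaway repo and compares the resulting trees (the file names, branch names, and commit messages are made up):

```shell
#!/bin/sh
# Demonstrate: squash-merging a feature branch yields the same tree
# whether or not the branch was rebased onto master first (no conflicts).
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git symbolic-ref HEAD refs/heads/master   # pin branch name regardless of git's default
git config user.email demo@example.com
git config user.name demo
echo base > base.txt && git add base.txt && git commit -qm "base"

git checkout -qb feature
echo feature > feature.txt && git add feature.txt && git commit -qm "feature work"

git checkout -q master
echo more > more.txt && git add more.txt && git commit -qm "master moved on"

# 1) squash-merge WITHOUT rebasing the feature branch first
git merge --squash -q feature
git commit -qm "squashed feature"
tree_no_rebase=$(git rev-parse 'HEAD^{tree}')

# undo, then 2) rebase feature onto master and squash-merge again
git reset -q --hard HEAD~1
git checkout -q feature
git rebase -q master
git checkout -q master
git merge --squash -q feature
git commit -qm "squashed feature (after rebase)"
tree_rebase=$(git rev-parse 'HEAD^{tree}')

test "$tree_no_rebase" = "$tree_rebase" && echo "identical trees"
```

Because the squash commit is a snapshot of the merged content, not of the branch's history, rebasing beforehand changes nothing about the final tree.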
|
STACK_EXCHANGE
|
Keyboard shortcut to run queries
Hi, is there a keyboard shortcut to run all the queries from the keyboard? Otherwise every time I have to click on 'run on active connection'.
If you look at the extension's page within VSCode (i.e. run the command "View: Show Extensions" to get the extensions list in the left-hand sidebar, and then find SQLTools in there), there's a "Feature Contributions" tab on that page, which shows all of the settings and commands for the extension (all extensions have this, it's a very handy reference).
Under "Commands" there are several relating to running queries, although they have somewhat inconsistent naming, and the descriptions aren't brilliant.
Right now, I have:
Name                           Description
sqltools.runFromBookmarks      Run
sqltools.runFromHistory        Run
sqltools.executeCurrentQuery   Run Current Query
sqltools.executeQuery          Run Selected Query
sqltools.executeQueryFromFile  Run This File
sqltools.executeFromInput      Run Query
So you should be able to run any of those via the command menu. I find "Run This File" and "Run Current Query" most useful — the former runs all the queries in the current file, and the latter runs the query your cursor is within.
You can set up keyboard shortcuts for all of this too, of course. I assigned ⌘E ⌘F to "Run This File" and I reassigned ⌘E ⌘E from "Run Selected Query" (which I don't find useful as there's no quick keyboard-friendly way to select a query) to "Run Current Query". You can do those tweaks via the Keyboard Shortcuts editor, but I just dropped this directly into my keybindings.json for the same effect:
{
"key": "cmd+e cmd+f",
"command": "sqltools.executeQueryFromFile",
"when": "editorTextFocus"
},
{
"key": "cmd+e cmd+e",
"command": "-sqltools.executeQuery",
"when": "editorTextFocus"
},
{
"key": "cmd+e cmd+e",
"command": "sqltools.executeCurrentQuery",
"when": "editorTextFocus"
},
Since you're asking
is there a keyboard shortcut to run all the queries from keyboard
I'd hope that either of those two would meet your need there.
Hope this helps!
@gimbo thanks for contributing that information.
What is the difference between these?
Run Query
Run Current Query
I understand that Run Selected Query should run what ever text is highlighted in the editor as SQL, Run This File runs the complete file but the 2 above I don't know the difference.
Here's my shortcuts configuration:
{
"key": "cmd+r",
"command": "sqltools.executeQueryFromFile",
"when": "editorTextFocus && editorLangId == 'sql'",
},
{
"key": "cmd+r",
"command": "sqltools.executeQuery",
"when": "editorTextFocus && editorLangId == 'sql' && editorHasSelection == true",
},
{
"key": "cmd+shift+r",
"command": "sqltools.executeCurrentQuery",
"when": "editorTextFocus && editorLangId == 'sql'",
}
Pressing CMD + R will execute all the queries in the current file.
Pressing CMD + R when having some text selected will execute only the selected text.
Pressing CMD + Shift + R executes the current nearest single query.
This configuration is similar to how the Navicat Database tool query editor works.
I still don't understand how I can insert this into the settings JSON.
This belongs in keybindings.json
Please use the advice at https://code.visualstudio.com/docs/getstarted/keybindings
|
GITHUB_ARCHIVE
|
Here are 6,989 public repositories matching this topic "webdevelopment"
Repository Created on August 4, 2022, 9:52 am
List of awesome CSS frameworks. With repository stars⭐ and forks🍴
Last updated on September 29, 2023, 9:11 am
Repository Created on February 12, 2022, 11:13 pm
A series of exquisite and compact web page cool effects. With repository stars⭐ and forks🍴
Last updated on November 26, 2023, 7:35 am
Repository Created on December 4, 2023, 4:27 am
🚀 GoalCraft: Achieve Your Dreams with React & TypeScript. GoalCraft is a lightweight project that combines the power of React and TypeScript to help you effortlessly set and manage your goals. With a clean interface and secure local storage, GoalCraft is your go-to solution for turning aspirations into achievements. Start your journey today!
Last updated on December 4, 2023, 4:37 am
Repository Created on September 15, 2023, 11:02 am
The collection of web development projects.
Last updated on September 21, 2023, 10:51 am
Repository Created on June 30, 2020, 4:07 am
Platform to build admin panels, internal tools, and dashboards. Integrates with 15+ databases and any API.
Last updated on December 4, 2023, 4:35 am
Repository Created on February 6, 2022, 11:43 am
css awesome postcss tailwind tailwindcss tailwind-css tailwindcss-plugin framework css-framework awesome-list
😎 Awesome things related to Tailwind CSS. With repository stars⭐ and forks🍴
Last updated on September 8, 2023, 6:31 pm
Repository Created on August 16, 2023, 8:27 pm
A React-based forum tailored for developers to connect and collaborate. 🌙
Last updated on October 26, 2023, 3:50 pm
Repository Created on February 1, 2022, 12:20 pm
list awesome awesome-list webperf webperformance wpo web web-application web-development webdevelopment
📝 A curated list of Web Performance Optimization. Everyone can contribute here! With repository stars⭐ and forks🍴
Last updated on September 8, 2023, 6:31 pm
Repository Created on November 25, 2023, 11:04 pm
A productivity tracker app that Rukaiah, Shaad, and Shevan are developing as part of the TechWise Program web development course, supported by Google and provided by TalentSprint.
Last updated on November 30, 2023, 11:49 pm
Repository Created on December 21, 2020, 10:06 pm
unit-testing python functional-programming codility-solutions linear-algebra algorithms-and-data-structures oop sql seo bash-scripting
A collection of coding scripts, notes, and mini-projects with reference to a series of Data Science, Web Development, programming concepts and foundations, and miscellaneous tech topics.
Last updated on November 15, 2023, 4:04 am
Repository Created on March 24, 2023, 8:33 pm
automation chrome programming-language secure-by-default secure-coding injection injection-attacks mitigation ssrf data-structures
🛡️ The Inox programming language is your shield against complexity.
Last updated on December 3, 2023, 10:54 pm
Repository Created on July 25, 2018, 4:41 am
book store onlinebookstore bookstore bookstorejavaproject java-project online-book-store webdevelopment jdbc book-shopping
The Online Book Shopping Store to manage, buy, add, remove and sell books. Book name and Quantity selection, auto receipt generated and payment options. Login and logout security for both user and admin. Seperate Profile for all.
Last updated on December 4, 2023, 5:26 am
Repository Created on June 19, 2022, 2:40 am
Last updated on December 3, 2023, 10:02 pm
Repository Created on September 24, 2023, 2:57 pm
A template for building web applications in Golang using the Fiber web framework. Kickstart your web development project with this pre-configured template.
Last updated on October 27, 2023, 7:30 pm
Repository Created on October 24, 2023, 4:01 am
A React-based web application that allows users to search for and explore movie information, including details about the cast, and reviews. Built with React and integrated with an external movie information API.
Last updated on November 1, 2023, 11:21 pm
Repository Created on February 19, 2022, 7:17 pm
webdevelopment java standalone-server servlet csrf html-templates language-manager mailing password-security https
autumo beetRoot - A slim & rapid Java web-dev framework
Last updated on February 28, 2023, 5:09 pm
Repository Created on July 31, 2023, 11:39 am
This repository is a compilation of various web development projects and assignments from SuperSimpleDev HTML CSS course.
Last updated on December 3, 2023, 3:55 pm
Repository Created on December 1, 2023, 6:46 am
Check out the source code to learn how to create a news website in React with multiple pages, reusable components, loading and infinite scrolling. Remember that reading others' code can lift your web development game to the next level.
Last updated on December 3, 2023, 1:47 pm
|
OPCFW_CODE
|
M: There Was a Time before Mathematica - nswanberg
http://blog.stephenwolfram.com/2013/06/there-was-a-time-before-mathematica/
R: IvyMike
Hey, it's time for me to tell another third-hand half-remembered unsourced
story, just because it's halfway relevant. How's that for a disclaimer?
Anyways, on to the story. I was an undergrad at UIUC in the early 90's, and
the development of Mathematica had created a deep rift in the mathematics
department.
There was resentment because there had been collaboration between those
working _for_ Mathematica, and other professors in the math department, who
were just working on "interesting problems" presented to them by the inner
circle guys.
Of course when Mathematica was released, and money and options were being
given to the inner circle guys, the outer circle guys had, let's say, hurt
feelings.
R: programnature
Half-remembered unsourced negative gossip is what great HN commentary is made
of these days?
R: pfedor
If you don't have a connection to the world of physicists you may not realize
how huge Mathematica is. I have a friend for whom it is the environment of
choice for pretty much everything. Like, when he needs to do some image
manipulation, he doesn't reach for ImageMagick, he does it in Mathematica. If
he needed to set up a web server he'd probably do it in Mathematica too.
People say they can't stand Wolfram's lack of humility. Just get over it for
your own benefit. People have flaws, that's life. If you get so enraged over
his style that you can't listen to what he says, then you won't hear what he
has to say which happens to be a lot of interesting things.
R: bitwize
I was kind of expecting a piece about how stone-knives-and-bearskins
mathematics was before The Wolfram single-handedly brought fire down from
Olympus.
This wasn't that far off, though, so full credit, Wolfy.
R: psychometry
I wonder whether he has ever written a blogpost that doesn't mention receiving
his PhD at 20. I find his self-aggrandizing intolerable.
R: snprbob86
Mathematica is absolutely incredible and everyone who calls themselves a
programmer should own a copy, learn how to use it, and internalize its
philosophy.
However, I too find his self-aggrandizing intolerable.
Sentences like "Looking back at its documentation, SMP was quite an impressive
system, especially given that I was only 20 years old when I started designing
it." Just make me dislike him. Was his age really necessary there? He already
mentioned his age a few paragraphs up in a sentence that was far less
objectionable.
R: davorak
> everyone who calls themselves a programmer should own a copy
I use Mathematica regularly, but do not have quite as strong an opinion as that.
Would you mind sharing your top three reasons or thereabouts?
R: snprbob86
I wrote a little bit about this on HN before:
[https://news.ycombinator.com/item?id=4844502](https://news.ycombinator.com/item?id=4844502)
In short:
1) Term rewriting systems are a beautiful and powerful model of computation
that a lot of people know nothing about.
2) The "everything is data" philosophy is life changing. This same philosophy
can be seen in the Clojure community (there are more than a few Mathematica-
isms that Rich has admitted being influenced by). Mathematica goes further to
say that all data is expressions, which is really a subpoint of #1, but I
think that data is the more fundamental important idea than expressions. Even
though expressions have extremely wide applicability.
3) Having some mastery over the basics of Mathematica is like having a bunch
of secret programming super powers. One time, I came across an exceedingly
complex if/and/or/else clusterfuck and reduced it to a trivial truth table in
only a few minutes of fiddling with Mathematica. There are lots of cases where
experimenting in Mathematica was just a much faster way to understanding and
solving a problem prior to implementation.
R: davorak
Thanks for the response. I have not found it to be well matched for all of my
tasks, but it is definitely well set up for certain types of programming tasks;
it has several high-level abstractions and a diverse set of well-documented
libraries.
R: snprbob86
I have never tried to write a script or algorithm or anything in Mathematica.
It's not useful for that (at least to me). I use it more to explore and to
understand problems.
That's why I mentioned truth tables. Being able to quickly perform symbolic
simplifications is awesome. If nothing else, learn how to do that!
R: mfonda
Very interesting read. I especially like the following quote:
_I figured if I couldn't explain something clearly in documentation, nobody
was ever going to understand it, and it probably wasn't designed right. And
once something was in the documentation, we knew both what to implement, and
why we were doing it._
I think this a great practice to follow. I often find it very helpful to write
documentation before writing code. I find I end up with a better designed
system this way, and as an added bonus it has great documentation too.
R: nswanberg
Wolfram's list of language design mistakes in SMP is interesting, particularly
how he dropped SMP's symbolic indexing from Mathematica, but still kept echoes
of the idea in function definitions:
[http://reference.wolfram.com/mathematica/tutorial/MakingDefi...](http://reference.wolfram.com/mathematica/tutorial/MakingDefinitionsForFunctions.html)
Also interesting is the little decryption challenge (turns out he stored his
copy of SMP in an encrypted form and can't find the key).
R: LowKarmaAccount
> A big early decision was what language SMP should be written in. Macsyma was
> written in LISP, and lots of people said LISP was the only possibility. But
> a young physics graduate student named Rob Pike convinced me that C was the
> "language of the future", and the right choice. (Rob went on to do all sorts
> of things, like invent the Go language.) And so it was that early in 1980,
> the first lines of C code for SMP were written.
So Wolfram appears to be saying that C, the "language of the future", was much
better than that old Lispy stuff he had been using. But then he goes on to
admit that he spent lots of time reinventing features that are obvious in
Lisp:
> It got even weirder when one started dealing with multi-argument functions.
It was quite nice that one could define a matrix with m:{{a,b},{c,d}}, then
m[1] would be {a,b}, and either m[1,1] or m[1][1] would be a. But what if one
had a function with several arguments? Would f[x, y] be the same as f[x][y]?
Well, sometimes one wanted it that way, and sometimes not. So I had to come up
with a property ("attribute" in Mathematica) - that I called Tier - to say for
every function which way it should work. (Today more people might have heard
of "currying", but in those days this kind of distinction was really obscure.)
Really obscure? To who? C programmers?
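For readers who haven't met currying, the f[x, y] vs f[x][y] distinction Wolfram describes maps directly onto modern languages. A quick illustrative sketch in TypeScript (the function names are mine, not from the post):

```typescript
// Uncurried: takes both arguments at once, like f[x, y].
const f = (x: number, y: number): number => x * 10 + y;

// Curried: takes one argument at a time, like f[x][y].
const fCurried = (x: number) => (y: number): number => x * 10 + y;

// Both compute the same value, but fCurried(1) is itself a usable
// function (a partial application), which is why SMP needed a
// per-function "Tier" attribute to say which behavior was intended.
f(1, 2);        // 12
fCurried(1)(2); // 12
```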
R: m0nastic
I know that people complain about Wolfram when his posts show up here (which I
understand, even if I've always found his bizarre combination of eccentricity
and stupendous ego endearing), but I thought this was super interesting.
Surprisingly, I think he only mentions NKS once in this whole entry, and it
includes a nice shout-out to Rob Pike.
R: e3pi
I've always avoided Mathematica. I use PARI/GP, Maxima, Axiom, and Perl's BigNum.
Mathematica, Macsyma, Symbolics: I've always felt they monetized what should have
been (or remained) open source, GNU(etc.)-licensed. Thank you again, Mr Stallman.
R: sliverstorm
Why should it have been open source? Is quality software not worth paying for?
If you are thinking something along the lines of the "purity" of mathematics,
and how mathematics "want to be free"- mathematics _is_ free, as evidenced by
your use of BigNum, or even pencil-and-paper. Mathematica is just one of the
many calculators, and people have happily paid money for calculators for many
years.
R: mixmastamyk
Fsf proponents are not concerned with price, rather with freedom.
R: tzs
There are many kinds of freedom.
If a proprietary tool lets me get my work done faster than a free tool would,
then using the free tool takes away some of one kind of freedom--namely the
freedom to spend my time doing things I want to do instead of things I have to
do to pay the bills.
R: nwhitehead
Another fun read is the listing of original developers of Mathematica in the
Addison-Wesley "Mathematica" book and what they worked on.
[http://omohundro.files.wordpress.com/2009/03/wolfram88_mathe...](http://omohundro.files.wordpress.com/2009/03/wolfram88_mathematica_developers.pdf)
R: rhodin
"The plot [on the front page] took about three minutes to produce on a Sun
3/260 computer."
Plot3D[Abs[Zeta[x + I y]], {x, -2, 6}, {y, 2, 35}] takes 0.213483 seconds on my
laptop :)
R: watershawl
I really liked this quote, "I was trying to find the elementary components of
computation...try to pack the largest capability into the smallest number of
primitives." It really captures his life's thesis.
He should be praised not only for his accomplishments, but his commitment and
grit. It's not easy to stick with something for so long like he has and
continue to improve it over time.
R: hudibras
The "Algebra will never be the same again" ad is awesome. It's like an
artist's conception of a mathematician. Or alternately, it looks like that
black-and-white freeze-frame in every infomercial right before they introduce
the product that solves the problem.
R: omra
Another very interesting part of the article is the call for the decryption of
the SMP source code:
[http://blog.stephenwolfram.com/2013/06/there-was-a-time-befo...](http://blog.stephenwolfram.com/2013/06/there-was-a-time-before-mathematica/#runSMP)
R: gcb0
> "Even in my early designs, SMP was a big system. [...] just wanted to go ahead
> and implement it. [...] and bought every book I could find on computer science
> - the whole half shelf of them. And proceeded to read them all. I was working
> at Caltech back then. And I invited everyone I could find [...] put together a
> little "working group""
How on earth do people find the time to do those things while "working"?
What do they mean by work?
R: codemac
Is anyone else really impressed with his personal archive of documents, or
scans of them?
I've gotta get rid of my drobo FS and upgrade to a serious colo or something.
R: glhaynes
He's got a personal assistant that follows him around everywhere - and perhaps
more who don't. Must be nice!
|
HACKER_NEWS
|
How can I describe someone who feels little or no emotion?
I don't mean someone who lacks emotion because they "don't care", but because either they can't feel emotion or the emotional response is delayed because of a genetic disposition.
Maybe there is an appropriate medical term that could be used.
The word stoic is recommended in a similar question, for example, but that question and its answers relate to an individual's ability to endure/tolerate a situation or simply ignore their emotions.
The answer is simple: "He's a real Spock." Of course that could also mean he is good with children, but, hey, language is ambiguous.
They're often, but not always, sociopaths.
The title and body ask subtly different questions. It is possible for a person to be able to feel emotion without being able to show it.
@jimreed Thanks for pointing that out, I think I fixed it now. I think it's possible for a person to hide/mask their emotions, based on what is described under blunted affect. I am looking for help in describing a person who can't feel emotions, similar to anhedonia (a type of inability).
"My mother? Let me tell you about my mother."
Blunted affect may be the noun, but if you're looking for an adjective to describe someone like that the term is affectless.
affectless : showing or expressing no emotion; also : unfeeling
This is the closest I think so far, because of the keyword unfeeling, which could indicate the person has no control of, or lacks, an emotional response. They are therefore, as an average person might assume, unaffected.
The medical term is blunted affect. A more extreme case is called a flat affect.
From Wikipedia:
Blunted affect is the scientific term describing a lack of emotional reactivity on the part of an individual. It is manifest as a failure to express feelings either verbally or non-verbally, even when talking about issues that would normally be expected to engage the emotions. Expressive gestures are rare and there is little animation in facial expression or in vocal inflection
Thanks for the prompt reply but I'm having difficulty using that in a sentence. I think something more specific would be helpful because this one seems to describe an overall type of phenomenon. I wonder if there is a name for the type of behaviour that I am talking about that doctors or psychologists would use?
Reminds me of a line in Shawshank Redemption.
You strike me as a particularly icy and remorseless man, Mr. Dufresne.
"Cold" (or more poetically, "Icy", as used in the movie) can mean you are not easily affected emotionally, and do not show emotions.
If you're looking for a term to use in everyday conversation (your request for an appropriate medical term aside), it's common to describe someone as being emotionally detached.
Your answer is definitely close. The word detached could also describe how the person is simply ignorant of another person's plight, for example. I think it could mislead the reader if not used with more description.
Possible synonyms:
reserved / suppressed
restrained / self-restrained / self-contained
discreet
overinhibited
dazed
Are you thinking of Asperger's?
Asperger syndrome or Asperger's syndrome or Asperger disorder is an autism spectrum disorder that is characterized by significant difficulties in social interaction, along with restricted and repetitive patterns of behavior and interests
For example, Dustin Hoffman's character in Rainman is supposed to have a disorder on the autism spectrum.
How about nonchalant, indifferent, stoic, expressionless or unconcerned? I am one of those people you describe, and I find these words often describe me quite accurately.
Related to the medical term "blunted affect"
Dysthymia: a mood disorder that has a number of typical characteristics: low energy and drive, low self-esteem, and a low capacity for pleasure in everyday life [...] They will usually find little pleasure in usual activities and pastimes
Alexithymia
is a personality construct characterized by the sub-clinical inability to identify and describe emotions in the self. The core characteristics of alexithymia are marked dysfunction in emotional awareness, social attachment, and interpersonal relating. Furthermore, individuals suffering from alexithymia also have difficulty in distinguishing and appreciating the emotions of others, which is thought to lead to unempathic and ineffective emotional responding
Adjectives to describe someone who is emotionally shallow and lacking in empathy
Insensitive: lacking feeling or tact ("so insensitive as to laugh at someone in pain")
Non-empathic: incapable of recognizing emotions that are being experienced by another person.
Emotionless: showing, having, or expressing no emotion.
Callous: feeling no emotion; feeling or showing no sympathy for others.
Asperger's is certainly a good medical diagnosis of the described behaviour. In the vulgate, it could perhaps be "aloof." Pretentiously, you could use "unclubbable."
Robotic. Zero ability to show emotional highs or lows. Not able to understand or meet the emotional needs of others.
In extreme cases, the word you want could be "sociopath" or "psychopath". Despite fictional portrayals of "psychos" as being twitching, gibbering wrecks, in reality they tend to be very controlled and normal-seeming (hence the cliché of "He seemed so normal- kept himself to himself... " and other bystander-generated camera-fodder).
apathetic
/apəˈθɛtɪk/
adjective
showing or feeling no interest, enthusiasm, or concern.
"an apathetic electorate"
synonyms: uninterested, indifferent, unconcerned, unmoved, unresponsive, impassive, passive, detached, uninvolved, disinterested, unfeeling, unemotional, emotionless, dispassionate, lukewarm, cool, uncaring, half-hearted, lackadaisical, non-committal
This is interesting. Lots to use here - "cool" is actually a neat word to use if you could surround it with some pretext as to how it relates to the emotional aspect of things.
@SaultDon: I'm surprised that you like this answer, since most of these words mean don't care rather than can't care, and somebody else mentioned cold three years ago.
Psychopathic or sociopathic are both descriptors of people without normal emotion.
I think you'll find that they have rather specific meanings and are inappropriate merely for describing someone with little emotion.
|
STACK_EXCHANGE
|
import { Node, Schema } from 'prosemirror-model';
import { WikiMarkupTransformer } from '../../index';

export function parseWithSchema(markup: string, schema: Schema) {
  const transformer = new WikiMarkupTransformer(schema);
  return transformer.parse(markup);
}

export function encode(node: (schema: Schema) => Node, schema: Schema) {
  const transformer = new WikiMarkupTransformer(schema);
  return transformer.encode(node(schema));
}

export function checkParse(
  description: string,
  schema: Schema,
  markups: string[],
  node: (schema: Schema) => Node,
) {
  it(`parses WikiMarkup: ${description}`, () => {
    for (const markup of markups) {
      const actual = parseWithSchema(markup, schema);
      // Build the expected document from the schema before comparing.
      expect(actual).toEqualDocument(node(schema));
    }
  });
}

export function checkEncode(
  description: string,
  schema: Schema,
  markup: string,
  node: (schema: Schema) => Node,
) {
  it(`encodes WikiMarkup: ${description}`, () => {
    const encoded = encode(node, schema);
    expect(encoded).toEqual(markup);
  });
}

export function checkParseEncodeRoundTrips(
  description: string,
  schema: Schema,
  markup: string,
  node: (schema: Schema) => Node,
) {
  checkParse(description, schema, [markup], node);
  // @TODO Uncomment when encoding is implemented
  // checkEncode(description, schema, markup, node);
  // it(`round-trips WikiMarkup: ${description}`, () => {
  //   const roundTripped = parseWithSchema(encode(node, schema), schema);
  //   expect(roundTripped).toEqualDocument(node(schema));
  // });
}

export const adf2wiki = (node: Node) => {
  const transformer = new WikiMarkupTransformer();
  const wiki = transformer.encode(node);
  const adf = transformer.parse(wiki).toJSON();
  expect(adf).toEqual(node.toJSON());
};

export const wiki2adf = (wiki: string) => {
  const transformer = new WikiMarkupTransformer();
  const adf = transformer.parse(wiki);
  const roundtripped = transformer.encode(adf);
  expect(roundtripped).toEqual(wiki);
};
|
STACK_EDU
|
How the Normal Map baking works, what is the ray direction
I've run into some trouble understanding how the rays shoot out, and the size relationship between the high and low poly mesh; below is my understanding.
But which one is right? The manual says the ray is generated from the low poly inwards to the high poly, so I suppose the first illustration is not right, because it points outwards.
In the second illustration, the middle section should work properly, but the left and right sections have no high poly surfaces to point to, because their starting points are inside the high poly. I searched and found someone saying I can set the ray distance, but setting the distance won't change where the ray's starting point is, am I right?
The last one is, I think, how it should work, by having the low poly encapsulate the high poly, but I still need some basic understanding of how Blender handles normal map baking, thanks!
Edit: It turns out the ray will travel in both directions, as opposed to what the docs say (inwards only).
The above shows the low poly with a cube inside and a sphere outside; both have been reflected on the low poly cube.
So first, let's make sure we define what "inwards" means. I'm going to say it means in the opposite direction of your low poly's normals. So if we were to draw your low poly's normals on your graphs above, the normals would be pointing upwards.
As of 2.91.0-ish, rays are shot in the opposite direction of the low poly's normals, that is, inward. Rays that fail to intersect cause 0.5, 0.5, 1.0 to be written (which is the same as using the low poly's own normals).
My memory of this in earlier versions (like 2.79, 2.80) was that rays were shot in both directions, but you would still want an enclosed high poly, because rays shot in the direction of the low poly's normals would hit the backfaces of the high poly rather than the front faces, and you'd get screwed up normals.
"Ray distance", just a version or two ago, referred to what is now "Extrusion." This means to start the rays offset in the direction of the low poly's normals. It is exactly the same as baking from a displaced low poly, or baking from a cage with a numeric shrink/fatten operation applied to all vertices. To be explict, this does change the origin of the rays.
"Max ray distance", as implemented a version or two ago, means that Blender will just discard results that arise from intersections greater than a certain distance. I'm not really sure what the intended use for this is, although it can be useful for some texture baking situations that are selected-to-active bakes but couldn't reasonably be described as high-to-low bakes, like baking text objects to a texture or something. This does not change the origin of the rays.
As the docs say, "The rays are cast from the low-poly object inwards towards the high-poly object", so at least for version 2.9 it should only shoot inwards, but I need to give it a try. As for ray distance, it changes the origin of the ray; could you elaborate on when to use this feature?
@user3505400 On testing in 2.91.0, high poly in the direction of the low poly normal is now ignored; it just writes 0.5,0.5,1. I'll update the answer. As mentioned, "max ray distance" does not change the origin of the ray; "extrusion" changes the origin of the ray. You can use this to virtually enclose your high poly, by offsetting the origin of the ray in the direction of the low poly normals.
Doc. says "If the high-poly object is not entirely involved by the low-poly object, you can tweak the rays start point with Ray Distance", while I need to have some trials.
@user3505400 Do some tests. Docs are not always right. It is probably referring to the "ray distance" used by earlier versions, which is now called "extrusion". It is not referring to "max ray distance", which is something else, and does not change the ray origin.
Good point, I'll do some test after
I've updated my results, the Doc. might be wrong.
I just want to point out that there is a flaw in the methodology shown in the first picture, with the lowpoly cube capturing the spheres and smoothed cube.
At first glance, this test case seems to show that some rays are being cast towards the outside of the lowpoly cube, since the sphere does appear in the bake after all. And also intuitively, it would be reasonable to think that this captured sphere has seemingly been caught by rays sent along the normals of the closest nearby face of the lowpoly (= the top face of the lowpoly cube, close to the sphere in 3d space).
However from the tests I've been running, this sphere has likely been captured by the rays sent backwards/inwards, from the opposite side of the cube (i.e. the face of the cube that sits on the floor grid in the example). This could be verified by looking at the location of the baked sphere on the UV map of the lowpoly cube.
So from there this confirms that indeed, in recent Blender versions, rays are only sent inward ; and a pushed/"extruded" cage or an extrusion distance value is always necessary to fully capture the surface of a high poly model that sits both above and below the surface of the low.
This is a different behavior from that of Marmoset Toolbag (and possibly older versions of Blender) in which the cageless behavior consists of shooting both ways.
There is really nothing wrong with the current behavior, but it is indeed rather counter-intuitive, since it means that any cageless bake attempting to capture a subdivided high will only ever capture about half of its surface unless an extrusion value is entered. This in turn gives the impression that Blender baking fails at a simple task that other bakers can do just fine (that is to say: a simple cageless bake at default values attempting to capture a highpoly that is both above and below the surface of the low, which is an extremely common scenario).
And, since other baking environments only require an increase in ray distance value to fix missed rays, the impression of the Blender baker being broken gets reinforced because merely increasing the ray distance will never make highpolies sitting above the surface of the low appear in the bake - only an increase in extrusion value will.
|
STACK_EXCHANGE
|
Can the US government sanction a US citizen?
Imagine John Doe, a US citizen permanently residing in the United States. John Doe engages in questionable but otherwise legal conduct that goes against the current policy of the Department of State. Could the US government then proceed to enact sanctions against John Doe, similar to the sanctions on Vladimir Putin and other foreign citizens?
Sanctions are a political tool. If a citizen is subject to the jurisdiction of the US (which they are, constitutionally), then the government can use legal tools instead, i.e. arrest them.
@uberhaxed sanctions are often for actions that are very hard (if not impossible) to prove in court. I.e. see the sanctions on Putin's daughters - what crime would they be accused of if they were US citizens?
Putin's daughters were targeted because of who they are associated with, not because they are engaging in questionable conduct.
@uberhaxed right but imagine they were US citizens living in the US. Could they still be sanctioned?
What do you mean by conduct against state department policy? That is a very vague term and could easily include something that is against the law.
@JoeW an act against policy might be illegal, but it might also be legal. The question simply excludes illegal acts from consideration. (The exclusion is probably not necessary but should serve to underscore the fact that the sanctions in question are distinct from punishments imposed by a court after a criminal conviction.) The vagueness isn't particularly significant. Think of the question as "is there any conduct for which a US citizen can be sanctioned?" It's more of a broad question than a vague one, but it admits a specific answer, either yes or no.
@phoog That is why I am asking for more details in the question as the details of what the policy in question is have a big impact on the response
@JoeW if there is any policy for which the answer is yes, that is sufficient to answer the question in the affirmative. If there is no such policy, the answer is negative. The details of any specific policy for which the answer might be "yes" or "no" are only of secondary interest to this question and would be best suited to a new one.
@phoog I am not sure that there is much value in a simple yes/no answer to this question though.
Civil forfeiture laws allow federal and state agencies to seize property of US citizens suspected of involvement with crime or illegal activity but without any judicial process or conviction. It's not completely dissimilar.
It does.
If you look at the list of Specially Designated Nationals and Blocked Persons, as published by the Department of the Treasury, you can find a number of US citizens, people with US passports or Social Security Numbers, and businesses registered in the US.
You can do that by searching the document for "citizen United States" to find US citizens or "(United States)" to find US businesses, passports and SSNs.
Under each entry you can find a tag (like [SDGT]). Together with this list you can check under which program a particular person was sanctioned.
As for the legality, I looked at 2 sanction regulations under which US citizens appear on the list:
[SDNTK] - Foreign Narcotics Kingpin Sanctions Regulations, 31 C.F.R. part 598, According to Section 598.314 this is limited to foreign persons engaged in drug trafficking. Foreign persons are citizens or nationals of foreign countries, which may or may not be US citizens or nationals (§598.304).
[SDGT]- Global Terrorism Sanctions Regulations, 31 C.F.R. part 594 has the same definition of foreign person and also includes any person that assist these foreign persons (See 594.201, specifically Subsection (a)(3)).
In all cases I found for US citizens put on the list under these sanction programs, the persons in question are also listed as citizens, nationals or passport holders of other countries. So according to the law establishing the sanctions, they can be put on the list.
Interesting… I wonder how those sanctions are legal.
@JonathanReez I included relevant information on the specific sanction programs and the persons they can be applied to. I am unqualified to comment on the legality/constitutionality of the sanction laws themselves.
@JonathanReez I'm sure there's a rationale somewhere. It may even have been tested and upheld in court. Someone at [Law.SE] probably knows.
If Congress were to attempt to pass a law that targeted a particular individual, this would not be constitutional as it would be a "bill of attainder".
The executive has limited powers to "sanction", as it can only act to enforce the laws created by Congress. Of course, if John has broken some law (or is reasonably suspected of doing so) the Executive can direct the police to arrest John or otherwise enforce the law in such a way that is detrimental to John. But if John hasn't broken any law, then the Executive can't enforce the law against him.
As noted in a comment, sanctions are used politically as leverage against people who are not subject to the jurisdiction of the US, not against citizens, who can simply be sued or arrested if they have broken the law, and have the right to liberty if they haven't.
What if the law was generally applicable to lots of people including some U.S. citizens?
Pretty sure that sanctions could also include preventing an individual from doing any business with the government, or with any company that does business with the government, and that is something that happens all the time.
The USA citizenship itself can be revoked.
This is not often done, but can be done for being a member of certain organizations, dishonorable military discharge, treason or rebellion and dual citizenship under conditions not permitted by U.S. law.
Of course, activities like that are likely to be formally illegal, but in some cases they may look fairly innocent. A girl traveling to join ISIS as a wife may not have committed any obvious terrorist acts (like fighting with a gun or detonating bombs), seeing it all as just a personal love story. Still, her citizenship can be revoked (Shamima Begum is a UK case, but I see no reason why it could not happen in the USA).
While it may not be permissible to leave a person without any citizenship at all, citizenship can still be revoked if the person has another one, or could obtain one if willing. This is common for the children of immigrants, who often are, or could easily become, citizens of the country their parents are from.
After citizenship is revoked, the former citizen has far fewer rights than they previously had.
|
STACK_EXCHANGE
|
On mobile checkout flow should product details be shown on review order/summary step?
I am working on the checkout flow for mobile devices, and it has 3 steps: step 1, shipping details; step 2, payment details; step 3, order summary and confirm to buy. The flow starts with the cart page, where the user can see product info, edit it, apply a coupon code, etc., and from there can proceed to checkout.
Is it a good idea to show product info again (product image, colour, size, quantity) in the 3rd step of checkout, considering that the checkout funnel starts with the cart page where all the products are shown with full details? I am wondering if the summary page would be overwhelming, with too much information to digest, thus defeating the purpose of the step. Suggestions?
Single-page checkout. Forget about 3 steps and other nonsense, especially on mobile where users have the least time and patience to deal with loading times.
Actually, e-commerce is the most highly researched area of usability, and there are no indications that single-page checkouts work better. There are a million things you can do to optimize conversion, but there is no evidence that single-page vs. multi-step is one of them.
I agree with Roel in part that Cart Summary > Delivery > Payment is fine. I just wonder if you can make all 3 accessible at all time so it is sequential but you can jump back if you want? Something like this:
download bmml source – Wireframes created with Balsamiq Mockups
So if a user has finished the Cart section and hit 'next' or something then they can still tap the 'Your cart' tab to jump back but preserve the data that has been entered in the later tabs.
I think that your checkout process is clear enough without the order summary as an additional "check" for the customer. The most common process would be the following:
Cart Summary
Delivery
Payment
Confirmation/Thank you
Lots of people use their online shopping cart as an informal shortlist. They are basically curating the contents of their order after putting it together, and then decide to act on it or not. If they do, you want to wrap up the process as smoothly as possible:
Delivery > Payment > Confirmation
Best thing you can do, is
1) Give users a brief description/photo and the price of the product, which will take them to the product details screen on click if they want more
2) Prepare a prototype/mockups of the checkout process and run a few rapid usability tests; that will answer your question better than any declarative studies or surveys
Good luck ;)
I second @dnbrv in the comments. Just do a one-page checkout experience, and minimize the fields. Always keep in mind that any checkout is a stressful experience (sensitive information to enter, an address to get right, etc.), and you need to do whatever it takes to make it an easy checkout experience.
Amazon implemented an interesting flow: you just confirm your order and pay then and there. They skipped everything else (mainly because they already have everything saved, but this gives you an idea of how to minimize friction).
If you implement a one page checkout experience, I would highly suggest firstly to show the products that they are buying front and center (with the price breakdown and how much it will cost them in the end with taxes, shipping and other fees). Then break it down with shipping information, billing information and finally a confirmation section (summing up everything).
Best of luck!
It's a good idea to give a quick summary on the confirmation page, as at this point the user will want an overview of the transaction and to know what is being paid for. This is in fact the goal of this page.
However as Baymard Institute's excellent Check Out Usability Study recommends
Keep the elements around the checkout form to a bare minimum so
customers can focus on paying for their products.
This is how Amazon does it
No need to show all details; just the product thumbnail, name, and quantity.
The customer has already made up their mind to buy something, and that is why it is in the cart. Showing too much information may distract the customer.
This is my opinion; you could consider an A/B test. I did this study and research for more than 3 months for my company and finally designed a new flow which is delivering more business.
Could you elaborate on why you think so, and maybe add some reference to support your statement? Thanks!
Sure. I would like to give you a real life example. You go to the supermarket to buy pasta and red sauce. You go to the pasta section and take one packet or a few and read everything on the pack. Maybe you will check price and ingredients and color and decide which one to buy. Then you will go to the sauce section and pick one. Then you come to the cash counter. The guy will scan it, total is displayed on the counter and then you give him your CC and so on. Please tell me what is the possibility of you checking all the details on pasta packet at cash counter again?
Sorry, but this example doesn't match the online shopping experience, because online you are not buying only everyday things. Did you buy pasta online recently?
User experience is important; online or offline does not matter. I buy these things online here in India, and once they are in my cart, I do not check item details again. This is where user surveys and A/B testing come into the picture.
You can't just base UX recommendations on your own subjective experiences. You aren't designing the system for yourself.
We all give answers and recommendations based on our own experience or surveys. A survey is again considering the opinion of the masses. The answer to any question starts with yes or no, so I answered no. Is it wrong to state my opinion? Did I say my answer is the best answer? What is the problem?
Your answer is fine, I can't find anywhere that says subjective answers are unacceptable (and if they were most answers here would be unacceptable). Like you said it may not be the best answer though, if you can find data backing your point it will certainly help.
Online and B&M experiences are different in terms of shopping. In a physical store, you are always 100% certain of buying a product when you are at the cash counter stage (similar to the checkout phase in the online experience), but in the online experience I am not always 100% certain I will go ahead with my purchase, even at the checkout stage. So I need to know more details during the online checkout phase. That's why the two are very different. For example, I put things in my Amazon cart (sometimes as a wish list, sometimes to check on shipping, etc.) without actually intending to buy.
|
STACK_EXCHANGE
|
using System;
using System.Linq.Expressions;
using DynamicExpressions.Mapping;
namespace DynamicExpressions.Tests.Mapping.Models
{
public class FooEntity
{
public string FieldA { get; set; }
public string Address1 { get; set; }
}
public class FooSummaryViewModel
{
public string Prop1 { get; set; }
// Using as field
public static Expression<Func<FooEntity, FooSummaryViewModel>> Map = i => new FooSummaryViewModel
{
Prop1 = i.FieldA
};
}
public class FooViewModel : FooSummaryViewModel
{
public string Addr { get; set; }
// Concat + using as property
public new static Expression<Func<FooEntity, FooViewModel>> Map()
{
return FooSummaryViewModel.Map.Concat(i => new FooViewModel { Addr = i.Address1 });
}
}
public class FlattenExample
{
public FooSummaryViewModel Summary { get; set; }
public FooViewModel Full { get; set; }
public static Expression<Func<FooEntity, FlattenExample>> Map()
{
var fooMap = FooViewModel.Map();
Expression<Func<FooEntity, FlattenExample>> map = i => new FlattenExample
{
Summary = FooSummaryViewModel.Map.Invoke(i),
Full = fooMap.Invoke(i)
};
var x = map.Flatten();
return x;
}
}
}
|
STACK_EDU
|
This project is a website upgrade for a real-world nonprofit organization. Besides having real users, it differs from the class project nonprofits in that 1) it does not solicit donations, but 2) it provides a large number of informational resources on the web site that need organizing. So this site is suitable for the card sort task. Also, I'm really taking the lead on this and may end up doing an implementation myself - so these documents may be needed to explain to 'civilians' what I'm doing, rather than being technical input to an experienced cross-functional development team. A lot of my time on this project is spent chatting up AJ leaders (to get direction and buy-in) and members (to get user data.)
Competitive Review and Personas
Here's the draft competitive analysis (updated with labels for the numbers, and screen shots):
Even a site like WIND's would be a step up for Appleton Jobseekers (AJ). But it's an open question right now whether AJ can handle content updates for weekly events, which would be necessary to get to the level of South Bay. AJ would have to switch their implementation to something like WordPress (from hand-edited HTML (!)) and sort out the leadership team's roles, skills, and personalities. I expect to improve the organization scheme and nav tables when I get to that in the design phase, and I also need to look at some job resource sites as well as AJ's Yahoo page.
A primary persona using Personapp:
Recruiting brief for card sort:
0) card sorting is very sensitive to phrasing, and can definitely be shallow. For instance, the item "AJ Mission Statement" got categorized with bureaucratic functions, whereas if it had been called "Welcome to AJ" it might have come out closer to general meeting information. Instead of trying to be specific, it's important to try to be neutral. "what AJ does" or "purpose of AJ" might have been better choices. Another example would be a participant's category "Groups" which contained all items containing that word even though they weren't otherwise related.
1) so far, the most useful tools are the Participant-Centered Analysis, which is very clear, and the Similarity Matrix, which for this particular set of cards is also very clear at the top level, as well as being very useful for drilling down into the ambiguities. These seem like better starting points for a design than bottom-up crunching through the category standardization, though I may have another go at that later. We'll see what happens at later stages of the design.
2) One reason that results are unclear is that an item might reasonably belong in two different categories; this happens in the current sort and will show up in the final design as duplication of some information and multiple links in other cases. Another reason is multiple groups of participants with different perspectives that might need to be accounted for. Finally, sometimes there simply is no strong categorizing principle and it might be more appropriate to organize on an alphabetic basis or just do the best you can.
3) More participants might be needed in the case where you are trying to sort out multiple populations, for instance, or if you have loads of cards. For the current sort, it really would be useful to run additional sorts on some subparts of the project. Which is a bit different.
There is a rough site map over in the group Balsamiq account. I have a rough spreadsheet too, but am finding the visual site map easier to work with at this stage due to the ease of structural updates. It's just too soon to decide whether something is an info block, a page, or what.
I also have a homepage wireframe in Balsamiq. Next steps are to put more white space in the layout, and create the rest of those dropdown menus.
---- news flash ----
Presentation to management team went well!! I now have some designated project buddies & authorization to proceed toward development (with of course some reviews anticipated.)
---- news flash ----
Here is a new and improved homepage, with menus more fleshed out. Next steps are to iterate on the affinity diagram, site map, and spreadsheet, to consolidate the IA; then probably get some feedback from the organization and users before proceeding to prototyping.
|
OPCFW_CODE
|
Nightly Changelog: 8.0.10-b20200301
Approximate list; there’s a few more changes not listed. This will be the last nightly release of 8.0.10; 8.0.11 nightlies will be available starting tomorrow. 8.0.10 RC1 should be available soon.
16062: Edge History Sync configs lost during upgrade to 8.0.7+
Fixed an issue where Edge History Sync settings would be lost on upgrade.
15787: Added functionality to copy client tags as XML text, and paste into a separate designer
16021: Margins and Padding support as style attributes
15011: Implement single and multiple subview expansion modes
Added a subviewExpansionMode property. When set to “single”, expanding one subview will automatically close all other expanded subviews.
15866: AST pager not updating correctly on mount, missing MobX reaction
Fixed an issue where the pager buttons on the Alarm Status Table component would not update correctly.
15748: Fix issues with DatasetExcelAdapter
Fixed an issue with system.dataset.toExcel when exporting multiple datasets.
15830: DatasetExcelAdapter#drawSheet liable to NPE when dataset contains nulls
Fixed an issue where system.dataset.toExcel would throw NullPointerExceptions when exporting datasets containing null data.
15976: Notify gateway script project upon legacy upgrade
Fixed an issue where the ‘Gateway Scripting Project’ was not updating correctly during legacy upgrades.
15920: NoSuchMethodError when starting Vision client
Fixed a missing dependency error that occurred when launching a Vision Client.
15950: Edge Sync alarms need to be able to use remote pipelines
Edge gateways with the Sync Services plugin may now use remote notification pipelines.
15503: ‘The start time is greater than or equal to the end time’ easy chart cache error
When the start time is greater than or equal to the end time for a query, the Easy Chart will no longer log an error and will bypass the cache altogether.
15796: Add second argument to system.perspective.print for optional Gateway logging
Added a new argument to system.perspective.print to specify the destination of the print message (client, gateway, or all).
15734: Reduce red overlays when opening windows.
Reduces the occurrence of red “stale” or “errored” overlays in situations where the initial subscription value has not yet arrived.
15402: Writing a JSON encoded string to a UDT parameter results in a null value
Fixed an issue where JSON-encoded strings written to a UDT parameter were incorrectly treated as parameter references.
15677: UDT Members that have JSON as values are erroring out
Fixed an issue where UDT members containing JSON-encoded strings were incorrectly treated as parameter references.
15993: NPE When Closing a Flex View While Deep Selected
Prevents an error that could occur when closing Perspective views while deep selected in a non-coordinate container.
15505: Add Local Audit Profile option to standard edition gateways
16045: Mark actions as ineligible while logging in or logging out
Fixed a potential race condition where logging in after logging out can cancel the logout in Perspective
16077: Expose a System Property for Session Cookie SameSite Attribute
System property ignition.http.session.cookie.same-site.enabled can be set to true in order to specify the SameSite attribute on Gateway session cookies (default is false). When the SameSite attribute is enabled, system property ignition.http.session.cookie.same-site.value can be used to set the value of the SameSite attribute. Acceptable values are Strict, Lax, and None (default is Strict). See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie for descriptions for how these values work in web browsers.
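As a sketch, the two properties named above could be passed to the Gateway JVM via ignition.conf; note the wrapper entry numbers below are hypothetical, and the wrapper.java.additional mechanism is an assumption to adapt to your install:

```properties
# ignition.conf (hypothetical entry numbers -- use the next free indices)
wrapper.java.additional.10=-Dignition.http.session.cookie.same-site.enabled=true
wrapper.java.additional.11=-Dignition.http.session.cookie.same-site.value=Lax
```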
11901: Add countersigning to Windows executables
15995: Sluggish performance when loading page with many embedded views
Fixed issue that prevented progressive view loading, making navigation feel sluggish.
15977: Lookup expression returning error quality due to internal NPE
15777: DNP3 point subscriptions stop polling on disconnect
14029: jsonValues structure of a dropped UDT instance shouldn't be affected by the folder structure in which it is contained
|
OPCFW_CODE
|
Barrett Technology is a great place to work and develop professionally. The supportive, learning atmosphere is led by seasoned management and competent engineers. We promote from within, investing in our people. Personality and integrity are important at Barrett. The type of person who will do well at Barrett is a team player and a clear communicator. We look for people who are personable, honest, and open to feedback, as well as those who are open to learning new skills, finish tasks in a timely and thorough manner, and who understand that documentation is an essential component of most tasks.
Software Engineer, UX and Game Development
The engineer will be an integral member of a cross-disciplinary engineering and manufacturing team supporting Barrett Technology’s Burt medical robot device (https://medical.barrett.com). Burt is an interactive robotic system that provides stroke survivors a means to exercise and rehabilitate their affected arm in engaging and meaningful ways using a system that physically supports the weight of their arm while allowing them freedom of movement and targeted assistance while playing games on-screen. The applicant should be able to stay organized and on task, work well with little supervision, and be willing to take initiative and act upon new ideas.
Generate concepts and design prototypes of the therapist’s user-interface
Generate concepts and design prototypes of new therapeutic games and assessments for Burt
Generate concepts and design prototypes of patient-data visualization and reporting modules
Develop, test, and release all medical device software through FDA-mandated design-controls
Collaborate with mechanical, electrical, and firmware engineers throughout design and release
Collaborate with sales, marketing, and clinical experts throughout design and release process
Assist in the upkeep and implementation of Quality System tools and procedures including bug-tracking, revision-control, testing, and code-reviews
Support different aspects of a small company’s day-to-day requirements as needed
Potential for travel for customer installations and tradeshows
Required qualifications and proficiencies
Bachelor’s or Master’s degree in engineering, computer science, UI/UX design, or related field
Proficient in C# programming
Experience with Unity
Experience implementing effective user interfaces and GUIs
Personable, self-motivated, and self-directed
Interested in learning new skills and abilities
Highly Valued Skills
Additional programming languages including C/C++ and Python
Experience developing in a Linux environment
Comfortable using version control (GIT, SVN, etc.)
Comfortable working with electro-mechanical systems
Experience with medical-device software
Experience working within a quality-controlled environment
This full time position includes health care and other benefits. Please send a cover letter and resume to firstname.lastname@example.org with the subject heading “Software Engineer, UX and Game Development”.
Applications without cover letters will not be considered.
|
OPCFW_CODE
|
10-08-2018 04:02 AM - edited 10-08-2018 04:12 AM
Here is my testbench
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use ieee.numeric_std.all;
use std.textio.all;

entity test_design_1 is
end test_design_1;

architecture TB of test_design_1 is
    component design_1 is
        port (
            dclk_in : in  STD_LOGIC;
            eoc_out : out STD_LOGIC;
            vn_in   : in  STD_LOGIC;
            vp_in   : in  STD_LOGIC
        );
    end component design_1;

    signal dclk_in : STD_LOGIC;
    signal eoc_out : STD_LOGIC;
    signal vn_in   : STD_LOGIC;
    signal vp_in   : STD_LOGIC;
begin
    DUT: component design_1
        port map (
            dclk_in => dclk_in,
            eoc_out => eoc_out,
            vn_in   => vn_in,
            vp_in   => vp_in
        );

    process
        variable value_SPACE : character;
        variable read_col_from_input_buf : line;
        variable value_TIME, value_VP, value_VN : real;
        file input_buf : text;
    begin
        file_open(input_buf, "design.txt", read_mode);
        while not endfile(input_buf) loop
            readline(input_buf, read_col_from_input_buf);
            read(read_col_from_input_buf, value_TIME);
            read(read_col_from_input_buf, value_SPACE); -- read in the space character
            read(read_col_from_input_buf, value_VP);
            read(read_col_from_input_buf, value_SPACE); -- read in the space character
            read(read_col_from_input_buf, value_VN);

            -- how to convert the values here? from real to std_logic
            dclk_in <= (value_TIME);
            vn_in   <= (value_VN);
            vp_in   <= (value_VP);
        end loop;
    end process;
end TB;
My values are
Here is my design block
I am trying to use .txt for simulation here.
here is my sources
I am trying to assign the value from the file to pins vn_in, vp_in, and dclk and see the eoc as output.
But how do I convert the type here? In the process I want to assign the values to the ports, but I am getting confused with the types, because the data in the file are of type real while the actual input expected at the port is of type std_logic (0 or 1).
Help me, thank you.
10-08-2018 09:40 PM
In your other post, I said that you need to feed data to the input pins in the testbench. However, that is not the case for these analog signal pins; sorry, I didn't consider that this is a special case.
Here, the vp_in and vn_in pins carry analog signals in hardware. In a testbench, analog values can't be fed to vp_in directly: in the port declaration of the XADC, the type of vp_in is std_logic, so it can't be driven with an analog value.
For simulation of the XADC, analog signals are read from a file by the simulation model. The SIM_MONITOR_FILE attribute used in the XADC instantiation points the model to the location of this file, known as the Analog Stimulus file. You can find a description of this in UG480.
|
OPCFW_CODE
|
using OfficeOpenXml;
using System.IO;
namespace epplus_basic
{
class Program
{
static void Main(string[] args)
{
//Creating empty Excel Package...
var package = new ExcelPackage();
//Referencing the Workbook...
var workbook = package.Workbook;
//Adding a new sheet named "SheetName"...
var sheet = workbook.Worksheets.Add("SheetName");
//Accessing cell A1 using row/col numbers.
//Hint: EPPlus references cells using 1-based indexing.
var row = 1;
var col = 1;
//Setting the value of A1 to "Ping?"
sheet.Cells[row, col].Value = "Ping?";
//Setting the value of B1 to "Pong!"
sheet.Cells["B1"].Value = "Pong!";
//Setting the value of A3 to "Hey!"
sheet.Cells["A3"].Value = "Hey!";
//...and immediately changing its value back to empty (null)...
sheet.Cells["A3"].Value = null;
//Setting cells A1:E1 to bold.
sheet.Cells["A1:E1"].Style.Font.Bold = true;
//Saving the reference to cell A3 to a variable...
var celA3 = sheet.Cells["A3"];
//Saving the reference for the range A1:E1 to a variable...
var rangeA1E1 = sheet.Cells["A1:E1"];
//Saving the file. Passing the filename...
package.SaveAs(new FileInfo(@"sampleEpplus.xlsx"));
//EXTRA
//Creating a new file and already setting a filename
var blankPackage = new ExcelPackage(new FileInfo(@"blankSample.xlsx"));
//Adding a sheet, so we can save it.
blankPackage.Workbook.Worksheets.Add("blankSheet");
//Saving the new file using the original filename.
blankPackage.Save();
//Creating a copy of that file.
blankPackage.SaveAs(new FileInfo(@"blankSampleCopy.xlsx"));
}
}
}
|
STACK_EDU
|
The only state transition that is initiated by the user process itself is
The only state transition that is initiated by the user process itself is "block".
Pre-emptive scheduling is the strategy of temporarily suspending a running process
In pre-emptive scheduling, a running process may be temporarily suspended before its CPU time slice expires.
In Round Robin CPU scheduling, as the time quantum is increased, the average turn around time
The Round Robin CPU scheduling technique loses its significance if the time slice is very small or very large. When it is very large, it acts as FIFO, which gives no fixed relationship to turnaround time: if processes with small bursts arrive earlier, turnaround time will be lower; otherwise it will be higher.
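To see the effect concretely, here is a minimal Round Robin simulator (a sketch assuming all processes arrive at t = 0); with a very large quantum it degenerates into FIFO:

```python
from collections import deque

def rr_avg_turnaround(bursts, quantum):
    """Average turnaround time under Round Robin, arrivals all at t = 0."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    queue = deque(range(len(bursts)))  # ready queue of process indices
    t = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])  # run for one quantum or until done
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)               # preempted: back of the queue
        else:
            finish[i] = t                 # turnaround = finish time (arrival at 0)
    return sum(finish) / len(bursts)

bursts = [3, 5, 2]
print(rr_avg_turnaround(bursts, 100))  # huge quantum: behaves like FIFO -> 7.0
print(rr_avg_turnaround(bursts, 2))    # small quantum: more switching, ~7.67
```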
Suppose that a process spends a fraction p of its time in I/O wait state. With n processes in memory at once, the probability that all n processes are waiting for I/O is
Suppose a process spends a fraction p of its time in the I/O wait state. With n processes in memory at once, the probability that all n processes are waiting for I/O is p × p × ... × p (n times) = p^n.
Note: moreover, the CPU utilization is given by the formula CPU utilization = 1 - p^n.
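A quick numerical check of the formula (taking p = 0.5 as an illustrative I/O-wait fraction):

```python
def cpu_utilization(p, n):
    """CPU utilization with n processes, each in I/O wait a fraction p of the time."""
    all_waiting = p ** n      # probability all n processes wait for I/O at once
    return 1 - all_waiting

# Utilization climbs quickly as more processes are kept in memory.
for n in (1, 2, 4, 8):
    print(n, cpu_utilization(0.5, n))
```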
In a multi-user operating system, 20 requests are made to use a particular resource per hour, on an average. The probability that no requests are made in 45 minutes is
The arrival pattern is a Poisson distribution: with an average of 20 requests per hour, the expected number of requests in 45 minutes is 20 × 0.75 = 15, so the probability of no requests is e^-15.
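The same calculation as a short sketch:

```python
import math

rate_per_hour = 20
t_hours = 45 / 60              # 45 minutes
lam = rate_per_hour * t_hours  # expected arrivals in the interval = 15

# Poisson: P(k arrivals) = e^{-lam} * lam^k / k!; with k = 0 this is just e^{-lam}.
p_no_requests = math.exp(-lam)
print(p_no_requests)  # e^{-15}, about 3e-07
```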
In which of the following scheduling policies does context switching never take place?
2. Shortest job first (non-preemptive)
Context switching takes place when a process is preempted (forcefully) and another process goes into the running state. In the FIFO and SJF (non-preemptive) techniques, processes finish their execution, and only then is the context switched to another process.
Which of the following is the most suitable scheduling scheme in a real-time operating system?
In preemptive priority scheduling, the process with the highest priority is executed first; processes with lower priority are executed only after the higher-priority processes have completed. This is the ideal technique for a real-time OS, in which critical (higher-priority) processes must be serviced first.
Which of the following scheduling algorithms gives minimum average waiting time?
SJF gives the minimum average waiting time. SJF ensures that all the shorter processes finish first, so fewer processes are kept waiting at any moment, which lowers the total waiting time and hence the average waiting time.
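This is easy to verify numerically; a sketch comparing FCFS order against SJF order (assuming all processes arrive at time 0, with illustrative burst times):

```python
def avg_wait(bursts):
    """Average waiting time for the given execution order (arrivals at t = 0)."""
    wait = elapsed = 0
    for b in bursts:
        wait += elapsed   # each process waits for everything scheduled before it
        elapsed += b
    return wait / len(bursts)

bursts = [6, 8, 7, 3]             # arrival order
fcfs = avg_wait(bursts)           # run in arrival order
sjf = avg_wait(sorted(bursts))    # shortest job first
print(fcfs, sjf)                  # SJF's average wait is never worse
```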
A process executes the following code:
The total number of child processes created is
A fork() call creates a child process. A loop of n fork() calls creates 2^n - 1 child processes, excluding the parent process; including the parent, there are 2^n processes.
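The doubling can be counted with a small sketch (a counting model, not actual fork() calls):

```python
def children_after_forks(n):
    """Child processes created by n fork() calls in a loop (parent excluded)."""
    processes = 1
    for _ in range(n):
        processes *= 2    # every existing process forks once, doubling the count
    return processes - 1  # exclude the original parent

print(children_after_forks(3))  # 7 children (8 processes including the parent)
```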
While designing a kernel, an operating system designer must decide whether to support kernel- level or user-level threading. Which of the following statements is/are true?
1. Kernel-level threading may be preferable to user-level threading because storing information about user-level threads in the process control block would create a security risk.
2. User-level threading may be preferable to kernel-level threading because in user-level threading, if one thread blocks on I/O, the process can continue.
Kernel-level threading may be preferable to user-level threading because storing information about user-level threads in the PCB would create a security risk: with each access to non-critical services, we enter the domain where both critical and non-critical services reside, and any harm in this domain may create problems for the critical services. Hence kernel-level threading is preferable.
Which of the following should be allowed only in Kernel mode?
1. Changing mapping from virtual to physical address.
2. Mask and unmask interrupts
3. Disabling all interrupts
4. Reading status of processor
5. Reading time of day
Only critical services must reside in the kernel. All the services mentioned are critical except reading the status of the processor and reading the time of day.
Consider the following statements with respect to user-level threads and Kernel-supported threads
(i) Context switching is faster with Kernel- supported threads.
(ii) For user-level threads, a system call can block the entire process.
(iii) Kernel-supported threads can be scheduled independently.
(iv) User-level threads are transparent to the Kernel.
Which of the above statements are true?
Kernel-level threads can be scheduled independently. For user-level threads, a system call can block the entire process, and user-level threads are transparent to the kernel (the kernel is unaware of them).
Consider the following statements about user level threads and kernel level threads. Which one of the following statements is FALSE?
Which of the following algorithms favour CPU bound processes?
3. Multilevel feedback queues
Only FCFS and non-preemptive algorithms favour CPU-bound processes.
Consider a system contains two types of processes CPU bound processes and I/O bound processes, what will be the sufficient condition when ready queue becomes empty.
If all the processes are I/O bound, they will go into the BLOCKED state, and hence the ready queue becomes empty.
|
OPCFW_CODE
|
Errors in Dutch Translation
We received the following comments from a developer using the API:
The following two sentences have minor grammatical problems:
Lichte regen vandaag tot op zondag met temperaturen stijgend tot 19°C op zondag
Lichte regen morgen tot van maandag met temperaturen stijgend tot 16°C op maandag.
Literally translated is:
Light rain today till up Sunday with temperatures increasing to 19°C on Sunday.
Light rain tomorrow till from Monday with temperatures increasing to 16°C on Monday.
A better sentence would be:
Lichte regen vandaag t/m zondag met temperaturen stijgend tot 19°C op zondag
Lichte regen morgen tot maandag met temperaturen stijgend tot 16°C op maandag.
I believe the existing translation is from @basvdijk and @realjax... any thoughts on this?
He is right, but I think the way the software constructs these sentences prohibits it.
Cheers
"J. T. L."<EMAIL_ADDRESS>schreef op 28 april 2015 20:02:36 CEST:
We received the following comments from a developer using the API:
The following two sentences have minor grammatical problems:
Lichte regen vandaag tot op zondag met temperaturen stijgend tot
19°C op zondag
Lichte regen morgen tot van maandag met temperaturen stijgend tot
16°C op maandag.
Literally translated is:
Light rain today till up Sunday with temperatures increasing to
19°C on Sunday.
Light rain tomorrow till from Monday with temperatures increasing
to 16°C on Monday.
A better sentence would be:
Lichte regen vandaag t/m zondag met temperaturen stijgend tot
19°C op zondag
Lichte regen morgen tot maandag met temperaturen stijgend tot
16°C op maandag.
I believe the existing translation is from @basvdijk and @realjax...
any thoughts on this?
Reply to this email directly or view it on GitHub:
https://github.com/darkskyapp/forecast-io-translations/issues/32
I agree that the proposed translation is the better one.
Thanks.
@realjax - Could you be more specific? Perhaps I can work out a way around the issue.
Again, I'm not sure, but take for instance this part
..tot van maandag..
The 'van maandag' part is also used in a sentence like:
'From Monday until Sunday' (Van maandag tot zondag). If we leave out the word 'van' as the developer rightfully suggested, his example would be correct, but this one would be incorrect, as it would no longer have a translation for the word 'from'.
It's a consequence of the fact that sentences in English are not constructed the same way as Dutch ones are. I tried to level those problems out as much as possible for the original Dutch translation, but couldn't completely prevent an awkward sentence every now and then.
Well, what I mean is, what is it in the module that prevents fixing this? There is a mechanism which can be used to inspect the context in which a phrase is used so as to be able to determine how to structure that phrase. Would this be sufficient to work around the issue in this case?
(Example: https://github.com/darkskyapp/forecast-io-translations/blob/master/lib/lang/de.js#L100-L109 )
Could be, I'll look into the code when I have time to see if this would work!
|
GITHUB_ARCHIVE
|
Also, provide a blacklist function so that users can manually have more sites blocked, and a whitelist function to manually unblock specified paywall sites. This provides more power to the user to determine what content is provided and what is not.
Exactly. Just looking for that exact setting; can't find it, and it probably doesn't exist. I will not pay for news when it's available everywhere for free; it is a stupid concept. If the ad is sponsored or charges a fee for continued use, then I would like to block it. That has no relevance to me and I would prefer not to even see it.
Allow us to hide websites from Pocket
I honestly find it really hard to believe that this hasn't been suggested yet, especially since I thought Mozilla was a pioneer of giving users control and access to their online experience. SMH. Anyways, with as many websites as there are behind paywalls or spewing far-right disinformation, and even just my own personal choice not to see any political articles at all for my own mental health when I'm looking at Pocket, we should be able to hide entire domains from being suggested in Pocket. Here are the reasons I can think of for why this would benefit users:
Anyways, I hope that others agree. I would love to use Pocket, but I absolutely cannot unless I can control what I see in this way.
Thanks for submitting an idea to the Mozilla Connect community! Your idea is now open to votes (aka kudos) and comments.
Agree... also add an icon to Pocket stories that shows whether you must subscribe or pay, before you waste time clicking on an article.
Add an icon to the Pocket story frame that denotes whether you need to pay or subscribe to read the rest of the article. As someone stated before, it's a waste of time to pay for a story you may otherwise get for free somewhere else.
Create option to stop Pocket from displaying paywalled content
It's so annoying to run into this. Please set a checkbox that will let us stop this behavior.
Pocket filter lists
Hello, I use Firefox across multiple platforms (Linux, macOS & Android) and I quite enjoy the Pocket articles that are suggested; I like to start my day off by learning something new. However, something that has increasingly been annoying me is sites that rope you in, then only allow you to read the first paragraph of an article, as the rest is behind a paywall (in particular: NYTimes, The Atlantic, etc.). Not Mozilla's fault, I know, but I would like an option to filter these. A lot of the time the content looks interesting, but when I get to the site I can't see it without subscribing (and I don't always look at which site the article lives on). I have been clicking "Dismiss" for months but they aren't going away, so please can you look into including a "Block Site" button? Cheers
Similar idea here: Provide a setting for pocket that blocks all articles from any site that uses paywalls
Oh yeah, thats exactly the same! My bad for not searching first, apologies 🙂
@SpeckledJim oh no worries, just wanted to make sure! I'll merge the two threads together.
|
OPCFW_CODE
|
Not being able to close the descriptor is a pain in the butt. It's possible that this is a crufty little corner of Linux with no clean solution possible. You may have to make some device-specific calls.
Are you sure the driver for this device is meant to work in a hot-swap mode? Maybe the driver itself is getting confused when you pull the disk.
One last bit of information: I was able to drop into the kernel via kdb during the hang, and this is the stack trace:
RSP           RIP                Function (args)
0x100b2ed9b08 0xffffffff80318138 schedule+0xb6e (0x1, 0x100bf95d240)
0x100b2ed9bd8 0xffffffff80318aab io_schedule+0x26 (0x100bf95d240, 0x0, 0x1,
0x100b2ed9bf8 0xffffffff8015a54c __lock_page+0xbf (0x46, 0x0, 0x40000000, 0x10, 0x1)
0x100b2ed9c98 0xffffffff8015aaa7 do_generic_mapping_read+0x1f4 (0x0, 0x100bbb3fd68, 0x100a341b8c0, 0x100b2ed9f50, 0x0)
0x100b2ed9d98 0xffffffff8015c907 __generic_file_aio_read+0x181 (0x1, 0x100a341b8c0, 0x0, 0xffffffff00000001, 0x100a341b8c0)
0x100b2ed9e18 0xffffffff8015caa2 generic_file_read+0xbb (0x100a341b8c0, 0x1000, 0xf7d47000, 0xfffffffffffffff7, 0x0)
0x100b2ed9f18 0xffffffff80179a97 vfs_read+0xcf
0x100b2ed9f48 0xffffffff80179cee sys_read+0x45
Here's the important bit of that trace:
An inquiry on the disk shows (when it's not pulled out):
Vendor Identification : DGC
Product Identification : RAID 0
Revision Number : 0219
Let me know if that is the info you meant. Otherwise I can see what else I can find to be more specific. Thanks!
Well, I've been perusing the source code of this driver, and it's fairly large. There appear to be a zillion ioctl() calls, and I can't find any clear documentation, either in the source code in the form of comments or in the downloaded archive itself. Honestly I think this driver kind of sucks, but oh well. Try asking around on a qla2xxx-related mailing list to see if there is some ioctl() that either tells you the disconnect status of the drive, or at least configures the timeout so your program doesn't hang forever.
One other thing to try. While your program is hung, kill it with a SIGUSR1 signal. See if it pops out of the hang. If it does, then the alarm() technique I mentioned earlier should at least allow you to get out of the hang.
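The alarm() technique mentioned above can be sketched like this. A minimal illustration in Python (the thread is about a C program, but Python's signal.alarm() wraps the same alarm(2) syscall, so the mechanism is identical); the descriptor and timeout values are made up for the example:

```python
import os
import signal

class ReadTimeout(Exception):
    """Raised by the SIGALRM handler to break out of a blocked read."""

def _on_alarm(signum, frame):
    raise ReadTimeout()

def read_with_timeout(fd, nbytes, seconds):
    """Attempt os.read(fd, nbytes); give up after `seconds` and return None."""
    old = signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(seconds)            # schedule SIGALRM
    try:
        return os.read(fd, nbytes)   # may block here
    except ReadTimeout:
        return None                  # the read did not complete in time
    finally:
        signal.alarm(0)              # cancel any pending alarm
        signal.signal(signal.SIGALRM, old)
```

If the signal can interrupt the hung read, the handler's exception pops the program out of the blocking call. Note the caveat, though: if the process is stuck in an uninterruptible kernel sleep (state D, which is what the io_schedule frame in the stack trace suggests), no signal will get through.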
Last edited by brewbuck; 04-29-2007 at 03:19 PM.
|
OPCFW_CODE
|
Congratulations on winning data scientist of the year 2019, what did you do to qualify?
Thanks very much! Basically I was looking for some award schemes for our wider team to apply to and I stumbled across these awards run by the Data Science Foundation (which I’m a member of and would recommend joining if you’re in this space!). I mentioned them to my boss and she put me forward, which was nice.
Then I had to answer a lot of questions about my background, how I’d got to where I was and what stood out as a big data science achievement for 2019. That year for me was really about consolidating a lot of work I and others had been doing into something more strategic.
Essentially I helped to solve a relatively complicated data science question, managed the productionisation of it and led the stakeholder management piece, which was crucial.
I had also done a lot of work to support our wider data strategy, especially the elements around data science and machine learning. I think it was a good year and I was really pleased to be recognised with the award.
In your role, what's the split between defining what your organisation needs and actually designing and delivering the solution?
With the type of positions I’ve held I think I’ve had to broach both of these in relatively equal measure. Data science and machine learning are extremely powerful tools, but a lot of organisations have struggled to gain traction and generate value in this space, I think because of a few misunderstandings about how to approach these projects.
For example, there’s often a misconception that you get some smart people in a room and some magic happens.
I’ve spent a lot of time in the organisations I’ve been part of educating who I can that really it’s still about the same things as for other tech projects: process, clarity of purpose and solid business cases.
For the UX design and research community: what part do we play in creating machine learning technology? For example - interviewing end users or designing visualisations.
I think this is a massively overlooked area. A lot of things I build are backend processes that feed into other systems but at the end of it a user has to interact with the result in some way or another. You could argue this is just standard UX for whatever that solution is, but I think the unique aspects of ML have to be incorporated so that you don’t lose users on their journey.
For example, how do you communicate effectively that an ML algorithm’s results are not certainties but often probabilistic in some sense? How do you get users to provide feedback to use as more training data? How do you ensure that any suggestions or insights being provided augment their workflow and don’t detract from it? And of course, like you mentioned, how do we effectively visualise outputs so that anyone can understand what the results say? I think there’s an assumption that this is easy and it can be incorporated into the jobs of the people building the algorithms, but that’s a mistake in my view. Give it to the experts, with the time, tools and appropriate focus and you’ll create something special.
I'm personally interested in learning https://ml5js.org/ - an open source framework for ML which includes image recognition and NLP. What tools or language would your recommend for designers? Do you use any yourself?
For recommendations it depends what you want to do. If you are looking to embed ML in front end applications then frameworks like ml5js or similar fit the bill. If you want to extend this out to the typical ML workflows you’d be looking at the usual Python/Scala (and now languages like Julia) ecosystems. For large scale compute it’s all about Spark and that ecosystem. As I say depends what you are after.
Having said all that what I am super interested in is always building amazing services that other solutions can consume from, including beautiful front ends and visualisation tools. So I’m always keen to hear if anyone has tips for making this interaction more seamless.
In your opinion, what's likely to be the biggest development in ML in 2021?
In general, across the enterprise, my hope and belief is that MLOps will start to mature quite substantially and we will see some real examples of excellence in this space (including at my current employer!).
MLOps is the incorporation and application of ideas from DevOps to the world of ML and is so important in my mind for successful products that can run stably for years.
I’m also pretty excited by the prospects of algorithms on graphs. I know this is a huge area of interest across many industries and I can particularly see its relevance to finance (where I work). Applying algorithms to find connections or understand relationships between entities and processes is to me really impressive, valuable and also just pretty cool!
Finally, I’m hoping that Reinforcement Learning in real world applications starts to take off a bit more. I’ve built a few prototypes using these techniques in the past but taking them through to production is fraught with challenges, and I hope that the community makes some strides in this direction (and I hope that I can help)!
|
OPCFW_CODE
|
=== Zorro Trader for Algorithmic Trading with Dhan ===
The world of financial trading has been revolutionized by the advent of algorithmic trading, which allows traders to execute high-frequency trades with precision and efficiency. Zorro Trader is a powerful and versatile platform that enables traders to develop, test, and execute automated trading strategies. When combined with the advanced capabilities of Dhan API, this integration offers a seamless trading experience, maximizing profit potential in the dynamic world of financial markets.
=== Features and Benefits of Zorro Trader for Algorithmic Trading ===
Zorro Trader is a comprehensive platform that provides a wide range of features to support algorithmic trading. One of its key benefits is the ability to develop and test trading strategies using a simple scripting language, making it accessible to both novice and experienced traders. Zorro Trader also offers a suite of technical indicators and charting tools, allowing traders to analyze market data and make informed decisions. Additionally, it provides real-time market data and supports multiple asset classes, including stocks, futures, and forex, giving traders the flexibility to diversify their portfolios.
The integration of Dhan API with Zorro Trader further enhances the platform’s capabilities. Dhan API offers access to a vast array of financial data and advanced trading functionalities, enabling traders to make more informed trading decisions. Integration with Dhan API allows traders to access real-time market data, execute trades with low latency, and monitor their portfolios seamlessly. This integration empowers traders with the ability to capitalize on market opportunities promptly and effectively, giving them a competitive edge in the fast-paced world of algorithmic trading.
=== Integration of Dhan API with Zorro Trader for Seamless Trading ===
The integration of Dhan API with Zorro Trader is a game-changer for algorithmic traders. Traders can connect their Zorro Trader platform to Dhan API easily, allowing them to access real-time market data and execute trades directly from the platform. The integration provides traders with a smooth and efficient workflow, eliminating the need for manual data entry and reducing the risk of errors. With this seamless integration, traders can focus on developing and refining their trading strategies, confident in the reliability and accuracy of the data provided by Dhan API.
=== Maximizing Profit Potential with Zorro Trader and Dhan: A Case Study ===
To illustrate the power of the integration between Zorro Trader and Dhan API, let’s consider a case study. A trader develops an algorithmic trading strategy using Zorro Trader, taking advantage of the advanced technical analysis tools and real-time market data provided by Dhan API. With the seamless integration, the trader can backtest the strategy and fine-tune it based on historical data, optimizing the performance. Once satisfied with the results, the trader can then execute the strategy in a live trading environment, leveraging the low latency trading capabilities offered by Dhan API. This integration allows the trader to maximize profit potential by automating the trading process and capitalizing on market opportunities swiftly and accurately.
In conclusion, the integration of Dhan API with Zorro Trader opens up a world of possibilities for algorithmic traders. The combination of Zorro Trader’s comprehensive features and Dhan API’s advanced trading functionalities creates a powerful and efficient trading platform. Traders can benefit from real-time market data, low latency trading, and seamless execution of trading strategies. With this integration, traders can maximize their profit potential, gaining a competitive edge in today’s fast-paced financial markets.
|
OPCFW_CODE
|
Localhost internal-page links not working (razor html page)
accordion on localhost
What I am trying to do is run an accordion in a Blazor razor html page in a project that I am running locally. Here is the code for that accordion (Bootstrap css), which works perfectly well in an online editor:
<div class="panel panel-default">
<div class="panel-heading">
<h5 class="panel-title stath5" id="farmh5">
<a data-toggle="collapse" href="#collapse1"><span class="badge badge-secondary">4.</span> Farm</a>
</h5>
</div>
<div id="collapse1" class="panel-collapse collapse">
<ul class="list-group">
<li class="list-group-item">
<p class="statp"><span class="badge badge-info">Quantity of:</span></p>
</li>
<li class="list-group-item">
<label class="statlabel" for="beefCattle"> Beef Cattle:</label><br />
<InputNumber id="beefCattle" class="input"<EMAIL_ADDRESS>onwheel="this.blur()" placeholder="Beef cattle..." autofocus />
</li>
<li class="list-group-item">
<label class="statlabel" for="dairyCattle"> Dairy Cattle:</label><br />
<InputNumber id="dairyCattle" class="input"<EMAIL_ADDRESS>onwheel="this.blur()" placeholder="Dairy cattle..." />
</li>
<li class="list-group-item">
<label class="statlabel" for="horses"> Horses:</label><br />
<InputNumber id="horses" class="input"<EMAIL_ADDRESS>onwheel="this.blur()" placeholder="Horses..." />
</li>
<li class="list-group-item">
<label class="statlabel" for="hogs"> Hogs:</label><br />
<InputNumber id="hogs" class="input"<EMAIL_ADDRESS>onwheel="this.blur()" placeholder="Hogs..." />
</li>
<li class="list-group-item">
<label class="statlabel" for="sheep"> Sheep:</label><br />
<InputNumber id="sheep" class="input"<EMAIL_ADDRESS>onwheel="this.blur()" placeholder="Sheep... " />
</li>
<li class="list-group-item">
<p class="statp" id="perBunch">Number of birds <b>per bunch:</b></p>
</li>
<li class="list-group-item">
<label class="statlabel" for="hens"> Hens:</label><br />
<InputNumber id="hens" class="input"<EMAIL_ADDRESS>onwheel="this.blur()" placeholder="Hens per bunch..." autofocus />
</li>
<li class="list-group-item">
<label class="statlabel" for="breeders"> Breeders:</label><br />
<InputNumber id="breeders" class="input"<EMAIL_ADDRESS>onwheel="this.blur()" placeholder="Breeders per bunch..." />
</li>
<li class="list-group-item">
<label class="statlabel" for="pullets"> Pullets:</label><br />
<InputNumber id="pullets" class="input"<EMAIL_ADDRESS>onwheel="this.blur()" placeholder="Pullets per bunch..." />
</li>
<li class="list-group-item">
<label class="statlabel" for="broilers"> Broilers:</label><br />
<InputNumber id="broilers" class="input"<EMAIL_ADDRESS>onwheel="this.blur()" placeholder="Broilers per bunch..." />
</li>
</ul>
</div>
</div>
</div>
HOWEVER, when I run the Blazor page on localhost and click the accordion link, the URL becomes https://localhost:44301/#collapse1 and the accordion does not open. Is this fixed when the accordion is opened from a real website rather than localhost? How so? How can I see it on localhost and resolve this issue?
Any assistance would be invaluable
Any hints from the browser console?
How do you mean - I'm not seeing anything special
F12 to access developer tools, or Command+Option+C on Mac. Click on the console tab.
Yes, but I don't see anything suggestive there
You can also try running it in codepen and if it works there, compare the generated CSS with the generated CSS on your site and see if they differ for some reason.
If there's any other way to resolve it I would appreciate that
It would be a nightmare to move all of the code and imports to CodePen
`https://localhost:44301/#collapse1` is this correct address?
There is no file name between slash and anchor - dead end
I could be wrong; I don't use localhost, so I can't check it. It was my first impression, but if the machine knows exactly where it is, it can handle this. And without an anchor, does it open a file? Can you create a test structure of directories and files of different types?
How would I go about adding the filename, it's in localhost
Blazor intercepts nav links, but not if you specify a target
<a data-toggle="collapse" href="#collapse1" target="_top">
Also, bear in mind that if this is not on the "/" route in your app, that you need the route in the href.
For example on a route of "/mypage" your links need to look like this
<a data-toggle="collapse" href="mypage#collapse1" target="_top">
Thank you, finally I can get on the project!
Ok, thought that would work. Guess I was wrong. When I click the accordion link:
https://localhost:44301/statistics#collapse1
And nothing changes. The accordion still will not open on localhost
Presumably you have some JS code that does the actual collapsing because an href link will not do it alone. Do you see any errors in the Dev tools?
No, I'm using Bootstrap 4 (which is tested and works). There is one error in the console, about an unhandled circuit error.
Any ideas about it?
|
STACK_EXCHANGE
|
sqlite DB to-do during iphone app update
I have some general questions about iphone app updates that involves sqlite db.
With the new update does the existing sqlite db get overwritten with a copy of the new one?
If the update doesn't involve any schema changes, then the user should be able to reuse the existing database with their saved data, right? (If the existing database doesn't get overwritten, per question 1 above.)
If there are some schema changes, what's the best way to transfer data from the old database into the new one? Can someone please give me guidelines and sample code?
Also excellent answer: http://developer.appcelerator.com/question/127927/will-app-store-update-force-update-of-the-sqlite-database-too#
Only files inside the app bundle are replaced. If the database file is in your app's Documents directory, it will not be replaced. (Note that if you change files inside your app bundle, the code signature will no longer be valid, and the app will not launch. So unless you are using a read-only database, it would have to be in the Documents directory.)
Yes.
What's best depends on the data. You're not going to find sample code for such a generic question. First, you need to detect that your app is running with an old DB version. Then you need to upgrade it.
To check versions:
You could use a different file name for the new schema. If Version2.db does not exist but Version1.db does, do an upgrade.
You could embed a schema version in your database. I have a table called metadata with a name and value column. I use that to store some general values, including a dataversion number. I check that number when I open the database, and if it is less than the current version, I do an upgrade.
Instead of creating a table, you could also use sqlite's built-in user_version pragma to check and store a version number.
You could check the table structure directly: look for the existence of a column or table.
To upgrade:
You could upgrade in place by using a series of SQL commands. You could even store a SQL file inside your app bundle as a resource and simply pass it along to sqlite3_exec to do all the work. (Do this inside a transaction, in case there is a problem!)
You could upgrade by copying data from one database file to a new one.
If your upgrade may run a long time (more than one second), you should display an upgrading screen, to explain to the user what is going on.
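The version-check-then-upgrade flow described above can be sketched as follows. This is a minimal illustration using Python's built-in sqlite3 module for brevity (on the iPhone you would issue the same PRAGMA and DDL statements through the sqlite3 C API); the notes table and the version numbers are invented for the example:

```python
import sqlite3

SCHEMA_VERSION = 2  # bump whenever the shipped schema changes

def open_and_migrate(path):
    """Open the database and bring its schema up to SCHEMA_VERSION."""
    conn = sqlite3.connect(path)
    # user_version is a free integer slot in the database header, 0 by default
    (found,) = conn.execute("PRAGMA user_version").fetchone()
    if found < 1:
        # version 0 -> 1: first install, create the base table
        conn.execute(
            "CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
    if found < 2:
        # version 1 -> 2: a later release added a timestamp column
        conn.execute("ALTER TABLE notes ADD COLUMN created_at TEXT")
    if found < SCHEMA_VERSION:
        conn.execute("PRAGMA user_version = %d" % SCHEMA_VERSION)
        conn.commit()
    return conn
```

Because user_version starts at 0 on a freshly created database, the same function handles both a first install and an upgrade, and each migration step runs at most once.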
Minor note on version checking: instead of using a separate table for the schema version, you could also use the built in "user_version" pragma (http://www.sqlite.org/pragma.html#pragma_schema_version).
Oh, cool! I wish I had known about that. Will edit my answer.
Is the version1.db / version2.db upgrade approach not rather hard? Do we have any other options?
1) The database file isn't stored as part of the app bundle so no, it won't get automatically overwritten.
2) Yes - all their data will be saved. In fact, the database won't get touched at all by the update.
3) This is the tricky one - read this fantastically interesting document - especially the part on lightweight migration. If your schema changes are small and follow a certain set of rules, they will happen automatically and the user won't notice. However, if there are major changes to the schema you will have to write your own migration code (that's in that link as well).
I've always managed to get away with lightweight migrations - it's by far easier than writing the migration code yourself.
The link in #3 only applies to using Core Data. The question seems to be about using SQLite directly.
The document refers to data migration using Core Data... I'm looking for something like direct data transfer to a sqlite db (new db) from a sqlite db (existing, with the user's data).
What I do is that I create a working copy of the database in the Documents directory. The main copy comes with the bundle. When I update the app I then have the option to make a new copy over the working copy, or leave it.
what if you need to transfer data to the new copy ? are there any sample migration scripts you can point me to ?
I generally run two databases, a static one which keeps all the relatively static tables, and a dynamic one. So migration is generally not an issue because I update only the required database. Migration has not been an issue.
|
STACK_EXCHANGE
|
Sleuth not reporting client spans with Feign builder
First, if you think this issue would have been better on Stack Overflow or elsewhere, then forgive me and tell me so, so I'll know for next time.
Describe the bug
When using Feign.builder() to create a BuilderFeign client with the Ribbon client LoadBalancerFeignClient, I notice that the client spans are not sent to Zipkin.
I noted the documentation saying 'nothing created manually will work', yet I see some instrumentation.
TraceFeignObjectWrapper.wrap handles LoadBalancerFeignClient and, from my understanding, TraceLoadBalancerFeignClient creates a span (that should be sent downstream) and abandons it at the end. On the other hand, for other Feign clients, TracingFeignClient performs the receive.
Given the specific code, I suppose there is a rationale behind this choice, but I do not understand it.
Could you explain it?
I managed to get my client reporting client spans by creating a class that extends TraceLoadBalancerFeignClient and mimics the content of TracingFeignClient, but I'm surprised that this is needed.
Sample
A sample app with these classes reproduces the behavior:
public interface BuilderFeign {
@RequestLine("GET /")
String pingGithub();
}
@Configuration
public class BuilderFeignConfig {
@Bean
static BuilderFeign blotterFeignClient(ObjectFactory<HttpMessageConverters> messageConverters,
Client loadBalancerFeignClient){
Request.Options options = new Request.Options(30000, 30000);
return Feign.builder()
.encoder(new SpringEncoder(messageConverters))
.decoder(new SpringDecoder(messageConverters))
.decode404()
.retryer(NEVER_RETRY)
.options(options)
.client(loadBalancerFeignClient)
.target(BuilderFeign.class, "https://api.github.com");
}
@Bean
public Client loadBalancerFeignClient(CachingSpringLoadBalancerFactory cachingFactory,
SpringClientFactory clientFactory, BeanFactory beanFactory,
HttpTracing httpTracing) {
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
cm.setMaxTotal(200);
cm.setDefaultMaxPerRoute(20);
return new LoadBalancerFeignClient(new Client.Default(null, NoopHostnameVerifier.INSTANCE), cachingFactory, clientFactory);
// client spans are reported with this implementation
// return new CustomTraceLoadBalancerFeignClient(httpTracing, new Client.Default(null, NoopHostnameVerifier.INSTANCE), cachingFactory, clientFactory, beanFactory);
}
}
@GetMapping("/builderFeign")
public ResponseEntity<?> getFromBuilderFeign() {
return new ResponseEntity<>(this.builderFeign.pingGithub(), HttpStatus.OK);
}
My class CustomTraceLoadBalancerFeignClient is a copy of TracingFeignClient + other needed copies (due to closed visibility) except the call to this.delegate.execute(modifiedRequest(request, headers), options) is replaced by super.execute(modifiedRequest(request, headers), options)
Which version of Sleuth are you using?
If you would like us to look at this issue, please provide the requested information. If the information is not provided within the next 7 days this issue will be closed.
sorry for the missing part.
this version was made with spring boot 2.2.5.RELEASE and spring cloud Hoxton.SR2
Can you please check if the problem still persists with the latest 2.2.x and 2.3.x ?
I checked and still reproduce the issue with
boot 2.3.2.RELEASE and cloud Hoxton.SR6
boot 2.2.9.RELEASE and cloud Hoxton.SR6
The workaround is still valid too. Thus I'm puzzled by the current implementation of TraceFeignObjectWrapper.wrap
AFAIR that's because we wrap the delegate and we don't want to wrap twice - the LoadBalancerFeignClient and its delegate.
Can you create a sample with your code?
My bad. While creating the sample free of corporate stuff I found out I did not test properly earlier.
The client spans are now reported correctly with latest versions.
Thanks.
|
GITHUB_ARCHIVE
|
In this article, we will discuss HTML attributes, which are one of the essential topics in HTML.
In HTML, attributes are used to provide additional information about elements. For most HTML elements, attributes are optional, but there are some elements where we have to provide them. Attributes are always specified within the opening tag of the HTML element, and they are specified as name and value pairs.
<image src= "cat.jpg" alt ="cat image">
In this example, src and alt are two attributes, where src and alt are the attribute names and "cat.jpg" and "cat image" are the attribute values. Here the alt attribute is optional, but src is mandatory, because src specifies which image to show. There should be at least one space between two attributes, and the value of an attribute should be enclosed in double quotes.
Some most important HTML element attributes:
<a> Element href Attribute
<a> anchor tags are used to create links on the web page and the href attribute specify the address of the link.
<img> src Attribute
src is a mandatory attribute that must be passed along with the <img> tag; it specifies the location of the image that is supposed to be displayed on the web page.
<img src="dog.jpg">
img Width and Height attributes
In the image <img> tag, we can also pass the width and height attributes to size the image.
<img src="dog.jpg" width="500" height="700">
Here width="500" means the image will be 500 pixels wide.
The alt attribute can be used with various HTML elements, but it is mostly used with the <img> element. In case the browser fails to load the image, the text of alt will be displayed on the screen. Even screen reader apps can read the alt information, so a visually impaired person can understand what the image's context is.
<img src="cat.png" alt="black cat" height="200" width="300">
The style attribute is used to provide inline styling to HTML elements. It is an inline alternative to CSS. The style attribute can be applied to any HTML element, and it is mostly used to change an element's font size, colour, style, etc.
<h1 style="color:red">Welcome to TechGeekBuzz.</h1>
The lang attribute is defined inside the <html> tag, and it describes the language of the document's content.
The title attribute can be used with various HTML elements, and you can see its value when you hover your mouse over the element's content.
<h1 title="techgeekbuzz"> Hover over me! </h1>
Points to remember
While writing attributes, there are some points we need to keep in mind to keep our code good.
| Rule | Bad Code | Good Code |
|---|---|---|
| Always use quotes for attribute values | <a href=http://www.techgeekbuzz.com> | <a href="https://www.techgeekbuzz.com"> |
| Use lowercase characters for attribute names | <a HREF="https://www.techgeekbuzz.com"> | <a href="https://www.techgeekbuzz.com"> |
| Always provide one space before the next attribute | <img src="img.png"width="500"height="200"> | <img src="img.png" width="500" height="200"> |
If you try to run this bad code, it will give the same result as the good code, but it is always good practice to keep your code clean and systematic, so you and other developers can read and understand it.
- HTML attributes provide additional information about the HTML elements.
- Every HTML element can have attributes.
- There are some attributes which are specific to some tags, and there are some attributes which can be used with multiple tags.
- Always use double or single quotes to represent attribute values.
- Always use the lowercase character for attribute names.
Was reviewing a sample configuration for a DHCP server in the Juniper SRX Series book by O'Reilly. In the book they want to provide DHCP to ports 1-7 on an SRX 100. The DHCP server is set up, but the propagate-settings command is set to the interface fe-0/0/0.0. This is the outside interface, aka the untrusted interface. Why would you place this setting here, and how would it propagate to the other ports? Is it because 0.0 is the default VLAN? Normally when I set up a DHCP server on an SRX, I propagate it to the sub-interface or specified VLAN. What are the security concerns with setting propagate-settings to interface 0.0?
That configuration comes as the factory default configuration, since fe-0/0/0.0 is in the default VLAN. You can change the configuration according to your convenience and needs. It's not mandatory to use the factory default configuration.
Enable or disable the propagation of TCP/IP settings received on the device acting as Dynamic Host Configuration Protocol (DHCP) client. The settings can be propagated to the server pool running on the device. Use the system services dhcp statement to set this feature globally. Use the system services dhcp pool statement to set the feature for the address pool and override the global setting.
An example is using the DNS servers you get via DHCP from your ISP and reusing them on the "internal" DHCP scope. So when the ISP decides to change them, they will automatically be changed on your LAN as well.
The below URL will help you to understand the working of DHCP propagate-settings option.
You know? This really never got a good answer. Suraj's link to the seemingly only other page dealing with the topic was ... unimpressive. Yeah, the gent showed his commands, but it really didn't deal with the question either, and it is from 2010.
The TOPIC of how the command works has had scant dialogue. Frankly, why use it at all? Rhetorical, but still. Who can explain, with some detail and eloquence, and not just link to a page as equally non-elucidating? With all due respect ...
Sorry for all the confusion. I think what you are asking is what does the propagate setting on the trust dhcp server pointing to the untrust interface actually do.
This setting will get the DNS servers that the untrust dhcp setting receives from upstream and use these same dns servers as the setting to give to dhcp clients on the trust interface.
The second definition to the word "propagate" tells you what this does: "spread and promote".
The propagate-settings clause takes data from where the DHCP lease was originally received, and uses it when clients query for DHCP on other interfaces / zones.
So the reason fe-0/0/0 is the default is that's where your ISP would connect (at least in Juniper's instructions, but you can connect it wherever). Items you want to be propagated are handled automatically; any overrides, you need to specify. So you'll want your network IP range/CIDR, "inside" router address, etc. to be different, but things like SIP settings, DNS settings, Connection-specific DNS Suffix, etc. from your ISP will be propagated (spread and promoted) from the untrusted interface to the trusted interface(s).
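For reference, a minimal sketch of such a configuration on an SRX (the pool subnet, address range and router address below are illustrative, not from the book):

```
set system services dhcp pool 192.168.1.0/24 address-range low 192.168.1.10 high 192.168.1.100
set system services dhcp pool 192.168.1.0/24 router 192.168.1.1
set system services dhcp propagate-settings fe-0/0/0.0
```

With propagate-settings pointed at the untrust interface, settings such as the DNS servers learned there as a DHCP client are reused in the pool handed out on the trust side.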
[Question] Configurable rule to check any schemas within responses
Apologies if there was a better place for this.
I'm trying to write a rule that doesn't allow the enum property within API responses, but does allow it within API requests.
Either of these options definitely work if I wanted to disallow the enum property everywhere:
rule/no-enums:
  subject:
    type: Schema
    property: enum
  assertions:
    defined: false
rule/no-enums:
  subject:
    type: any
    property: enum
  assertions:
    defined: false
But I'm completely struggling to write something that only targets Schema nodes referenced from anything under a Response node. Among other things, I've tried:
rule/no-enums-in-responses:
  where:
    - subject:
        type: Response
      assertions:
        defined: true
  subject:
    type: Schema
    property: enum
  assertions:
    defined: false
and even enumerating all the nodes between Response and a Schema:
rule/no-enums-in-responses:
  where:
    - subject:
        type: Response
      assertions:
        defined: true
    - subject:
        type: MediaTypesMap
      assertions:
        defined: true
    - subject:
        type: MediaType
      assertions:
        defined: true
  subject:
    type: Schema
    property: enum
  assertions:
    defined: false
I've also tried things like:
rule/no-enums-in-responses:
  where:
    - subject:
        type: Response
        filterInParentKeys:
          - '200'
      assertions:
        defined: true
  subject:
    type: Schema
  assertions:
    disallowed:
      - enum
I'm at a loss. I'm even trying to run these on a bundled version of my spec so there are no references to possibly mess things up. Thanks in advance for any help!
This is interesting.
Schema could contain other schemas, and, in fact, in your example, you refer to a nested schema.
If you aren't using the where clause, it propagates further down the nested schemas.
The problem here is that whenever you use the where, it stops at the first Schema level it encounters after Response (which, in your case, doesn't contain an enum).
I'm not entirely certain if we want to change the behaviour as it will change how the existing rules work (it might actually be useful for some). I have to think about this more.
As a workaround, you can write a custom rule through a plugin for that case. However, it requires some knowledge of JavaScript.
A different workaround might be adding another level of the where funnel:
rule/no-enums-in-responses:
  where:
    - subject:
        type: Response
      assertions:
        defined: true
    - subject:
        type: SchemaProperties # <-- this ensures we bypass the first level of Schema and check only on the 2nd level
      assertions:
        defined: true
  subject:
    type: Schema
  assertions:
    disallowed:
      - enum
Please keep in mind that the rule will only check for enums in the first Schema inside of the first SchemaProperties.
Let me know if that helps.
Hey, thank you for the response!
I'm getting by with just blocking enum everywhere and adding exceptions for enums in requests via the redocly ignore file. And I think, for now, it's enough to know that this isn't possible really out of the box.
FWIW my motivation here is that enums in responses are risky from a breaking-changes perspective, but they're not as risky when they're in requests (adding to an enum is a breaking change in a response, but adding to an enum in a request is not). I would expect that it's not uncommon for APIs to have different expectations for schemas in requests vs. responses.
Date: 04-04-06 03:04
I am obviously a sub newbie, but here goes...
I'm thoroughly confused by the permission system in FreeBSD and need some clarification.
I'm trying to install Zope and have been advised to install it as a user not as root and also not as nobody, for security reasons - the idea being that if another program is running as nobody and is breached, then an intruder might gain access to Zope's database. And I guess this also means that if Zope was breached and it was owned by root that would be even worse. Right?
OK, so if I install Zope as a user I create called 'zope' and make that user belong to the group 'zope' does that cause any problems like zope not being able to run programs not owned by 'zope'? I was imagining that if Zope wasn't a member of the wheel group, then I wouldn't be able to su to root if I needed to at some point, or is making Zope a member of the wheel group a security risk, and if so why?
Also, one set of installation instructions I ran across (on the Plone site) indicated that it was important to install as a regular user (not root), but NOT as the user Zope would ultimately run under, for security reasons. So why can't I just install Zope as user 'zope' ?
This is where more confusion lies: I noticed in the makefile from the zope 2.7.8_1 port that the "zope user" is set to be 'www'. But I thought users were real people. Also Apache is frequently set to run as user 'www'. What's going on?
Now for more confusion. Zope runs on Python. My system (PCBSD) already has Python 2.4 installed, but the latest version of Plone requires Zope 2.7.8_1, which only runs on Python 2.3. So the Zope port installed Python 2.3 in the same /usr/local/bin as version 2.4. Is this going to cause a problem? If I have to uninstall Python 2.3 and reinstall as user 'zope' or my regular account, will some files of the same name common to Python 2.3 and 2.4 get erased? This is potentially a big problem, as I don't know how PCBSD installed Python 2.4. If I had to reinstall version 2.4, I wouldn't know what compiler switches to use for optimum performance. I'll be damned if I have to reinstall PCBSD too! Should I use 'pkg_delete' or 'make deinstall' in the port tree? What's the difference?
I originally installed Zope as root, so Python 2.3 also got installed that way. Now I uninstalled Zope so as to do it over as user 'zope' but haven't yet uninstalled Python to reinstall as another user, presumably 'zope'. Is this necessary or can I just chown Python and Zope?
Wait, there's more! Zope instructions talk about setting SUID bit in the zope.conf file. What's that for?
I'm confused as to what users are, and if programs get installed in different directories depending on who I am when I install them and whether a program's ability to access other necessary programs (like Zope using Python 2.3) requires identical permissions, or do programs just figure out where other programs they need reside? I did notice that part of the configure shell script explicitly shows the path to Python, however.
All the books I've read don't really address these issues in depth.
To make matters worse, Zope has its own definition of a user, but I think I understand that, and that it's not related to the Unix definitions above.
EASY CoNET / EASY ONLINE
EASY Lyon / EASY CoNET is a software package for programming and monitoring the status of the Cofem Lyon, Compact Lyon and Zafir control panels. Since these panels can control over a thousand points, an effective system for labelling and programming is needed to make working with the control panel easier, faster and more intuitive.
The EASY CoNET software is designed for two functions:
Configuration of control panel:
The EASY CoNET software (basic version) can be loaded on any PC (usually a laptop). It allows you to prepare the installation information (programming numbers, point labels, activation of relays, zones, etc.) for downloading to the control panel through a USB connection between the PC and the control panel. This way, the control panel configuration can be prepared comfortably anywhere, and you only need to go to the control panel's location to download the configuration and start up the system. Furthermore, the EASY CoNET software also makes it easier to manage and control the configurations of Lyon, Zafir and Compact Lyon control panels.
Control panel management with PC:
The EASY CoNET software (extended version) allows ONLINE, real-time management of the control panel from a PC, allowing you to interact with it (monitor, disable zones, put on test, activate the evacuation, etc.), as well as showing all incidents (warning lights, location maps, the ability to disable or reset a detector, a relay, etc.).
• Software for programming and management of Lyon, Zafir and Compact Lyon control panels.
• The software can be installed on any PC (the minimum requirements are described in the EASY CoNET manual).
Basic Version (for control panel configuration):
• Allows programming the control panel on a PC (usually a laptop) in a Windows environment, then connecting to the control panel and downloading the information to it.
• Connection with USB.
• Easy management of all configurations with Lyon, Zafir and Compact Lyon control panels.
• Avoids having to configure the control panel in front of it.
• The control panel configuration can be prepared anywhere.
• ONLINE management of the control panel, with multiple control possibilities in an easy Windows environment (monitor, put on test, location maps, disable or reset relays, etc.).
• Distances of up to 1200 m between the control panel and the PC are supported, using RS232/485 converters.
• Allows using wired connections and the TCP/IP protocol in the installation.
Proof-of-work is described in the Bitcoin whitepaper. It is Satoshi Nakamoto's key contribution to Bitcoin and differentiates it from preceding digital currencies. Proof-of-work demonstrates that a hard puzzle was solved. The puzzle consists of hashing the contents of a block together with a nonce, until the returned hash starts with a defined number of zeros. The number of zeros required determines the difficulty. Finding the right nonce to compute a hash with the right number of leading zeros is computationally expensive; however, verifying the result is comparatively easy. Every block on the blockchain has to be backed by proof-of-work before it is accepted. Let's quickly refresh how hashing works and how it's used in Bitcoin. A short Ruby script gives a simplified idea of how it works in practice.
The hashing algorithm SHA-256, which is used by Bitcoin, works by taking an input and returning a deterministic output that is unique to its input. A slight variation of the input will return a completely different output. Also, two different inputs should not generate the same output, and the output is of fixed length, independent of the input. It's infeasible to reverse the hash, so the output does not reveal the input. This one-way property is conceptually similar to the asymmetry in factoring: multiplying prime numbers is easy, but recovering the prime factors from their product is very hard.
If we take the string "hello world" and run it through a SHA-256 hash function, we get:
$ echo -n "hello world" | shasum -a 256
b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9
You can run this code in your terminal or use an online hash generator to test this yourself.
By adding one exclamation mark to the same string the output is completely different.
"hello world!" -> 7509e5bda0c762d2bac7f90d758b5b2263fa01ccbc542ab5e3df163be08e6ca9
Hashing as proof-of-work
If we now wanted to find the nonce that combined with the string “hello world!” returns a hash output with two leading zeros, we have to try on average 16**2 times. The nonce in this case is a number added to the end of the string.
"hello world!0 -> 07e889f7afef842b21e0ed5bd0075fa59afe3e9f0b4bbd8bcc9a06c70b086e78
"hello world!1" -> 19a3bc0e649a694d06304c3ed122a20a5350673003302fafc5c5f292407a4d25
"hello world!2" -> 8d9c79068e59548c71fdab168cd67a7356c7a1d29ea6431c0aef3527216ca673
"hello world!3" -> 4b5a6a605df3ee03573c8a05e6a41f4740dc48a62c65563885740282400dbc2e
"hello world!4" -> d5f7845e425302f3439735991372134233cd2ca02e809813483ac9955f178062
"hello world!5" -> ec70092dd9bc002860baff4fb938c05741e32f735338842dc704a2b7363d3ebb
"hello world!49" -> 00501ed342b15e0f16e64b463725150ee1216c9237f396863940af1b064700ba
It took us 50 hashes (nonces 0 through 49) to find the right nonce which, appended to the string "hello world!", generates a SHA-256 hash output with 2 leading 0s.
The difficulty increases exponentially with the number of 0s required in the output. To get an output with five leading zeros, the nonce has to be 2030350, which translates to more than 2 million attempts. At the time of writing this, the Bitcoin proof-of-work algorithm requires a hash output with 19 leading zeros before a block is accepted by the network and added to the Bitcoin blockchain: Blockexplorer.
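The growth is easy to check: each additional required zero multiplies the expected work by 16, since each hex digit of the hash is uniform over 16 values. A quick sketch in Ruby:

```ruby
# Average number of attempts needed for a hash with n leading hex zeros
# is 16**n, because each hex digit has 16 equally likely values.
(1..5).each do |n|
  puts "#{n} leading zeros: ~#{16 ** n} attempts on average"
end
```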
While it’s hard to find the right nonce, it is easy to reverse the task and verify that the hash is correct, as the hash function just has to be run once.
"hello world!2030350" -> 000009b3111cb851956ed3ebda92b68074aa462b2ac3c82b469bbb36e4d2cc21
To change a block the work has to be redone. Since blocks are chained together by referencing the previous block’s hashed header, the further back a block in the chain, the harder it is to change, since all blocks after it would have to be changed as well. That’s why it’s sensible to wait for a transaction to be confirmed by ~6 blocks after it in order to be sure that the transaction cannot be reversed.
Nakamoto writes in the whitepaper:
Proof-of-work is essentially one-CPU-one-vote. The majority decision is represented by the longest chain, which has the greatest proof-of-work effort invested in it. If a majority of CPU power is controlled by honest nodes, the honest chain will grow the fastest and outpace any competing chains. To modify a past block, an attacker would have to redo the proof-of-work of the block and all blocks after it and then catch up with and surpass the work of the honest nodes.
Therein lies the unique solution of the proof-of-work algorithm, which secures consensus in a system where more than 50% of the CPU power is "controlled by nodes that are not cooperating to attack the network".
Proof-of-work in Ruby
Below is a simplified implementation of proof-of-work in Ruby (inspired by Haseeb Qureshi’s lecture).
require 'digest'

class ProofOfWork
  def initialize(difficulty:)
    @required_number_of_zeros = difficulty
  end

  def find_nonce_for(block_content:)
    nonce = 0
    nonce += 1 until found_correct_nonce?(nonce, block_content)
    nonce
  end

  def found_correct_nonce?(nonce, block_content)
    Digest::SHA256.hexdigest(block_content + nonce.to_s).start_with?(@required_number_of_zeros)
  end
end

worker = ProofOfWork.new(difficulty: "00000")
nonce = worker.find_nonce_for(block_content: "hello world!")
# => 2030350
The script let’s you define the diffifculty by specifying the amount of zeros the resulting hash should begin with. It then takes a string, which in our case is
"hello world!", but in a Blockchain will be the contents of the block header and the transactions of the block.
find_nonce_for function iterates over a nonce in a loop. The nonce is 0 in the beginning and in each iteration increases by 1, until
found_correct_nonce? returns true. It will return true when a nonce has been found which appended to the string
"hello world!" and then passed through the SHA256 hash function results in a hash with the defined diffculty, in our case five zeros. When the nonce is found, the loop stops and returns the nonce which is the number our proof-of-work algorithm sought. As we see above it is
2030350. You can run this code and adjust the difficulty to see how much longer it takes on average to find the right hash.
With the nonce and the string we can quickly verify that combined they result in the same right hash.
require 'digest'
Digest::SHA256.hexdigest("hello world!" + "2030350")
# => "000009b3111cb851956ed3ebda92b68074aa462b2ac3c82b469bbb36e4d2cc21"
The difficulty of proof-of-work is highlighted when compared to the ease of verification, measured by how long it took to complete:
$ time ruby proof_of_work.rb
$ time ruby verify_proof_of_work.rb
On my machine it took 3.65 seconds to find the right nonce and only 0.05 seconds to verify it.
Less than 4 seconds doesn't sound like much, but an increased difficulty quickly becomes computationally very expensive, to a level where additional hardware is required, along with the energy to power it. When mining popular cryptocurrencies, finding the right nonce is a competitive race, and whoever finds it first is awarded coins. Cryptocurrencies adjust the difficulty, i.e. the number of required zeros, to match the hash power of the network, allowing for a consistent creation of blocks. The hash power refers to the combined computation power of all participating miners. Miners run proof-of-work and the nodes in the network verify it.
Proof-of-work has been criticized for its energy consumption, and alternative solutions are being discussed, most notably proof-of-stake.
Object-oriented programming attempts to provide a model for programming based on objects. Object-oriented programming integrates code and data using the concept of an "object". An object is an abstract data type with the addition of polymorphism and inheritance. An object has both state (data) and behavior (code).
Objects sometimes correspond to things found in the real world. For example, a graphics program may have objects such as “circle,” “square,” “menu.” An online shopping system will have objects such as “shopping cart,” “customer,” and “product.” The shopping system will support behaviors such as “place order,” “make payment,” and “offer discount.”
Objects are designed in class hierarchies. For example, with the shopping system there might be high-level classes such as "electronics product," "kitchen product," and "book." There may be further refinements, for example under "electronics product": "CD player," "DVD player," etc. These classes and subclasses correspond to sets and subsets in mathematical logic. Rather than thinking in terms of database tables and programming subroutines, the developer works with objects the user may be more familiar with: objects from their application domain.
Object orientation uses encapsulation and information hiding. Object-orientation essentially merges abstract data types with structured programming and divides systems into modular objects which own their own data and are responsible for their own behavior. This feature is known as encapsulation. With encapsulation, the data for two objects are divided so that changes to one object cannot affect the other. Note that all this relies on the various languages being used appropriately, which, of course, is never certain. Object-orientation is not a software silver bullet.
The object-oriented approach encourages the programmer to place data where it is not directly accessible by the rest of the system. Instead, the data is accessed by calling specially written functions, called methods, which are bundled with the data. These act as the intermediaries for retrieving or modifying the data they control. The programming construct that combines data with a set of methods for accessing and managing that data is called an object. The practice of using subroutines to examine or modify certain kinds of data was also used in non-OOP modular programming, well before the widespread use of object-oriented programming.
Defining software as modular components that support inheritance is meant to make it easy both to re-use existing components and to extend components as needed by defining new subclasses with specialized behaviors. This goal of being easy to both maintain and reuse is known in the object-oriented paradigm as the “open closed principle.” A module is open if it supports extension (e.g. can easily modify behavior, add new properties, provide default values, etc.). A module is closed if it has a well defined stable interface that all other modules must use and that limits the interaction and potential errors that can be introduced into one module by changes in another.
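The ideas above (encapsulation, inheritance, the open-closed principle) can be sketched with the shopping-system example; all class and method names here are made up for illustration:

```python
class Product:
    """Base class: owns its data and exposes it only through methods."""

    def __init__(self, name, price):
        self._name = name      # encapsulated state, accessed via methods
        self._price = price

    def price(self):
        return self._price

    def offer_discount(self, percent):
        # Behavior bundled with the data it controls.
        self._price = self._price * (1 - percent / 100)


class ElectronicsProduct(Product):
    """Subclass: extends the base class without modifying it (open-closed)."""

    def __init__(self, name, price, warranty_years):
        super().__init__(name, price)
        self.warranty_years = warranty_years
```

Callers interact only through the methods, so the internal representation of `_price` can change without affecting the rest of the system.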
|Sessions||Classes Allotted||Time for each class|
|Session 1||2 class||40 min (theory)|
|Session 2||6 class||40 min (theory)|
|Session 3||7 class||80 min (theory + lab)|
|Session 4||7 class||80 min (theory + lab)|
|Session 5||4 class||80 min (lab)|
|Session 6||2 class||40 min (placement coaching)|
- We provide the course with study material.
- We provide 100% job assistance for the related course.
- We provide course explanations by well-educated faculty.
- We provide corporate IT training for the appropriate course.
- We provide certification for the particular course, which will be useful for your future.
- We provide a peaceful environment for class.
- During the course, we provide company-side interaction.
- After the course, we provide a chance to do an internship project for the particular course in corporate companies.
from vector2 import Vector2


class Player:
    def __init__(self, playerNumber, pygame, display):
        self.playerNumber = playerNumber
        self.location = Vector2()
        self.pygame = pygame
        self.display = display
        self.axe = 0
        self.pickaxe = 0
        self.hand = 0

    def setLocation(self, location):
        self.location = location

    def setX(self, x):
        self.location.x = x

    def setY(self, y):
        self.location.y = y

    def moveX(self, x):
        self.location.x += x

    def moveY(self, y):
        self.location.y += y

    def setAxe(self, level):
        self.axe = level

    def setPickaxe(self, level):
        self.pickaxe = level

    def setHand(self, tool):
        self.hand = tool

    def getLocation(self):
        return self.location.x, self.location.y

    def getX(self):
        return self.location.x

    def getY(self):
        return self.location.y

    def getAxe(self):
        return self.axe

    def getPickaxe(self):
        return self.pickaxe

    def getHand(self):
        return self.hand

    def getPlayerNumber(self):
        return self.playerNumber

    def isOnGround(self):
        # Sample the pixels just below the player's two bottom corners;
        # any pixel that is not pure black counts as ground.
        return self.display.get_at((self.getX(), self.getY() + 65)) != (0, 0, 0, 255) or \
            self.display.get_at((self.getX() + 64, self.getY() + 65)) != (0, 0, 0, 255)

    def init(self):
        # Scan the screen in 64-pixel tiles for a spawn point: a black (empty)
        # tile directly above a ground-coloured tile.
        for x in range(0, 1100, 64):
            for y in range(0, 700, 64):
                try:
                    if self.display.get_at((x, y)) == (0, 0, 0, 255) and \
                            self.display.get_at((x, y + 64)) == (246, 195, 143, 255):
                        self.setLocation(Vector2(x, y))
                except IndexError:
                    # get_at raises IndexError for coordinates off the display
                    pass

    def update(self):
        keys = self.pygame.key.get_pressed()
        if keys[self.pygame.K_d]:
            self.moveX(3)
        if keys[self.pygame.K_a]:
            self.moveX(-3)
        if keys[self.pygame.K_w] and self.isOnGround():
            self.moveY(-128)
        if keys[self.pygame.K_s]:
            self.moveY(3)
        if not self.isOnGround():
            self.moveY(2)  # gravity
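The `vector2` module imported at the top isn't shown. A minimal stand-in covering what `Player` actually uses (default construction and mutable `x`/`y` attributes) might look like this; the project's real implementation may do more:

```python
class Vector2:
    """Minimal 2D vector: just two mutable coordinates (an assumed sketch)."""

    def __init__(self, x=0, y=0):
        self.x = x  # horizontal position in pixels
        self.y = y  # vertical position in pixels

    def __repr__(self):
        return f"Vector2({self.x}, {self.y})"
```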
MySQL next row in sequence
I have a query like this:
SELECT id, name, town, street, number, number_addition, telephone
FROM clients
WHERE ((postalcode >= 'AABB' AND postalcode <= 'AACC') OR (postalcode >= 'DDEE' AND postalcode <= 'DDFF'))
ORDER BY town ASC, street ASC, number ASC, number_addition ASC
LIMIT 1
This way I can get first client.
Then I want to get next client (let's say I know that my current client has ID 58 and I want to get next client in the sequence - I'll have one client that's tagged as current and I want to get next/previous) and I already know ID of first client, can you please give me a hint how to achieve this? (With the same ordering)
I found this http://www.artfulsoftware.com/infotree/queries.php#75 but I dont know how to transform these examples to the command when I need to order by multiple columns.
Thanks for your time.
can't you make LIMIT 3 and display the one in the middle?
SELECT
c1.id, c1.name, c1.town, c1.street, c1.number, c1.number_addition, c1.telephone
FROM clients c1
INNER JOIN clients c2
ON (c1.town,c1.street,c1.number,c1.number_addition) >
(c2.town,c2.street,c2.number,c2.number_addition)
WHERE ((c1.postalcode BETWEEN 'AABB' AND 'AACC')
OR (c1.postalcode BETWEEN 'DDEE' AND 'DDFF'))
AND c2.id = '$previous_id'
ORDER BY c1.town, c1.street, c1.number, c1.number_addition
LIMIT 1 -- OFFSET 0
Here it is assumed that id is the primary key for this table.
You need to add the id to the select list, or you will not know the id to put in $previous_id for the next one in line.
Dear drive-by-downvoter, please be so kind as to inform me of the cause of your displeasure with this answer?
I'm not sure if I described the problem well.
Let's say I'm in the middle of a client listing - the current client has ID e.g. 1234 - and I want to get the next client in sequence (with the ordering I described in the example SQL); the next client can have a higher or lower ID (the ordering is based on the client's address).
If I use LIMIT or OFFSET to do this, I'd have to somehow know which position in the table I'm currently on - I'll only know the ID.
To be clear, I'm not the "drive-by-downvoter" I really appreciate your help.
@verysadmysqldeveloper, Aha, now I understand. Note that (x,y) > (x1,y1) is shorthand for (x > x1) OR (x = x1 AND y > y1).
Thanks! That's a lot better. But if I understand it right, if two clients are from the same city and street, have the same house number, and only number_addition differs, won't it skip this client because of the > operator?
If everything is the same except for the number_addition, this will work correctly. Like I said, (a,b,c,d,e,f) > (a1,b1,c1,d1,e1,f1) is shorthand for a long list: a>a1 OR (a=a1 AND b>b1) OR (a=a1 AND b=b1 AND c>c1) OR (.....)
You can use the two values of LIMIT X,Y, where X is the offset and Y is the number of rows. See this for info.
That will, however, make your list order every time you query for just one row. There are different ways to do this.
What you could do is get a good portion of that list, maybe as much as is practical for you (say maybe 20, it depends). Keep the result in an array in your program, and iterate through those. If you need more, just query again with an offset. The same way a forum will show only a certain quantity of posts on each page of a search.
Another approach is to get all the client IDs you need, keep that in an array, and query for each one at a time.
That's a slow solution; as the offset increases, the query gets slower.
Yes, I also considered that before, but the problem is I'd also have to remember the last client viewed by the user and somehow apply that ID to the query. That means:
SELECT next FROM table WHERE (ORDERING STUFF) AND CURRENT = 9876
@very: No, if you keep the list of IDs in reference, then you just need to remember the last ID viewed, find its position in the list, take the one after, and lookup its information, no sorting.
Use different Lottie animations without making HTTP requests
I'm working on a small game related to dice. I have 6 Lottie animations, each for a die face.
When the user clicks 'roll', a random die face comes up, hence a random Lottie animation.
The problem is each animation file (JSON file) is 320 KB. Whenever the user clicks roll, the website makes an HTTP request that can take up to 300-400 ms.
I'm looking for a way to use these animations without HTTP requests; in other words, whenever I click roll, the animation renders smoothly. I'd also prefer a solution that is PWA friendly for future plans.
You could do the requests on page-load, and store the results in LocalStorage, for usage whenever you need it.
localStorage.setItem('dice-face-1', JSON.stringify(LottieContent));
const diceFace1 = JSON.parse(localStorage.getItem('dice-face-1'));
localStorage.setItem('dice-face-2', ...
LocalStorage documentation on MDN
Further notes: This would have the die faces "lottie-data" stored for the users next visit to your game as well :)
And you then need to be able to invalidate the cached data as well, so when you update your lottie-data the user also get the data that you intent.
const newVersion = '1.03';
const lottieDataVersion = localStorage.getItem('lottie-data-version');
if (lottieDataVersion < newVersion) {
    localStorage.removeItem('dice-face-1');
    ...(invalidate other data)
    localStorage.setItem('lottie-data-version', newVersion);
}
I thought of this, but I didn't figure out how to use the actual JSON to render the animation, since the Lottie API requires a path to be provided. So how can I use the JSON to render the animation?
I've had some luck using the .loadAnimation() method, only supplying a JSON string.
An awful workaround could be to supply an empty json-file in the initial path-definition, and then afterward use the .loadAnimation() to inject the correct data.
If that doesn't get your luck going, you might need to look into the depths of using the "LottieCompositionFactory" that this article goes through(but i do not have experience in this method):
Medium, Loading Lottie’s animation from local storage
I'll take a look in the "LottieCompositionFactory". Could you please provide me a sample of the method you've luck with? Also thanks!
I used animationData property instead of path and i passed a JSON object and it WORKED! can't thank you enough!
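For future readers, the fix can be sketched like this: pass the cached, parsed JSON to lottie-web's `loadAnimation` via `animationData` instead of `path`, so no HTTP request happens at roll time. The function name and storage key below are illustrative; `lottie` is the lottie-web module:

```javascript
// Render a cached die face from storage without an HTTP request.
function playCachedFace(lottie, container, storage, key) {
  const data = JSON.parse(storage.getItem(key)); // cached Lottie JSON
  return lottie.loadAnimation({
    container,               // DOM node to render into
    renderer: 'svg',
    loop: false,
    autoplay: true,
    animationData: data,     // parsed object instead of a `path` URL
  });
}
```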
That sounds awesome!! You're very welcome, and I'm glad I could point you in the somewhat right direction :)
|
STACK_EXCHANGE
|
Strange memory use in dplyr mutate using paste
So my actual dataset is 16 million rows and confidential, but I can illustrate what's happening fairly easily. I don't understand this behaviour at all, it flies in the face of everything I've read, or at least I think it does.
So here's a dataframe, with strings and dates (the real one has more columns and more rows)
library(tidyverse)
test = data.frame("a" = letters,
"b" = seq.Date(as.Date("2018-01-01"),
as.Date("2018-01-26"), "days")
)
I want to produce a third column, pasting together the first two. I do it like this:
finalTest = test %>%
mutate(c = paste(a, b))
If I do this, with 16 million rows, it goes from about 2GB RAM used to nearly 8GB and the process gets killed by the server (which has 8GB of RAM).
However, if I split the dataset in two, paste the columns, and then rbind, it's fine, even though by doing so I'm creating unnecessary objects (the whole dataset is only about 700MB, so it does make sense that the objects fit in RAM).
test1 = test %>%
filter(row_number() <= floor(n()/2)) %>%
mutate(c = paste(a, b))
test2 = test %>%
filter(row_number() > floor(n()/2)) %>%
mutate(c = paste(a, b))
finalTest2 = rbind(test1, test2)
This is fine. It seems like the objects fit in memory, but not when you're operating on them. But what's happening that is so memory intensive?
I do not understand at all. Is this expected behaviour? Is it unique to paste? Pasting with strings and dates? Something else?
paste allocates twice, so it is expensive for longer inputs
I've been through it too... Once you have 16M rows in your data frames, I suggest you not bother optimising memory usage with dplyr; just go for data.table. It is much faster and more memory efficient, although the syntax is more complex, but there are workarounds (below).
Just be sure you understand that data.table's memory management is generally by reference, unlike dplyr, which makes copies (that is one reason for the performance differences).
Since the data.table syntax is IMHO difficult and can be a bit hard at the beginning, you can use the dtplyr package to translate your dplyr code to data.table (use the show_query function), or check this webpage:
https://atrebas.github.io/post/2019-03-03-datatable-dplyr/
I find it very useful for people familiar with dplyr but not data.table.
If you really want to stick with dplyr, be sure that the data frame you are using was not grouped somewhere earlier in your code; this sometimes leads to surprising behaviour if you forget about it (use ungroup()).
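To make that concrete, here is a rough sketch (reusing the `test` data frame from the question) of building the same column with data.table by reference, and of letting dtplyr translate the dplyr code for you:

```r
library(data.table)
library(dtplyr)
library(dplyr)

# data.table adds the column by reference: no copy of the whole frame
dt <- as.data.table(test)
dt[, c := paste(a, b)]

# Or keep writing dplyr and let dtplyr generate the data.table code
lazy_dt(test) %>%
  mutate(c = paste(a, b)) %>%
  show_query()
```

The `:=` assignment is the key difference: it modifies `dt` in place instead of allocating a modified copy of the 16M-row frame.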
|
STACK_EXCHANGE
|
Enter solitaire or tic-tac-toe in Google, and it will show you the game directly in the search results. Each game allows you to choose a difficulty level, and Tic Tac Toe can also be played against another player next to you.
Did you also know that Google Tic Tac Toe has an "Impossible" mode?
Tic Tac Toe is a very easy game to play perfectly, and a perfect player can never lose. Not only that, it is so simple that people can learn to play it perfectly with very little practice. "Impossible" means it is actually impossible to win against the computer.
Then the question is: is there a way to win at Tic Tac Toe every time? When you go first, there is a simple strategy: put your "X" in any corner. As long as your opponent does not put their first "O" in the middle box, this move will bring you to the winner's circle almost every time. Otherwise it is harder to win, but it can still happen.
In this way, can we play Tic Tac Toe together?
Using a 3×3 board, two players play against each other. One player uses noughts (O), the other uses crosses (X), and the first player to align 3 identical symbols (horizontally, vertically or diagonally) wins.
How do you guarantee to win in Tic Tac Toe?
Play the first X in a corner. Most experienced Tic Tac Toe players place the first "X" in a corner when starting the game. This gives the opponent the greatest opportunity to make a mistake. If your opponent responds by placing an O anywhere other than the center, you can guarantee a win.
Other questions you may be asking:
Is it possible to win Tic Tac Toe going second?
Tic Tac Toe is a solved game: with best play from both sides it always ends in a draw. Going second, there is no way to force a victory unless the first player makes two errors.
Why is it called Tic Tac Toe?
Early variations of Tic Tac Toe were played in the Roman Empire around the first century BC. "Tic Tac Toe" may also derive from "tick-tack", the name of an old version of backgammon first described in 1558. The renaming of "noughts and crosses" to "tic-tac-toe" in the United States occurred in the 20th century.
How do you beat Tic Tac Toe on the brain?
Level 95: To win this tic-tac-toe game, drag the blue circle from the question to a square, then click the square aligned with it to add another blue circle and win.
What is the best first move in tic tac toe?
Tic Tac Toe has been solved. The best first move is to go in a corner. As always, there is a relevant xkcd. With perfect play, no first move loses the game.
How many moves are there in tic tac toe?
Disregarding symmetry, there are 255,168 possible Tic Tac Toe games. The first player wins 131,184 of them, the second player wins 77,904, and the remaining 46,080 are drawn. As already pointed out, with best play every game ends in a tie.
How do you beat Google?
10 ways to beat the Google algorithm
- Identify link studs and link blind spots.
- Go mobile.
- Diversify digital marketing strategies.
- Generate online PR and media promotion.
- Clean up your links.
- Consider using AdWords alternatives.
- Focus on authoritative content.
- Pay attention to YouTube.
How do you beat Tic Tac Toe on the impossible test?
The answer is simple but clever: to win this game, you must use your mouse to grab the circle enclosing the question mark and drag it to the blank box in the middle of the row. This way you get 3 O's in a row, win the Tic Tac Toe game, and move on to the next question.
Can you beat the impossible quiz?
It is impossible to beat the game unless you still have all seven "skips" available at question 110 (the last question). Write down the code in question 50; you will need to type those numbers in question 108.
Where can I play snake?
In Google Maps, press the menu button at the top left of the screen, and scroll down to “Play Snake” in the menu. This will load the game and you can choose between different parts of the world to play Snake.
Why do some crosses have a circle?
Some people say that this circle represents the Roman sun god Invictus, hence the name “Celtic Sun Cross”. Others say it represents the aura of Jesus Christ. Others just saw it as a remnant of the roots of pagans, and used it as a symbol of the sun.
|
OPCFW_CODE
|
All of us do at some point and devs, co-located or working remotely, are no exception. It can be difficult to maintain a good process, particularly if you’re working from home. There are so many distractions to take your attention and sometimes, you just don’t feel that your problem-solving skills are operating at 100%.
You don’t want to miss deadlines and let down clients or team members, so what strategies are devs using to be more productive?
#1. Manage Distractions
There are any number of distractions vying to get your attention during your work day. Productive devs find a way to either mitigate or eliminate the distraction. We’ve compiled a list of some of the most common distractions along with possible solutions:
| Distraction | Possible solution |
| --- | --- |
| Each new email that comes in during the day. | Many devs favor the idea of setting aside specific times each day for checking email. Stay out of your inbox outside of these times and deal with anything urgent promptly when you do check. |
| Social media, your favorite blogs and the Internet in general. | These are huge distractions for most people these days and may require the "big guns" to minimize their impact. |
| Noise or activity around you. | Invest in a good-quality pair of noise-cancelling headphones and turn up "focus" music. |
| Colleagues, friends or family interrupting you. | Be firm about setting "office hours", particularly if you work from home. Follow a rule that you can't be disturbed with non-work matters between certain times. |
#2. Use Good Tools
There are a number of tools for better productivity as a dev, many depending on which language you need. The caveat here is that you don’t want to “over tool” yourself, to the point where managing tools is another job that cuts into your productive time.
You may also find value in a tool that will centralize information from all your other tools. Rindle provides this by integrating many of the tools you already use into one view and a kanban-style management flow.
Here are some of the productivity tools popular with devs that you might want to use:
- GitHub - This is great for version control and collaboration on code. They also have a bunch of open-source tools aimed at productivity, which Infoworld outlined here.
- Jira - Project and issue tracker from Atlassian. This is popular among dev teams and has simple Scrum and Kanban boards.
- Toggl - Sometimes you have to track time, especially if you want to know whether something is truly profitable.
- Trello - Kanban-style project management tool that allows for easy collaboration on projects.
- Bitbucket - This is an alternative to GitHub that works very well, particularly if you use Jira for project management (both are from Atlassian).
The tools listed here are some that we have found to be the most popular among devs, but we’d be interested to hear of your favorites too.
#3. Set Deadlines
Most projects you undertake will have some kind of “hard” deadline, but you’ve built some padding into it, right? If you already know how long a project should take you, try setting your own deadlines within it. Creating a sense of urgency for yourself, even with an artificial deadline can help to hone your focus.
Many people swear by the idea of rewarding themselves for hitting an early deadline, so set some limits with yourself and some kind of meaningful reward. Sometimes taking a break or finishing early to meet friends for drinks may be enough to stoke the fires of productivity.
#4. Take Breaks
Many devs fall into the trap of staying glued to their screen for hours, foregoing any kind of break. It's a fact of the human condition that no one can be productive over long periods of time - our attention only holds for so long, and once it lapses the implications for your code can be scary.
The Atlantic posted about studies by social scientists into the work patterns of productive and less-productive people. One of those revealed that highly productive people tended to work for 52 minutes before taking a 17 minute break. Doing this helped improve their focus, particularly if they left their computers and did other things (try taking a walk - exercise is another proven factor in productivity!).
There’s probably not one ideal situation to suit all, but the point that comes from all of these studies is that breaks are necessary. Some of the least productive people sit in offices for 9 hours each day, while others get more done in 4.
Short bursts of activity followed by a break are leading to better-quality work and better ability to achieve more over the day. Sweden has even introduced 6 hour work days and is seeing the benefits. Employees feel happier and more energized, quality of work is improved and sick days are down. (Hmm, so if “move to Sweden” doesn’t work for you, at least taking breaks might help?).
#5. Chunk Tasks and Set Mini Goals
Try chunking your tasks into “like” groups as a simple way to get through them more quickly without having to “change channels.” It also helps if you keep your tasks organized and prioritized, perhaps using a good task manager tool.
At the same time, setting “mini goals” that are critical to getting to your overall goal at the end can work well for productivity. It’s difficult to keep grinding away without some sense of achievement, which is why breaking big tasks into smaller, seemingly more achievable goals tends to work better.
#6. Think Twice Before You Write
From the “measure twice, cut once” school of thinking among carpenters, devs can save themselves potentially inefficient rework by taking their time to come up with an optimal coding solution first.
Every dev has his or her own preference for workflow, but many we’ve spoken to like to whiteboard what they’re doing prior to beginning to actually write the code, particularly for complex tasks. It sounds like extra work, but really it’s an opportunity to “measure twice, cut once” so that they’re not trying to go back and correct work later.
The Zen of Python goes some way to explaining this thought process: "Now is better than never. Although never is often better than *right* now."
Another point is that your urge to optimize may be premature. Jonathan Blow is a prolific game developer whose output shatters average rates of work for devs. In this talk on YouTube, he recommends that you optimize only the code that needs it and doing so once you’ve finished. Otherwise, you run the risk of increasing complexity and undermining your end goal.
#7. Automate What You Can
How many repetitive tasks can you automate? Anything simple that you repeat on a daily basis should be top of your list for automating. Some examples might include: text manipulation and log mining, refactoring and testing.
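As a tiny illustration of the log-mining kind of automation, here is a sketch that tallies error lines per module from a plain-text log. The log format, file contents and function name are assumptions made for the example:

```python
import re
from collections import Counter

def count_errors(lines):
    """Tally ERROR lines by the module name that precedes the message."""
    pattern = re.compile(r"ERROR\s+(\w+):")
    return Counter(m.group(1) for line in lines if (m := pattern.search(line)))

# Hypothetical sample log lines
log = [
    "2024-01-01 10:00:00 INFO  auth: user logged in",
    "2024-01-01 10:00:01 ERROR db: connection refused",
    "2024-01-01 10:00:02 ERROR db: timeout",
    "2024-01-01 10:00:03 ERROR auth: bad token",
]
print(count_errors(log))  # Counter({'db': 2, 'auth': 1})
```

Ten lines like this, run on a schedule, replace a daily chore of scrolling through logs by hand.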
There are always going to be distractions or unwieldy tasks that impinge on your ability to be productive, but devs who have created a good system for themselves for maximizing productivity tend to be the ones who are killing it.
Use a good combination of tools without overdoing it, take breaks, automate, chunk tasks and set artificial deadlines for yourself. Don't be tempted to optimize code too early - premature optimization can add complexity unnecessarily.
What are your thoughts? How do you increase your own productivity?
|
OPCFW_CODE
|
Adobe Master Collection CC 2020 V16 06 2020 (x64)
As I hope you know, Adobe Master Collection CC 2020 does not exist officially; Adobe has never released it. But, nevertheless, here it is in front of you! It is assembled on the basis of a modern installer manufactured by Adobe, the transition to which was made possible through joint efforts, both mine and those of the famous PainteR. We both did our best to bring this package to life.

Adobe Master Collection CC 2020 is a collection of applications from the Creative Cloud 2020 line and a number of junior-version programs, combined in a single installer with the ability to select the installation path and the language of the installed programs.

In terms of functionality, everything is very similar to the well-proven Adobe Master Collection CS6 of the past. Only the installer interface has changed; the current package includes significantly more programs than its namesake Creative Suite 6, and the versions of the programs themselves are mostly newer.
| File | Size | Type |
| --- | --- | --- |
| Adobe Master Collection CC 2020 v16 06 2020 (x64) | 15.9 GB | Folder |
| [TGx]Downloaded from torrentgalaxy.to .txt | 585 B | Text File |
| Adobe Master Collection CC 2020 v16.06.2020 (x64)/ISO/Adobe.Master.Collection.16.06.2020.iso | 15.9 GB | DVD Disc Image |
| Adobe Master Collection CC 2020 v16.06.2020 (x64)/ISO/Adobe.Master.Collection.16.06.2020.md4 | 74 B | |
| Adobe Master Collection CC 2020 v16.06.2020 (x64)/ISO/Adobe.Master.Collection.16.06.2020.md5 | 74 B | |
| Adobe Master Collection CC 2020 v16.06.2020 (x64)/ISO/Adobe.Master.Collection.16.06.2020.sfv | 49 B | Data Verification File |
| Adobe Master Collection CC 2020 v16.06.2020 (x64)/ISO/Adobe.Master.Collection.16.06.2020.sha1 | 82 B | |
| Adobe Master Collection CC 2020 v16.06.2020 (x64)/Read Me.txt | 632 B | Text File |
| Downloaded from Demonoid - www.dnoid.to.txt | 56 B | Text File |
| Downloaded from HaxNode.CoM.txt | 87 B | Text File |
|
OPCFW_CODE
|
I have an application - more like a utility - that sits in a corner and updates two different databases periodically.
It is a little standalone app that has been built with a Spring Application Context. The context has two Hibernate Session Factories configured in it, in turn using Commons DBCP data sources configured in Spring.
Currently there is no transaction management, but I would like to add some. The update to one database depends on a successful update to the other.
The app does not sit in a Java EE container - it is bootstrapped by a static launcher class called from a shell script. The launcher class instantiates the Application Context and then invokes a method on one of its beans.
What is the 'best' way to put transactionality around the database updates?
I will leave the definition of 'best' to you, but I think it should be some function of 'easy to set up', 'easy to configure', 'inexpensive', and 'easy to package and redistribute'. Naturally FOSS would be good.
The best way to distribute transactions over more than one database is: Don't.
Some people will point you to XA but XA (or Two Phase Commit) is a lie (or marketese).
Imagine: after the first phase has told the XA manager that it can send the final commit, the network connection to one of the databases fails. Now what? Timeout? That would leave the other database corrupt. Rollback? Two problems: you can't roll back a commit, and how do you know what happened to the second database? Maybe the network connection failed after it successfully committed the data and only the "success" message was lost?
The best way is to copy the data in a single place. Use a scheme which allows you to abort the copy and continue it at any time (for example, ignore data which you already have or order the select by ID and request only records > MAX(ID) of your copy). Protect this with a transaction. This is not a problem since you're only reading data from the source, so when the transaction fails for any reason, you can ignore the source database. Therefore, this is a plain old single source transaction.
After you have copied the data, process it locally.
In this case you would need a transaction monitor (a server supporting the XA protocol), and you must make sure your databases support XA as well. Most (all?) Java EE servers come with a transaction monitor built in. If your code is not running in a Java EE server, there are a bunch of standalone alternatives - Atomikos, Bitronix, etc.
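If you do go the standalone-XA route, wiring one of those transaction managers into Spring is mostly configuration. Below is a rough sketch using Bitronix; the bean names are mine, and the exact classes and properties depend on the Bitronix and Spring versions you use:

```xml
<!-- Bitronix configuration, created via its static factory method -->
<bean id="btmConfig" factory-method="getConfiguration"
      class="bitronix.tm.TransactionManagerServices">
    <property name="serverId" value="my-standalone-app"/>
</bean>

<!-- The Bitronix JTA transaction manager itself -->
<bean id="btmManager" factory-method="getTransactionManager"
      class="bitronix.tm.TransactionManagerServices"
      depends-on="btmConfig" destroy-method="shutdown"/>

<!-- Spring's JTA adapter; your Hibernate session factories then use
     XA-capable data sources enlisted with this manager -->
<bean id="transactionManager"
      class="org.springframework.transaction.jta.JtaTransactionManager">
    <property name="transactionManager" ref="btmManager"/>
    <property name="userTransaction" ref="btmManager"/>
</bean>
```

With that in place, Spring's declarative transactions can span both session factories - keeping in mind the caveats about XA raised above.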
When you say "two different databases", do you mean different database servers, or two different schemas within the same DB server?
If the former, then if you want full transactionality, then you need the XA transaction API, which provides full two-phase commit. But more importantly, you also need a transaction coordinator/monitor which manages transaction propagation between the different database systems. This is part of JavaEE spec, and a pretty rarefied part of it at that. The TX coordinator itself is a complex piece of software. Your application software (via Spring, if you so wish) talks to the coordinator.
If, however, you just mean two databases within the same DB server, then vanilla JDBC transactions should work just fine, just perform your operations against both databases within a single transaction.
|
OPCFW_CODE
|
It’s not for nothing that people who program a lot spend a lot of time talking about design patterns: Design patterns are basic program structures that have proven their worth over time. And one of the most commonly-used design patterns in LabVIEW is the producer/consumer loop. You will often hear it recommended on the user forum, and NI’s training courses spend a lot of time teaching it and using it.
The basic idea behind the pattern is simple and elegant. You have one loop that does nothing but acquire the required data (the producer) and a second loop that processes the data (the consumer). To pass the data from the acquisition task to the processing task, the pattern uses some sort of asynchronous communications technique. Among the many advantages of this approach is that it deserializes the required operations and allows both tasks (acquisition and processing) to proceed in parallel.
Now, the implementation of this design pattern that ships with LabVIEW uses a queue to pass the data between the two loops. Did you ever stop to ask yourself why? I did recently, so I decided to look into it. As it turns out, the producer-consumer pattern is very common in all languages — and they always use queues. Now if you are working in an unsophisticated language like C or any of its progeny, that choice makes sense. Remember that in those languages, the only real mechanism you have for passing data is a shared memory buffer, and given that you are going to have to write all the basic functionality yourself, you might as well do something that is going to be easy to implement, like a queue.
However, LabVIEW doesn’t have the same limitations, so does it still make sense to implement this design pattern using queues? I would assert that the answer is a resounding “No” — which is really the point that the title of this post is making: While queues certainly work, they are not the technique that a LabVIEW developer who is looking at the complete problem would pick unless of course they were trying to blindly emulate another language.
The Whole Problem
The classic use case for this design pattern includes a consumer loop that does nothing but process data. Unfortunately, in real-world applications this assumption is hardly ever true. You see, when you consider all the other things that a program has to be able to do — like maintain a user interface, or start and stop itself in a systematic manner — much of that functionality ends up being in the consumer loop as well. The only other alternative is to put this additional logic in the producer loop where, for example, asynchronous inputs from the operator could potentially interfere with your time-sensitive acquisition process. So if the GUI is going to be handled by the consumer loop, we have a couple questions to answer.
Question: What is the most efficient way of managing a user interface?
Answer: Control events
The other option of course is to implement some sort of polling scheme, which is at the very least, extremely inefficient. While it is true that you could create a third separate process just to handle the GUI, you are still left with the problem of how to communicate control inputs to the consumer loop — and perhaps the producer too. So let’s just stick with events.
Question: Do queues and control events play well together?
Answer: Not so much, no…
The problem is that while queues and events are similar in that they are both ways of communicating that something happened by passing some data associated with that happening, they really operate in different worlds. Queues can’t tell when events occur and events can’t trigger on the changes in queue status. Although there are ways of making them work together, the “solutions” can have troublesome side effects. You could have an event structure check for events when the dequeue operation terminates with a timeout, but then you run the risk of the GUI locking up if data is put into the queue so fast it never has the chance to time out. Likewise, you could put the dequeue operation into a timeout event case, but now you’re back to polling for data — the very situation you are wanting to avoid.
Thankfully, there is a simple solution to the problem: All you have to do is lose the queue…
The alternative is to use something that integrates well with control events: user-defined events (UDEs). “But wait!”, you protest, “How can a UDE be used to transfer data like this? Isn’t the queue the thing that makes the producer-consumer pattern work?” Well, yes. But if you promise to keep it under your hat, I can tell you a little secret: Events have a queue too.
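LabVIEW block diagrams can't be shown in text, but the underlying idea (the consumer blocks on a single event queue that carries both data and control) translates to any language. Here is a small Python analogy of my own, not LabVIEW code, with illustrative names:

```python
import queue
import threading

events = queue.Queue()  # the hidden queue behind the "events"

def producer():
    # acquisition loop: fires a data event for each sample, like a UDE
    for i in range(3):
        events.put(("acquire data", f"sample-{i}"))
    events.put(("stop", None))  # the "stop button" value-change event

processed = []

def consumer():
    # event-driven consumer: blocks on one channel, no polling, no timeout case
    while True:
        name, payload = events.get()
        if name == "stop":
            break  # control flows through the same mechanism as the data
        processed.append(payload)

t = threading.Thread(target=producer)
t.start()
consumer()
t.join()
print(processed)  # ['sample-0', 'sample-1', 'sample-2']
```

The key point mirrors the UDE version: because the "stop" event travels through the same mechanism as the data, shutdown just works.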
The following block diagram shows a basic implementation of this technique that mirrors the queue-based pattern that ships with LabVIEW.
Note that in addition to the change to events, I have tweaked the pattern a bit so you can see its operation a bit better:
- The Boolean value that stands in for the logic that determines whether to fire the event, is now a control so you can test the pattern operation.
- The data transferred is no longer a constant so you can see it change when new data is transferred.
- Because the consumer loop is now event-driven, the technique for shutting down the program is actually usable (though not necessarily optimum) for deliverable code.
For those who are only familiar with control events, the user-defined event is akin to what back in the day was called a software interrupt. The first function to execute in the diagram creates the event and defines its datatype. The datatype can be anything, but the structure shown is a good one to follow due to a naming convention that LabVIEW implements. The label of the outer cluster is used as the name of the event — in this case acquire data. Anything inside the cluster becomes the data that is passed through the event. This event, as defined, will pass a single string value called input data.
The resulting event reference goes to two places: the producer loop so it can fire the event when it needs to, and a node that creates an event registration. The event registration is the value that allows an event structure to respond to a particular event. It is very important that the event registration wire only have two ends (i.e. no branches). Trying to send a single registration to multiple places can result in very strange behavior.
Once this initialization is done, operation is not terribly different from that of the queue-based version. Except that it will shut down properly, of course. When the stop button is pressed, the value change event for the button causes both loops to stop. After the consumer loop stops, the logic first unregisters and then destroys the acquire data UDE.
And that, for now at least, is all there is to it. But you might be wondering, “Is it worth taking the time to implement the design pattern in this way just so the stop button works?” Well, let’s say for now that the stop button working is a foretaste of some good stuff to come, and we will be back to this pattern, but first I need to talk about a few other things.
Until next time…
|
OPCFW_CODE
|
1 . State of the Art Natural Language Processing at Scale — Alex Thomas, Data Scientist @ Indeed; David Talby, CTO @ Pacific AI. #DD4SAIS
2 . CONTENTS ü NLU REAL-WORLD EXAMPLES ü DOCUMENT CLASSIFICATION - WALKTHROUGH ü STATE OF THE ART NLU IN HEALTHCARE ü TRAIN YOUR OWN DEEP LEARNING NLU MODELS
3 . INTRODUCING SPARK NLP • Industrial Grade NLP for the Spark ecosystem • Design Goals: 1. Performance & Scale 2. Frictionless Reuse 3. Enterprise Grade • Built on top of the Spark ML API’s • Apache 2.0 licensed, with active development & support
4 .NATIVE SPARK EXTENSION High Performance Natural Language Understanding at Scale Part of Speech Tagger Topic Modeling Named Entity Recognition Word2Vec Sentiment Analysis TF-IDF Spell Checker String distance calculation Tokenizer N-grams calculation Stemmer Stop word removal Lemmatizer Train/Test & Cross-Validate Entity Extraction Ensembles Spark ML API (Pipeline, Transformer, Estimator) Spark SQL API (DataFrame, Catalyst Optimizer) Spark Core API (RDD’s, Project Tungsten) Data Sources API
5 . FRICTIONLESS REUSE
pipeline = pyspark.ml.Pipeline(stages=[
    document_assembler, tokenizer, stemmer, normalizer, stopword_remover,  # Spark NLP annotators
    tf, idf,  # Spark ML featurizers
    lda])  # Spark ML LDA implementation
topic_model = pipeline.fit(df)  # single execution plan for the given data frame
6 .Case study: Demand Forecasting of Admissions from ED Features from Structured Data Reason for visit Current wait time Age Number of orders • How many patients will be admitted today? Gender Admit in past 30 days • Data Source: EPIC Clarity data Vital signs Type of insurance
7 .Case study: Demand Forecasting of Admission from ED Features from Natural Language Text • A majority of the rich relevant content lies in unstructured notes that are contributed by doctors and nurses from patient interactions. • Data Source: Emergency Department Triage notes and other ED notes Type of Pain Symptoms Intensity of Pain Onset of symptoms Body part of region Attempted home remedy ML with NLP ML with structured data Accuracy Baseline: Human manual prediction
8 . Risk prediction Case Study: Detecting Sepsis “Compared to previous work that only used structured data such as vital signs and demographic information, utilizing free text drastically improves the discriminatory ability (increase in AUC from 0.67 to 0.86) of identifying infection.”
9 .Cohort selection Case Study: Oncology “Using the combination of structured and unstructured data, 8324 patients were identified as having advanced NSCLC. Of these patients, only 2472 were also in the cohort generated using structured data only. Further, 1090 patients who should have been excluded based on additional data, would be included in the structured data only cohort.”
10 . CODE WALKTHROUGH: DOCUMENT CLASSIFICATION • A combined NLP & ML Pipeline • Word embeddings as features • Training your own custom NLP models github.com/melcutz/nlu_tutorial
11 . NLU tasks by level of difficulty: Different Vocabulary, Different Grammar, Different Context, Different Meaning, Different Language Models. Example tasks: Tokenizer, Normalizer, Lemmatizer, Spell Checker, Fact Extraction, Part of Speech Tagger, Coreference Resolution, Dependency Parser, Sentence Splitting, Negation Detection, Named Entity Recognition, Sentiment Analysis, Intent Classification, Summarization, Word Embeddings, Emotion Detection, Question Answering, Relevance Ranking, Best Next Action, Translation.
12 . Healthcare Extensions — High Performance Natural Language Understanding at Scale.
Core library: Part of Speech Tagger, Topic Modeling, Named Entity Recognition, Word2Vec, Sentiment Analysis, TF-IDF, Spell Checker, String distance calculation, Tokenizer, N-grams calculation, Stemmer, Stop word removal, Lemmatizer, Train/Test & Cross-Validate, Entity Extraction, Ensembles — built on the Spark ML API (Pipeline, Transformer, Estimator), Spark SQL API (DataFrame, Catalyst Optimizer), Spark Core API (RDD's, Project Tungsten) and Data Sources API.
Healthcare-specific NLP annotators for Spark in Scala, Java or Python (com.johnsnowlabs.nlp.clinical.*): Entity Recognition, Value Extraction, Word Embeddings, Assertion Status, Sentiment Analysis, Spell Checking, ...
1,800+ expert-curated, clean, linked, enriched and always up-to-date datasets (data.johnsnowlabs.com/health): Terminology, Providers, Demographics, Clinical Guidelines, Genes, Measures, ...
13 .Named Entity Recognition
14 . Deep Learning for NER F-Score Dataset Task 85.81% 2010 i2b2 Medical concept extraction 92.29% 2012 i2b2 Clinical event detection 94.37% 2014 i2b2 De-identification “Entity Recognition from Clinical Texts via Recurrent Neural Network”. Liu et al., BMC Medical Informatics & Decision Making, July 2017.
15 .Entity Resolution
16 . Deep Learning for Entity Resolution. F-scores: 90.30% (ShARe/CLEF, disease & problem normalization); 92.29% (NCBI, disease normalization in literature). “CNN-based ranking for biomedical entity normalization”. Li et al., BMC Bioinformatics, October 2017.
17 . Assertion Status Detection Prescribing sick days due to diagnosis of influenza. Positive Jane complains about flu-like symptoms. Speculative Jane’s RIDT came back clean. Negative Jane is at risk for flu if she’s not vaccinated. Conditional Jane’s older brother had the flu last month. Family history Jane had a severe case of flu last year. Patient history
18 . Deep Learning for Assertion Status Detection. On the 4th i2b2/VA dataset: 94.17% micro-averaged F1, 79.76% macro-averaged F1. “Improving Classification of Medical Assertions in Clinical Notes”. Kim et al., Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, 2011.
19 . USING SPARK NLP • Homepage: https://nlp.johnsnowlabs.com – Getting Started, Documentation, Examples, Videos, Blogs – Join the Slack Community • GitHub: https://github.com/johnsnowlabs/spark-nlp – Open Issues & Feature Requests – Contribute! • The library has Scala and Python 2 & 3 API’s • Get directly from maven-central or spark-packages • Tested on all Spark 2.x versions
20 . THANK YOU! in/alnith/ in/davidtalby @davidtalby #DD4SAIS
|
OPCFW_CODE
|
Adding extra options to right-click menu
Are there any more options that can be added to the right click menu in Ubuntu? e.g when right clicking in nautilus or desktop?
One way you can do this is by adding a PPA with 'Nautilus Actions Extra', described here on OMG! Ubuntu: http://www.omgubuntu.co.uk/2011/12/how-to-add-actions-emblem-support-back-to-nautilus-in-ubuntu-11-10/
Appreciate the help.
The first thing you need to do is install the Nautilus Actions application:
sudo apt-get install nautilus-actions
After it is installed, press the Super (Windows) key and search for Nautilus Actions. Click the icon you find, and you should now see the Nautilus Actions main screen.
Click the Add button and you should see the Add a New Action screen
Fill out the Menu Item & Action properties with whatever you would like in your right click menu. Above, you can see I am using VLC as an example.
Next, click on the Conditions tab.
Under the Conditions tab you need to make sure that Both is selected under “Appears if selections contains”.
Next, click on Advanced Conditions
First, uncheck the “File / Local Files” box. Second, in order for the menu items you add to appear every time you right click, you will need to add a blank entry under the Advanced Conditions. To do this, click on the + and erase “new-scheme” and “new-scheme description” so that both entries are blank.
Click OK.
You now have added your first right click menu item. In order for the item to appear on your right click menu, you need to restart the nautilus daemon.
killall -HUP nautilus
Now you should be able to see the VLC media player on the right click menu.
To continue adding more items to the menu, repeat steps 3 through 8 until you are satisfied.
I had the same problem and was not able to get any of the new or old packages (filemanager-actions, nautilus-actions, etc.) to run with Ubuntu 23.10.
So my solution in the end was to install nautilus-python and write my own extension, nautilus-poem. The extension works with a configuration file, so you only need to install it once and can then configure multiple actions and conditions:
- label: Right click for submenu
  tip: Click me now
  subitems:
    - label: 2nd level right, nest me more?
- label: Flatten selected folder
  conditions:
    - "directory_count > 0"
  click: /do-something.sh {POEM_FILES}
Since I only started development some days ago it has some limitations, but feel free to contribute, or contact me in case you need something implemented.
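A condition string like `"directory_count > 0"` in the configuration above could be evaluated against the current selection with a small comparator. This is only a sketch of the idea, not nautilus-poem's actual implementation; the helper names and context keys are made up:

```python
import operator
import re

# Supported comparison operators for condition strings.
_OPS = {">": operator.gt, "<": operator.lt, ">=": operator.ge,
        "<=": operator.le, "==": operator.eq}

def check_condition(condition, context):
    """Evaluate a string like 'directory_count > 0' against selection facts."""
    m = re.fullmatch(r"\s*(\w+)\s*(>=|<=|==|>|<)\s*(\d+)\s*", condition)
    if not m:
        raise ValueError("unsupported condition: %r" % condition)
    name, op, number = m.groups()
    return _OPS[op](context[name], int(number))

def item_visible(conditions, context):
    """A menu item is shown only if all of its conditions hold."""
    return all(check_condition(c, context) for c in conditions)
```

With a selection of one folder, `item_visible(["directory_count > 0"], {"directory_count": 1})` returns True and the menu entry would be shown.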
|
STACK_EXCHANGE
|
Status: Pending Further Research
This problem has been discussed with support in https://help.liferay.com/hc/en-us/requests/52488?page=1.
The issue is that images and links referring to URLs from Documents & Media are not updated in the web content if you change/edit the document in the Documents & Media library. Normally the timestamp should be updated so the browser loads the updated media even if it is cached.
In LPS-140954 the solution is to replace the timestamp with a TransformerListener. But that doesn't fix everything: if the layout is cached in Liferay, this listener isn't called and the timestamp stays the same/old.
The support asked me to open a ticket here:
'Regarding the cache problem, currently, this is a product limitation. Our engineers have thought about it, and currently the only way of solving it would be to parse the content of all the JournalArticles: analyze all the journal table when modifying each file entry, which could have a very big impact in the portal performance. The only way to avoid this limitation would be to use non-cacheable templates since the proper solution, which would be to store these relations between Web Contents and Documents in a normalized way by using a new table, would be out of the scope of a regular fix. We invite you to create a feature request to study if it is feasible to solve this in future product releases.'
Steps to reproduce the fix from LPS-140954 and the cache problem:
- start a vanilla liferay fp 13 with the hotfix from
- upload a normal text document to the document library
- check the download link for this document and take a note of the timestamp in the download-url
- go to "Content & Data" and "Web Content" and add a basic web content
- add a text
- mark the text and add a link to the previously added text document in the document library via "select item dialog"
- publish the web content
- open the preview of the web content and compare the timestamp in the url with the timestamp of the download-url - they are the same
- go to the document library
- edit the previously uploaded text document and upload a new text file
- check the download link for this document and take a note of the timestamp in the download-url - it has changed it is a different timestamp (parameter t)
- open the preview of the previously created web content and take a note of the document link - the url and the timestamp are still the old ones, nothing changed
- Stop Liferay
- Start Liferay
- open the preview of the previously created web content and take a note of the document link - the timestamp is now the new one, the same as in the download URL in the document library
- go to "Content & Data" and "Web Content" and edit the previously added basic web content
- you will see that there is still the old url and timestamp.
A changed document is only updated in the web content display view after a reboot of the system. In the web content display edit mode it is never updated.
If you change a document every web content which refers to it should be updated with the new link with the actual timestamp.
To fix the issue completely, the Liferay cache of web contents which refer to a document should be cleared when the document gets updated. To make that possible, the relations between documents and web contents should be stored. That could be done when saving (adding, updating, deleting) a content in the JournalArticleService.
Updating the articles that use a document could be done in the DLAppService.
We implemented a solution like the one described: we store the relations with a Service Builder service and implemented overrides of DLAppService and JournalArticleService. Additionally, we don't just clear the cache; we do what the TransformerListener fix would do and parse the content of the article for the updated FileEntry (document) and update the timestamp. With that solution we don't need the fix with the TransformerListener any more. We will test our solution in the next days and it will be in production after 18.11.2021.
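The parse-and-update step can be illustrated outside of Liferay. The following is a minimal Python sketch (Liferay itself would do this in Java against the journal article content); the function name and the exact URL shape are assumptions for illustration only:

```python
import re

def update_timestamp(article_content, file_entry_path, new_timestamp):
    """Rewrite the t= cache-busting parameter of every download URL in the
    article body that points at the given file entry (illustrative only)."""
    pattern = re.compile(r"(%s[^\"'\s]*[?&]t=)\d+" % re.escape(file_entry_path))
    return pattern.sub(lambda m: m.group(1) + str(new_timestamp), article_content)
```

Run against a stored article body whenever the corresponding file entry changes, this keeps the embedded download links in sync with the timestamp shown in the document library.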
After that we will see if there are any performance issues.
Maybe our solution could help you solve it your way.
|
OPCFW_CODE
|
My client has those errors on one of his computers, a MacBook Air with OSX 10.12.6 (Sierra) and Pro Tools 12.5.0.
On a Mac Mini with the same OSX, the same Pro Tools version and the same copy of the plugin, there is no problem.
As I've heard, those errors (especially 7054) have something to do with 32-bit versions of plugins. But of course my plugins are 64-bit.
So I have no idea what I could do.
I found a similar problem in that thread, but I always launch the projects from Projucer and don't use any libraries other than JUCE, and the JUCE libraries I use are static (i.e. compiled into the output binary, in this case the AAX).
Has anyone already solved such problems?
Many thanks in advance for any help.
One of the issues is mentioned here:
Could you check whether your client has installed the iTunes Device Support Update?
Hello, thanks for your link. I’ve just asked my client to pass their instructions. We will see if it helps.
Unfortunately your link doesn’t help.
My client has on both machines the last version of iTunes 18.104.22.168
So I am still looking for solution.
If somebody could help I would be very grateful, as I can't even replicate the issue on my machine (I don't know how).
I’ve also found other thread in Avid knowledge base:
But I am still waiting for my client to follow that thread.
I am afraid it will not help, though, since my client gets both errors, AAE 7054 and 7058, simultaneously.
Hello. Were you able to fix the aae-7058 problem?
I have the same problem after connecting my iphone.
pro tools 12.6, sierra
To be honest I gave up. My client ran two instances of my plugin on two separate computers: one on the MacBook Air where the problem exists, and the second on a Mac Mini (with the same configuration) where everything is OK, so my client told me it's not a big deal for him.
But I am still very interested in how to fix it. So if you find the solution, please let me know here; I would be very grateful.
Hello! I upgraded my OS to High Sierra. Pro Tools 2019.5.0. The problem no longer appears.
Hmm… Interesting. Thanks for your info.
I think this is the same bug being discussed here: Issue with Audio Unit on 10.11 - #22 by jaydee
It affects some plugins on the users’ systems, but not all of them. Unfortunately, my plugins using Juce are in the group that no longer works.
Juce team: any idea what is going on here? I’d love to have a solution, versus “yeah, Apple broke some plugins that happen to use Juce, sorry.”
It sounds to me like the problem is some breakage in the MobileDevice framework, which is linked via CoreAudioKit, which in turn is required for AudioUnit UIs and Bluetooth MIDI support. Therefore, I’d expect all AudioUnit plugins with a custom UI to be affected, but other formats (AAX, VST3 etc.) may only be affected if they link CoreAudioKit. If your users have functioning AudioUnit plugins, then either there’s a way to use CoreAudioKit without triggering the break, or there’s a way of avoiding the CoreAudioKit dependency altogether for some AudioUnits.
Can you find out whether your users have functioning third-party AudioUnits?
When I built with the 10.12 SDK I didn't have the issue.
Since adding the ARM version I had to move to the latest SDK (unfortunately you cannot have a per-arch SDK unless you lipo manually, even if Xcode allows you to set it up).
So it's related to the newer SDK; other third-party AUs can avoid this by using an older SDK.
|
OPCFW_CODE
|
using NPOI.HPSF;
using NPOI.POIFS.FileSystem;
using System;
using System.Collections;
using System.Collections.Generic;
using System.IO;
namespace NPOI
{
/// <summary>
/// This holds the common functionality for all POI
/// Document classes.
/// Currently, this relates to Document Information Properties
/// </summary>
/// <remarks>@author Nick Burch</remarks>
[Serializable]
public abstract class POIDocument
{
/// Holds metadata on our document
protected SummaryInformation sInf;
/// Holds further metadata on our document
protected DocumentSummaryInformation dsInf;
/// The directory that our document lives in
protected DirectoryNode directory;
/// For our own logging use
protected bool initialized;
/// <summary>
/// Fetch the Document Summary Information of the document
/// </summary>
/// <value>The document summary information.</value>
public DocumentSummaryInformation DocumentSummaryInformation
{
get
{
if (!initialized)
{
ReadProperties();
}
return dsInf;
}
set
{
dsInf = value;
}
}
/// <summary>
/// Fetch the Summary Information of the document
/// </summary>
/// <value>The summary information.</value>
public SummaryInformation SummaryInformation
{
get
{
if (!initialized)
{
ReadProperties();
}
return sInf;
}
set
{
sInf = value;
}
}
protected POIDocument(DirectoryNode dir)
{
directory = dir;
}
/// <summary>
/// Initializes a new instance of the <see cref="T:NPOI.POIDocument" /> class.
/// </summary>
/// <param name="dir">The dir.</param>
/// <param name="fs">The fs.</param>
[Obsolete]
public POIDocument(DirectoryNode dir, POIFSFileSystem fs)
{
directory = dir;
}
/// <summary>
/// Initializes a new instance of the <see cref="T:NPOI.POIDocument" /> class.
/// </summary>
/// <param name="fs">The fs.</param>
public POIDocument(POIFSFileSystem fs)
: this(fs.Root)
{
}
/// Will create whichever of SummaryInformation
/// and DocumentSummaryInformation (HPSF) properties
/// are not already part of your document.
/// This is normally useful when creating a new
/// document from scratch.
/// If the information properties are already there,
/// then nothing will happen.
public void CreateInformationProperties()
{
if (!initialized)
{
ReadProperties();
}
if (sInf == null)
{
sInf = PropertySetFactory.CreateSummaryInformation();
}
if (dsInf == null)
{
dsInf = PropertySetFactory.CreateDocumentSummaryInformation();
}
}
/// <summary>
/// Find, and create objects for, the standard
/// Document Information Properties (HPSF).
/// If a given property set is missing or corrupt,
/// it will remain null.
/// </summary>
protected void ReadProperties()
{
PropertySet propertySet = GetPropertySet("\u0005DocumentSummaryInformation");
if (propertySet != null && propertySet is DocumentSummaryInformation)
{
dsInf = (DocumentSummaryInformation)propertySet;
}
propertySet = GetPropertySet("\u0005SummaryInformation");
if (propertySet is SummaryInformation)
{
sInf = (SummaryInformation)propertySet;
}
initialized = true;
}
/// <summary>
/// For a given named property entry, either return it,
/// or null if it wasn't found
/// </summary>
/// <param name="setName">Name of the set.</param>
/// <returns></returns>
protected PropertySet GetPropertySet(string setName)
{
if (directory == null || !directory.HasEntry(setName))
{
return null;
}
DocumentInputStream stream;
try
{
stream = directory.CreateDocumentInputStream(setName);
}
catch (IOException)
{
return null;
}
try
{
return PropertySetFactory.Create(stream);
}
catch (IOException)
{
}
catch (HPSFException)
{
}
return null;
}
/// <summary>
/// Writes out the standard Document Information Properties (HPSF)
/// </summary>
/// <param name="outFS">the POIFSFileSystem to Write the properties into</param>
protected void WriteProperties(POIFSFileSystem outFS)
{
WriteProperties(outFS, null);
}
/// <summary>
/// Writes out the standard Document Information Properties (HPSF)
/// </summary>
/// <param name="outFS">the POIFSFileSystem to Write the properties into.</param>
/// <param name="writtenEntries">a list of POIFS entries to Add the property names too.</param>
protected void WriteProperties(POIFSFileSystem outFS, IList writtenEntries)
{
if (sInf != null)
{
WritePropertySet("\u0005SummaryInformation", sInf, outFS);
writtenEntries?.Add("\u0005SummaryInformation");
}
if (dsInf != null)
{
WritePropertySet("\u0005DocumentSummaryInformation", dsInf, outFS);
writtenEntries?.Add("\u0005DocumentSummaryInformation");
}
}
/// <summary>
/// Writes out a given PropertySet
/// </summary>
/// <param name="name">the (POIFS Level) name of the property to Write.</param>
/// <param name="Set">the PropertySet to Write out.</param>
/// <param name="outFS">the POIFSFileSystem to Write the property into.</param>
protected void WritePropertySet(string name, PropertySet Set, POIFSFileSystem outFS)
{
try
{
MutablePropertySet mutablePropertySet = new MutablePropertySet(Set);
using (MemoryStream memoryStream = new MemoryStream())
{
mutablePropertySet.Write(memoryStream);
byte[] buffer = memoryStream.ToArray();
using (MemoryStream stream = new MemoryStream(buffer))
{
outFS.CreateDocument(stream, name);
}
}
}
catch (WritingNotSupportedException)
{
}
}
/// <summary>
/// Writes the document out to the specified output stream
/// </summary>
/// <param name="out1">The out1.</param>
public abstract void Write(Stream out1);
/// <summary>
/// Copies nodes from one POIFS to the other minus the excepts
/// </summary>
/// <param name="source">the source POIFS to copy from.</param>
/// <param name="target">the target POIFS to copy to</param>
/// <param name="excepts">a list of Strings specifying what nodes NOT to copy</param>
[Obsolete]
protected void CopyNodes(POIFSFileSystem source, POIFSFileSystem target, List<string> excepts)
{
EntryUtils.CopyNodes(source, target, excepts);
}
/// <summary>
/// Copies nodes from one POIFS to the other minus the excepts
/// </summary>
/// <param name="sourceRoot">the source POIFS to copy from.</param>
/// <param name="targetRoot">the target POIFS to copy to</param>
/// <param name="excepts">a list of Strings specifying what nodes NOT to copy</param>
[Obsolete]
protected void CopyNodes(DirectoryNode sourceRoot, DirectoryNode targetRoot, List<string> excepts)
{
EntryUtils.CopyNodes(sourceRoot, targetRoot, excepts);
}
/// <summary>
/// Checks to see if the String is in the list, used when copying
/// nodes between one POIFS and another
/// </summary>
/// <param name="entry">The entry.</param>
/// <param name="list">The list.</param>
/// <returns>
/// <c>true</c> if [is in list] [the specified entry]; otherwise, <c>false</c>.
/// </returns>
private bool isInList(string entry, IList list)
{
for (int i = 0; i < list.Count; i++)
{
if (list[i].Equals(entry))
{
return true;
}
}
return false;
}
/// <summary>
/// Copies an Entry into a target POIFS directory, recursively
/// </summary>
/// <param name="entry">The entry.</param>
/// <param name="target">The target.</param>
[Obsolete]
private void CopyNodeRecursively(Entry entry, DirectoryEntry target)
{
EntryUtils.CopyNodeRecursively(entry, target);
}
}
}
|
STACK_EDU
|
Herman Melville's second book, Omoo, begins where his first book, Typee, leaves off. As the author described the book, "It embraces adventures in the South Seas (of a totally different character from 'Typee') and includes an eventful cruise in an English Colonial Whaleman (a Sydney Ship) and a comical residence on the island of Tahiti." The popular success of Melville's first book encouraged him to write this sequel, hoping it would be "a fitting successor" to Typee, which delineates Polynesian life "in its primitive state," while Omoo represents it "as affected by intercourse with the whites" and also "describes the 'man about town' sort of life, led, at the present day, by roving sailors in the Pacific." Walt Whitman found Omoo "the most readable sort of reading" and praised its "richly good-natured style." But many reviewers doubted the author's veracity and some objected to his "raciness" and "indecencies." Some also denounced his criticism of missionary endeavors, for Melville returned in Omoo to the attack upon the missionaries he had begun in Typee, making his second book more polemical than his first. Over the years, however, readers have been charmed by both books. The reading of Omoo influenced such later visitors to Tahiti as Pierre Loti, Henry Adams, John LaFarge, and Jack London; it was the book that sent Robert Louis Stevenson to the South Seas.
This book is included in Project Gutenberg.
This work has been downloaded 894 times via unglue.it ebook links.
- 235 - mobi (0.1) (PD-US) at Github.
- 158 - pdf (0.1) (PD-US) at Github.
- 142 - epub (0.1) (PD-US) at Github.
- 359 - epub (PD-US) at Project Gutenberg.
- Accessible book
- Autobiographical fiction
- Classic Literature
- Domestic fiction
- In library
- Male authors
- Male authors -- Fiction
- Male authors in fiction
- Men -- Fiction
- Men in fiction
- Protected DAISY
- Psychological fiction
|
OPCFW_CODE
|
I am very happy to share the slides of my public talks; here is a list of my recent ones.
- Teaclave: A Universal Secure Computing Platform (Updated in Nov 2021)
- Teaclave TrustZone SDK (Updated in Jun 2021)
- Teaclave 安全并易用的隐私计算平台, ApacheCon Asia 2021
- Teaclave: A Universal Secure Computing Platform, ApacheCon @ Home 2020
- Teaclave: A Universal Secure Computing Platform, 2nd SGX Community Workshop, July 2020
- WebAssembly: History, Internals, Security, and More, 2020
- Rust TrustZone SDK: Enabling Safe, Functional, and Ergonomic Development of Trustlets, Linaro Connect San Diego 2019, San Diego, Sep 2019
- The Hitchhiker's Guide to Rust, Shanghai Jiao Tong University, G.O.S.S.I.P Summer School, July 2019
- Bringing Memory-Safety to Keystone Enclave, Open-Source Enclaves Workshop (OSEW 2019), Berkeley, July 2019 (YouTube, link)
- Rust OP-TEE TrustZone SDK, RustCon Asia, Beijing, April 2019 (link)
- Linux From Scratch in Rust, RustCon Asia, Beijing, April 2019 (YouTube, Bilibili, link)
- Building Safe and Secure Systems in Rust, RustRush, Moscow, December 2018 (link)
- Building Safe and Secure Systems in Rust: Challenges, Lessons Learned, and Open Questions, Northeastern University, Boston, October 2018
- Rust, Memory-Safety, and Beyond, Shanghai Jiao Tong University, G.O.S.S.I.P Summer School, July 2018
- MesaLock Linux: Towards A Memory-Safe Linux Distribution, G.O.S.S.I.P, Shanghai Jiao Tong University, 2017/2018
Some of my previous talks are uploaded to Speaker Deck. If you are interested, please take a look.
- introduction to Linux (history, philosophy and more)
- booting to the Linux kernel
- user space applications
- kernel insides (compilation, source code, data structure, scheduler, mm, etc)
- kernel driver development
- Android ART Runtime: A Replacement of Dalvik Runtime
- an introduction to Android new ART runtime (AOT compiler, runtime, oat binary file, etc).
- Rooting Your Device
- an introduction to rooting Android devices by exploiting various vulnerabilities.
- Writing a Crawler
- how to write a web crawler for research purpose.
- Android Security
- an introduction to Android security, my lecture slides for an MSc class.
- Paper Summary on Mobile Security in 2014
- Paper Summary on Mobile Security in 2013
Here are some open source projects I created/maintained on GitHub.
- Pass for iOS: an iOS app compatible with Password Store command line tool. Pass for iOS is a password manager using GPG for encryption and Git for version control. The app is written in Swift.
- Android Apps Crawler: an extensible crawler for downloading Android applications in third-party markets.
- Android Markets List: a list of Android apps markets including official and third-party from China, Russia, etc.
- MesaLock Linux: a memory-safe Linux distribution.
- MesaPy, a memory-safe Python implementation based on PyPy with SGX support.
- RPython by Example: a collection of runnable examples that illustrate various RPython concepts and libraries
- MesaBox: A collection of core system utilities written in Rust for Unix-like systems (and now Windows).
- State of Rust: Automatically summarize various stable/unstable features of Rust including compiler features and library features.
- YogCrypt: A fast, general purpose crypto library in pure Rust. YogCrypt currently provides three cryptographic algorithms in Chinese National Standard, namely the SM2 cryptographic asymmetric algorithm, the SM3 cryptographic hash algorithm, and the SM4 block cipher algorithm.
- Rust OP-TEE TrustZone SDK: Rust OP-TEE TrustZone SDK provides abilities to build safe TrustZone applications in Rust. The SDK is based on the OP-TEE project which follows GlobalPlatform TEE specifications and provides ergonomic APIs. In addition, it enables capability to write TrustZone applications with Rust's standard library and many third-party libraries (i.e., crates).
- Apache Teaclave (incubating): Apache Teaclave (incubating) is an open source universal secure computing platform.
|
OPCFW_CODE
|
Election forensics are methods used to determine if election results are statistically normal or statistically abnormal, which can indicate electoral fraud. It uses statistical tools to determine if observed election results differ from normally occurring patterns. These tools can be relatively simple, such as looking at the frequency of integers and using 2nd Digit Benford's law, or can be more complex and involve machine learning techniques.
Election forensics can use various approaches. Some approaches include looking at data distribution, particularly voter turnout, to look for outliers. Other approaches can include comparing the observed distribution of the digits themselves to typical digit distributions (Benford's law). Other signs of fraud are overrepresentation of round numbers rather than those with decimals, or overabundance of numbers that are a multiple of 5 (e.g. 50%, 70%, 75%). More recent and statistically advanced approaches use machine learning, as machine learning can incorporate a large volume of data and use several different statistical models instead of a single one.
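The digit-distribution check described above can be sketched in a few lines of Python. The expected second-digit distribution follows directly from Benford's law; the function names here are illustrative, not from any particular election-forensics package:

```python
import math

def benford_second_digit_expected():
    """Expected share of each second digit (0-9) under Benford's law:
    P(d) = sum over first digits k = 1..9 of log10(1 + 1/(10k + d))."""
    return [
        sum(math.log10(1 + 1 / (10 * k + d)) for k in range(1, 10))
        for d in range(10)
    ]

def second_digit_shares(vote_counts):
    """Observed second-digit shares from per-precinct vote counts (>= 10)."""
    digits = [int(str(n)[1]) for n in vote_counts if n >= 10]
    return [digits.count(d) / len(digits) for d in range(10)]

def chi_squared(observed, expected, n):
    """Pearson chi-squared statistic; large values flag deviation from Benford."""
    return sum(n * (o - e) ** 2 / e for o, e in zip(observed, expected))
```

Under Benford's law the second digit 0 is slightly more common (about 12%) than 9 (about 8.5%); a large chi-squared value across many precincts would flag the results for closer inspection, not prove fraud.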
A 2010 review concluded that 61% of elections examined between 1978 and 2004, from more than 170 countries, showed some signs of election fraud, with major fraud in 27% of all examined elections. Since the early 2000s, election forensics has been used to examine the integrity of elections in various countries, including Afghanistan, Albania, Argentina, Bangladesh, Cambodia, Kenya, Libya, South Africa, Uganda, Venezuela and the USA.
Compared to other methods
Relative to other methods of monitoring election security, such as in-person monitoring of polling places and parallel vote tabulation, election forensics has advantages and disadvantages. Election forensics is considered advantageous in that its data is objective, rather than subject to interpretation. It also allows votes from all contests and localities to be systematically analyzed, with statistical conclusions about the likelihood of fraud. Its disadvantages include that it cannot actually detect fraud, only data anomalies that may or may not be indicative of it. This can be addressed by combining election forensics with in-person monitoring. Another disadvantage is its complexity, which requires advanced knowledge of statistics and significant computing power. Additionally, the best results require a high level of detail, ideally comprehensive data from the polling place regarding voter turnout, vote counts for all issues and candidates, and valid ballots. Broad, national-level summaries have limited utility.
|
OPCFW_CODE
|
I am new to KDE and I have come to like the plasmoids on the Desktop. They can be used to display information (eg. Weather, Time, News) or can be used to carry out some tasks (upload images/text to web, check mail etc) easily.
I don’t have any experience of Plasmoid development. I have decided to learn to write Plasmoids. I have moderate knowledge of Python.. I am learning Plasmoid development in Python and will be posting my experience. Feel free to learn with me or correct me if I am wrong.
I will be using Fedora 14 with KDE 4.5 for all my development and testing. However, other distros with any sub-version of KDE 4 should work fine. I will be citing all the references and where you can learn more.
I will be using KWrite and KDevelop to develop plasmoids. However, you can pick any editor of your choice.
For Hello World plasmoid, create a directory called hello anywhere you want. Also, create the following directory structure inside hello:
hello/
└── contents
    └── code
The directory structure must be exactly as shown above, or the plasmoid will not work.
Now, we need metadata for the plasmoid. This metadata file contains the name, version, author name, type of the plasmoid and other information about it. It should be inside the plasmoid directory (i.e. inside hello in this case). Here is my sample metadata.desktop:
[Desktop Entry]
Encoding=UTF-8
Name=Hello World
Comment=A Basic Hello World Example
Type=Service
ServiceTypes=Plasma/Applet
X-Plasma-API=python
X-Plasma-MainScript=code/main.py
X-KDE-PluginInfo-Author=_khAttAm_
X-KDE-PluginInfo-Email=firstname.lastname@example.org
X-KDE-PluginInfo-Name=hello
X-KDE-PluginInfo-Version=0.1
X-KDE-PluginInfo-Website=http://www.khattam.info
X-KDE-PluginInfo-License=GPL
Most of the fields are self-explanatory and to learn about the fields and other choices, see here.
As we have specified in the X-Plasma-MainScript field of metadata.desktop, we will now create the file hello/contents/code/main.py. Note that you can use a filename other than main.py, but then you will have to change metadata.desktop accordingly.
The following is the very basic main.py which does absolutely nothing:
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from PyKDE4.plasma import Plasma
from PyKDE4 import plasmascript

class MainApplet(plasmascript.Applet):
    def __init__(self, parent, args=None):
        plasmascript.Applet.__init__(self, parent)

def CreateApplet(parent):
    return MainApplet(parent)
If the above plasmoid is run, you will see no output at all, just an empty plasmoid.
To run it, cd to the directory where you created the hello directory and run the following:
If everything goes right, you will be able to see something like the following:
If any error occurs, you can see the output in the terminal to see what's wrong and hopefully fix it.
After running the application, let's analyse what we have done here.
Lines 1-4 are import statements. If you are familiar with Python, you must be familiar with them. They are basically statements which tell the interpreter where to look for the functions and classes used in the program. Those are the minimum ones you will need for your Python plasmoids to work.
Lines 6-8 define a class derived from plasmascript.Applet. There must be at least one such class for your plasmoid to work. The name of the class may be anything you like. The class’ __init__ function initializes just the same function of base class which is to say it does nothing at all. While developing plasmoids later, you may want to do some stuff even prior to initialization of plasmoid. That stuff goes in this function.
There must also be a function CreateApplet(), which must take one argument, "parent", pass it to the constructor of the class derived from plasmascript.Applet, and return the resulting object. This function is on lines 10-11.
We have returned the Applet object as it is, without modifications. We will now add some stuff to the Applet object before returning it. In this tutorial, we will just add a label saying "Hello World". To add a label, however, we need to have a layout. So, we set a layout and then add a label on top of it.
To do that, we will be using init() method which is called after the applet is initialized and added. Here is the new code for main.py:
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from PyKDE4.plasma import Plasma
from PyKDE4 import plasmascript

class MainApplet(plasmascript.Applet):
    def __init__(self, parent, args=None):
        plasmascript.Applet.__init__(self, parent)

    def init(self):
        layout = QGraphicsLinearLayout(Qt.Vertical, self.applet)
        label = Plasma.Label(self.applet)
        label.setText("Hello World")
        layout.addItem(label)
        self.applet.setLayout(layout)

def CreateApplet(parent):
    return MainApplet(parent)
The added code is the init() method on lines 10-15. First, a variable layout is set to a QGraphicsLinearLayout. This takes two arguments: one is the parent and the other is either Qt.Vertical or Qt.Horizontal. This orientation parameter decides whether the added items (if there are more than one) are arranged vertically or horizontally.
In line 12, a new Plasma.Label object is created. It takes in parent as the constructor argument. Other plasma widgets are available here. There is no Python specific documents as of now but the C++ documents work fine and it is easy to figure out how to use it in Python.
In line 13, we use the setText() method of Plasma.Label to set the text of the label, and in the next line, we add the item to the layout we created earlier. Finally, in line 15 we set the layout we created as the layout for the current applet.
When done testing, you may want to install the plasmoid to see how it looks on the desktop. To do that, you must zip it and install it using plasmapkg. Change to the hello/ directory and run the following to zip it to hello.zip:
zip -r ../hello.zip ./
Now, run the following to install it:
plasmapkg -i hello.zip
Now, you can add the plasmoid to your desktop. To remove it, just run:
plasmapkg -r hello
Here is what we have at the end:
This is a simple tutorial from a newbie to create Plasmoids in Python. However, we did not do any useful work with it. In the next tutorial, I will try to cover my experience creating something useful.
How do you get a float in SQL?
If we want float or decimal output, either the denominator or the numerator should be of float or decimal type. If both the denominator and the numerator are integers, we can use the CONVERT or CAST functions to convert one of them to float/decimal so that we get the result as float/decimal.
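A quick way to see this integer-division behavior, using SQLite through Python's bundled sqlite3 module (assumption: SQLite is a stand-in here; SQL Server's / operator behaves the same way for two integer operands):

```python
import sqlite3

# In-memory database; no setup needed
cur = sqlite3.connect(":memory:").cursor()

# Both operands are integers: integer division
print(cur.execute("SELECT 7 / 2").fetchone()[0])                # → 3

# Casting one operand to a floating type yields a float result
print(cur.execute("SELECT CAST(7 AS REAL) / 2").fetchone()[0])  # → 3.5
```

Casting either operand is enough; the engine promotes the whole expression to floating-point arithmetic.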
Is there a float in SQL?
The Float & Real data types in SQL Server use the floating-point number format. Real is a single-precision floating-point number, while Float is a double-precision floating-point number. Floating-point numbers can store much larger or much smaller numbers than decimal numbers.
How does float work in SQL?
Float stores an approximate value and decimal stores an exact value. In summary, exact values like money should use decimal, and approximate values like scientific measurements should use float. When dividing a number by a non-integer and multiplying by that same number, decimals lose precision while floats do not.
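That divide-then-multiply claim is easy to check with Python's standard decimal module (a stand-in for SQL's decimal type):

```python
from decimal import Decimal

# Exact decimal arithmetic: 1/3 is truncated to 28 significant digits,
# so multiplying back by 3 does not recover exactly 1
d = (Decimal(1) / Decimal(3)) * Decimal(3)

# Binary float: the rounding of 1/3 happens to cancel on the way back
f = (1.0 / 3.0) * 3.0

print(d == Decimal(1))  # → False
print(f == 1.0)         # → True
```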
What is float format in SQL?
Float is Approximate-number data type, which means that not all values in the data type range can be represented exactly. Decimal/Numeric is Fixed-Precision data type, which means that all the values in the data type range can be represented exactly with precision and scale.
How do you declare a float variable in SQL?
This can be achieved with the decimal datatype. See below for an example:

declare @num as float;
set @num = 5.20;
select convert(decimal(10, 2), @num);

The output here will be 5.20.
What is float data type example?
Floating point numbers are numbers with a decimal. Like integers, -321, 497, 19345, and -976812 are all valid, but now 4.5, 0.0004, -324.984, and other non-whole numbers are valid too.
What is float precision in SQL?
float is used to store approximate values, not exact values. Its mantissa size can be specified from 1 to 53 bits. real is similar but is an IEEE-standard single-precision floating-point value, equivalent to float(24). Neither should be used for storing monetary values.
Is float a valid SQL type?
Because Characters, Numeric, and Float are all valid SQL types.
How do I view tables in SQL?
Then issue one of the following SQL statement:
- Show all tables owned by the current user: SELECT table_name FROM user_tables;
- Show all tables in the current database: SELECT table_name FROM dba_tables;
- Show all tables that are accessible by the current user: SELECT table_name FROM all_tables;
Does float have decimal?
The float data type has only 6-7 decimal digits of precision. That means the total number of significant digits, not the number to the right of the decimal point.
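You can see the roughly 7-significant-digit limit by round-tripping a number through IEEE-754 single precision with Python's struct module:

```python
import struct

def to_float32(x):
    # Pack as a 4-byte IEEE-754 single, then unpack back to a Python float
    return struct.unpack("f", struct.pack("f", x))[0]

print(to_float32(1.5))          # → 1.5 (exactly representable)
print(to_float32(123456789.0))  # → 123456792.0 (9 digits don't fit in ~7)
```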
What is float data type used for?
A column with the FLOAT data type typically stores scientific numbers that can be calculated only approximately. Because floating-point numbers retain only their most significant digits, the number that you enter in this type of column and the number the database server displays can differ slightly.
How do I create a .NET website?
Creating a Web application project and a Page
- Open Microsoft Visual Studio.
- On the File menu, select New Project.
- Select the Templates -> Visual C# -> Web templates group on the left.
- Choose the ASP.NET Web Application template in the center column.
- Name your project BasicWebApp and click the OK button.
What is a dot net website?
.net is a top-level domain, also known as a TLD. Derived from the word network, it was originally developed for companies involved in networking technology. Today, .net is one of the most popular domain names used by companies all over the world to launch their business online.
Is .NET easy to learn for beginners?
Dot net is user friendly and very easy to learn. I recommend starting with Java because it is a strong and professional language and relatively simple compared to C++.
How do I create a website using .NET core?
Select .NET Core and its version, then select the Empty project template and click the Create button. Now your website/project is open in Visual Studio. Open Solution Explorer and right-click your project name, then click Add, then New Folder (ProjectName -> Add -> New Folder).
Is .com or .NET better?
Both are suitable for businesses and have specific purposes that differentiate them from one another. The main difference between the .com and .net top-level domains is that .com is intended for commercial use, while .net is best for network services.
Does .NET require coding?
For .NET development, you need to be proficient in a programming language and coding. C# is one of the languages preferred by .NET developers. There are many other programming languages out there, like Python, Java, C++, and many more.
Can you build a website with C#?
Can I use .net for a business?
.net isn't a good option for your business in most cases. The "com" in the .com domain name indicates a "commercial" site. This can cover business websites, websites that want to make money online, personal websites, blogs, portfolios, and more.
Is .net good for SEO?
In the beginning of the SEO days, it was going around that .com is the best for SEO and that .net is not as good.
Is dot net outdated?
.NET is dead as a future framework for web applications and software. Microsoft won't be building for it and they won't support it. But software that already runs on .NET and is no longer being updated will still run on it.
Is dot net hard to learn?
ASP.NET is a high-speed and low-cost framework that is widely used to create websites and applications. It is very easy to learn and requires minimal setup and resources. Moreover, it is widely used and very popular. There are huge opportunities available for .NET developers.
Is .NET better than Django?
Django is a Python-based free and open-source web application framework. ASP.NET is an open-source, server-side web-application framework designed for web development to produce dynamic web pages. Market share: Microsoft ASP.NET holds about 86%.
We've been shooting stuff in videogames for a long, long time. In fact, the very first videogame was all about two players trying to shoot each other. We've mostly got the shooting figured out, but conversations are still a bit wobbly as a gameplay mechanic. They're getting better, but there's still room for improvement.
For example ...
It shouldn't be possible to accidentally say something.
Here is how it always goes: At some point, I'll miss out on what a character is saying because of mumbling / thick accents / in-game sound effects / ambient noise in the room where I'm playing. So I turn on subtitles. Like most people, I can read faster than people can talk, and so with subtitles on I end up reading the text and then waiting for the character to finish speaking. This gets annoying after a while, so I start hitting the "skip" button when I'm done reading. However, if I happen to finish reading just as the character is done with their line, then the response selector appears just in time for me to accidentally select a response.
A simple solution here is to have different buttons for "skip dialog" and "select response". Another option is to have the conversation wheel default to a neutral position with no default response, so that I must make a deliberate selection before the button will do anything.
It should always be clear what I'm about to say.
I don't know if it was BioWare that pioneered the "summary" style dialog wheel, but I first encountered it in the original Mass Effect. I can completely understand the thinking behind this feature. It's annoying to read all of the possible responses in full, select one, and then have your own character read it back to you. It really breaks the flow of the exchange.
On the other hand, sometimes the summary text can be confusing, ambiguous, or completely misleading. It leads to conversations where you end up saying the opposite of what you want:
JERK CHARACTER: Ha ha! I have kicked half the puppies in the city!
Hm. I don't like this puppy-kicking business, so I'll choose 'outraged'.
MY CHARACTER: What? Only half the puppies have been kicked? Now I'm going to have to go out and kick the rest of them myself!
In BioWare games this is bad because the paragon / renegade system punishes you for "moral" inconsistency. In Alpha Protocol this was bad because the game always auto-saved just after every conversation, thus making the goof canonical.
My suggestion is to sit down with a group of people - you don't even have to be playing the game - and read your playtesters each option and ask what they think would happen if they chose it. Keep working at it until the choices are clear. There is nothing more annoying than having to reload my game because muddled dialogs just screwed up my character.
I should be able to leave the conversation whenever I want.
I didn't even know this was feasible until I played Skyrim and saw how perfectly awesome it was, and now I never want to play a game without this feature.
It doesn't seem to be that hard, either. If the NPC I'm talking to is boring me, or taking too long to get to the point, or if I've accidentally entered conversation with the wrong person, or if they're pissing me off and I've decided I'd rather murder them, then I just hit exit to walk away from the conversation.
encoding/protojson: opt out of whitespace randomization
Commit https://github.com/protocolbuffers/protobuf-go/commit/582ab3de426ef0758666e018b422dd20390f7f26 added randomization to the multi-line json output.
I understand there are good reasons for doing this. I hope you will understand I also have good reasons for not doing this. Among others: generating a protobuf with fake data and re-rendering it on every test run is one of the best ways to detect drift in the configuration, accidental or purposeful.
I can handle the output changing across multiple versions of the protobuf library. It's much easier for me to sync the library version than it is to sync Go versions. If there are breaking changes that generate a diff, I can manage them all at once, instead of
Is there any way to opt out? A UnsafeDisableRandomization flag, or something. Right now I am probably going to make a second pass with a marshaler that does not generate random whitespace.
The reason why we were unable to make improvements to both text and JSON was because of the many hard-coded tests that expected the output to be byte-for-byte identical. We're not going to repeat the mistake again with the new prototext and protojson implementations.
I understand that stable output has benefits, but unless there is a canonical definition (which there isn't) for how to serialize protobufs in text and JSON, we're not going to pretend like one exists.
If you want a frozen output, then use github.com/golang/protobuf/jsonpb. The output will be stable, but will also continue to have buggy behavior in some cases and it won't receive any improvements. If you want a decent amount of stability, then pass the output through a formatter. For example, just parsing the JSON with encoding/json and formatting it again should provide some degree of stability.
This is what I'm doing, for the moment
marshalerOnce.Do(func() {
	marshaler = &protojson.MarshalOptions{
		EmitUnpopulated: true,
		Indent:          "",
		Multiline:       true,
		UseProtoNames:   true,
	}
})

data, err := marshaler.Marshal(cfg)
if err != nil {
	return err
}
data = append(data, '\n')

var rm json.RawMessage = data
data2, err := json.MarshalIndent(rm, "", " ")
if err != nil {
	return err
}
_, err = w.Write(data2)
return err
That seems reasonable?
Closing as we're not going to be changing the randomization. Analysis of all tests at Google (we have a lot) for text and JSON usages indicates that 89.5% do not care about the exact output. Of the remaining, 10.4% were brittle tests that really should have been written to perform a semantic comparison on the structured messages (rather than a byte-for-byte comparison on the serialized output). The remaining 0.1% were cases like generated configurations and/or code where a degree of stability is actually needed. For that 0.1%, the workaround is to pass the output to a JSON formatter or text formatter to produce more stable output.
If randomization pushes the 10.4% to properly perform semantic comparisons, then I'll be really happy. Since a workaround exists for the 0.1% with a legitimate need for some degree of stability, I don't think a opt-out is justified.
I don't think an opt-out is justified.
It's slightly amusing that your own tests disable randomization 😅
https://github.com/protocolbuffers/protobuf-go/blob/c2a26e757ed57746693fca53de0b6136a8e46b74/encoding/protojson/encode_test.go#L32-L33
Those frustrated by the randomization should be focusing on the development of a stable output format: #1121
I don't think an opt-out is justified.
It's slightly amusing that your own tests disable randomization, and yet you don't want to expose that to users 😅
The protobuf library is subject to the contract of the standardization. However, certain things (e.g. the ordering of fields in a message) are undefined in the contract, but this library is still an authority on what it itself should be producing. Knowing how we have chosen those implementation details, we can write golden tests that depend on our own implementation details. We remain, after all, the authority on what we expect our code to generate, even if the contract is silent on certain details. We're allowed to make stronger assumptions about the output than the standard provides. After all, if we change our output, we can also change our tests at the same time, and then the deltas of the golden examples even document how our output has changed.
Meanwhile, users of this package are not in that same situation. If this package changed the ordering of fields, it would break anyone else using golden tests, even though the two variations are semantically identical. Their tests would break, even though the output itself remains valid and within the standard. So their golden tests fail, even though their code has not broken. It's generally bad form to break people's tests for non-functional issues.
Trying to keep Hyrum's Law at bay is part of us looking forward to a potential future formal clarification of deterministic output. We should hope that even if that standard requires us to make drastic changes, no one using this package now should have tests break once we implement it.
"""Utility functions for calculating evaluation metrics by accelerating gpu. Work In Progress"""
import torch
class DifferentLengthList(object):
def __init__(self, lists):
"""
Args:
lists (list(torch.LongTensor)): list of tensor which has different length.
For now, we limit the element in the list is one-dimensional tensor.
"""
# Initialize `batch_size`
self.batch_size = len(lists)
# Build up `indice`
self.indice = torch.empty((self.batch_size+1,), dtype=torch.long)
self.indice[0], tmp = 0, 0
for idx, list in enumerate(lists, start=1):
tmp += list.shape[0]
self.indice[idx] = tmp
# Build up `data`
self.data = torch.empty((self.indice[self.batch_size],))
for idx, list in enumerate(lists, start=0):
self.data[self.indice[idx]:self.indice[idx+1]] = list
def get(self, idx):
return self.data[self.indice[idx]:self.indice[idx+1]]
def accuracy(pred, true, total_size):
    """Calculate accuracy.

    Args:
        pred (torch.LongTensor): [batch_size, prediction_length]
        true (BatchSentence): [batch_size, true_length]
        total_size (torch.FloatTensor): [batch_size], or a scalar tensor when
            `total_size` is the same for all examples.

    Returns:
        torch.FloatTensor [batch_size]
    """
    raise NotImplementedError


def precision(pred, true):
    """Calculate precision.

    Args:
        pred (torch.LongTensor): [batch_size, prediction_length]
        true (torch.LongTensor): [batch_size, true_length]

    Returns:
        torch.FloatTensor [batch_size]
    """
    raise NotImplementedError


def recall(pred, true):
    raise NotImplementedError


def mean_average_precision(pred, true):
    raise NotImplementedError
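The prefix-sum bookkeeping in DifferentLengthList can be sketched with plain Python lists (no torch required), which makes the offsets idea easy to see in isolation:

```python
# CSR-style flattening: a single flat buffer plus an offsets array,
# the same layout DifferentLengthList builds with tensors.
def flatten(lists):
    offsets = [0]
    data = []
    for seq in lists:
        data.extend(seq)
        offsets.append(len(data))
    return offsets, data

def get(offsets, data, idx):
    # Slice out the idx-th original sequence
    return data[offsets[idx]:offsets[idx + 1]]

offsets, data = flatten([[1, 2, 3], [4, 5], [6]])
print(offsets)                # → [0, 3, 5, 6]
print(get(offsets, data, 1))  # → [4, 5]
```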
How do atoms in a solid "communicate" force to each other?
What is the mechanism that carries and communicates force in a solid, on the atomic level?
Is there some other mechanism besides atomic deformation and proximity?
That is, if I had an infinitely incompressible substance and put it on top of my hand and hit it with a hammer, would my hand feel anything, if there is no difference whatever in the movement, shape or location of the substance?
(If you think the substance will move down in response to the hammer, remember that it's incompressible, so the top of the substance can't move down faster than the bottom, and the whole thing can't immediately move down, or we would have sent information to the bottom instantly, i.e. faster than the speed of light. How, then, would the information be communicated throughout the substance that it's time to gain downward momentum?)
If you argue that "infinitely incompressible" is a ridiculous scenario to assume, :) because it violates the compressibility of all matter, then is it that basic compressibility of all matter that provides the mechanism by which larger-scale-than-nuclear force is transmitted from atom to atom? In other words, is all force on the larger-than-atomic scale the result of inter-atomic and infra-atomic compression/deformation/shifts in density?
What do you mean by "atomic deformation"? Atoms aren't little solid balls, and so "proximity" is also a little ill-defined. The forces between atoms are of electromagnetic and Pauli-repulsive nature.
See also http://physics.stackexchange.com/questions/23797/what-does-it-mean-for-two-objects-to-touch
"Infinitely incompressible" implies that the speed of sound / pressure waves is infinite (larger than the speed of light).
But even if you accept that the medium has this property, the hammer you hit it with might not. The pressure at the point of impact corresponds to the fraction of the hammer that is instantaneously involved in the impact (the back of the hammer doesn't initially "know" that the front has hit your incompressible surface).
But getting back to the essence of your question - all material deforms under stress - longitudinally and laterally. The fact that a solid has an initial volume / shape comes about from an equilibrium between attraction and repulsion - sitting at equilibrium (the bottom of a potential well) there will always be a small range over which displacement gives rise to a linear force.
None of which exactly "answers" your question because it is based on two contradictory premises - I hope it is nonetheless helpful.
Thanks! With your words in mind, would you agree then that it's the basic compressibility of all matter that provides the mechanism by which larger-scale-than-nuclear force is transmitted from atom to atom?
Not quite. Compressibility is a consequence of the inter atomic forces (atomic spacing finds equilibrium at the bottom of the potential well formed by the interplay of attraction and repulsion); these forces give rise to the transmission of pressure. Compressibility is not itself the mechanism.
I see what you're saying. So in a graph of attraction and repulsion, compressibility would be represented as the ability for the distance (x axis) to vary, and with it, the repulsion (y axis) would be greater than the attraction (when compressed) or less than the attraction (when un-compressed)? So the mechanism isn't the compressibility itself, it's the nonlinear relationships between distance, attraction and repulsion, which provide a gradient of force that translates compression into resistance?
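That picture can be checked numerically. A sketch, assuming a Lennard-Jones pair potential as a stand-in for the attraction/repulsion interplay (the specific potential is an illustration, not something from the discussion above):

```python
# Lennard-Jones pair potential in reduced units (epsilon = sigma = 1)
eps, sigma = 1.0, 1.0
r_min = 2 ** (1 / 6) * sigma  # bottom of the potential well

def V(r):
    return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

def force(r, h=1e-6):
    # Central-difference numerical derivative: F = -dV/dr
    return -(V(r + h) - V(r - h)) / (2 * h)

# At equilibrium the net force vanishes, and for small displacements the
# restoring force is nearly linear (Hooke-like), F ~ -k * d:
print(abs(force(r_min)) < 1e-3)  # → True
for d in (0.001, 0.002, 0.004):
    print(round(force(r_min + d) / d, 1))  # ratios nearly constant (≈ -57)
```

The nearly constant force/displacement ratio is the "small range over which displacement gives rise to a linear force" described above.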
I will address this part, which is essential in understanding anything further about matter.
What is the mechanism that carries and communicates force in a solid, on the atomic level?
At the level of collection of atoms all the "forces" are electromagnetic. Atoms are neutral, but they have shapes from the orbitals of the electrons which "orbit" the nucleus, as an example:
The five d orbitals in ψ(x, y, z)² form, with a combination diagram showing how they fit together to fill space around an atomic nucleus.
These shapes allow the positive charges of the nucleus to spill out and generate attractive forces, the atoms fitting into molecular and lattice shapes, conceptually fitting like Lego blocks, the electric fields defining the lattices and the strength of the solid.
Any pressure on the lattice will be transmitted by electromagnetic forces, and cannot be faster than the velocity of light. In fact it is the collective behavior of the ensemble of atoms/molecules that will define the thermodynamic quantities like pressure etc for the solid, an emergent behavior from the collective behavior of the underlying atomic level.
Thanks also for this background. So would you agree then that it's the basic compressibility of all matter that provides the mechanism by which larger-scale-than-nuclear force is transmitted from atom to atom?
Compressibility is a collective phenomenon, emergent on a large number of atoms, like pressure, and one level up from the atomic level. The basic atomic level interacts with electromagnetic forces. Large collections of atoms display compressibility, as a shorthand of describing zillions of atomic electron orbitals.
keep in mind that one mole of matter comprises of order 10^23 molecules/atoms
Tacit Knowledge and the Wisdom of Crowds
Store Associates in specialty retailers are sometimes challenged in finding quick answers to customer questions about products. They need a quick way to find information. Search capabilities that can be accessed easily using kiosks and mobile devices can make the store associate far more effective. In addition to the documented information there could be a lot of tacit information within the organization that the employee (and hence the customer) could benefit from.
By definition, tacit knowledge is knowledge that people carry in their minds and is, therefore, difficult to access. Tacit knowledge has been found to be a crucial input to the innovation process. An organization's ability to innovate depends on its level of tacit knowledge of how to innovate. Specialized professionals acquire formal knowledge during their education, but to be effective they must acquire tacit knowledge, and this is done through some sort of apprenticeship or internship. Collaboration with people is one of the ways to acquire tacit knowledge, and collaboration tools can play a very important role. The tools should not only allow you to find relevant people based on their knowledge of specific areas or topics but also be able to connect you with them in a seamless manner. People engage in more effective tacit interactions when organizational structures do not get in the way and when they have the tools to make better decisions and communicate quickly and easily. To encourage more interaction, innovation, and collaboration, organizations need to become more transparent and break down the barriers to effective interactions.
A knowledge management portal based around the Microsoft MOSS 2007 suite of products is the easiest way to elevate the importance of people and collaboration over hierarchical structures. People search capabilities in MOSS 2007 allow users to find people not only by department or job title but also by expertise, social distance, and common interests. The blog/wiki capabilities in SharePoint allow organizations to capture and share some of this tacit knowledge.
Microsoft Office SharePoint Server 2007 helps your organization get more done by providing a platform for sharing information and working together in teams, communities and people-driven processes. Office SharePoint Server is an important part of the overall Microsoft collaboration vision and integrates with other collaborative products to offer a comprehensive infrastructure for working with others.
- Empower Teams Through Collaborative Workspaces: Microsoft delivers a best-of-breed collaborative infrastructure that gives end users the tools to easily create their own workspaces and share assets across teams, departments, and organizations while maintaining IT control.
- Connect Organizations Through Portals: Microsoft will help bring the full insight and data of the organization to the right people at the right time by making it easy to connect people with line-of-business data, experts, and business processes across the organization.
- Enable Communities with Social Computing Tools: Microsoft gives organizations the tools to deliver a broad set of social computing capabilities within their existing workspace and portal infrastructure, so end users can more easily harness the collective intelligence of the organization.
- Reduce Cost and Complexity for IT by Using an Integrated Infrastructure, Existing Investments, and an Extensible Architectural Platform: The Microsoft collaboration infrastructure leverages existing investments, is extensible, and interoperates with other systems, so organizations can maintain a lower cost of ownership and more easily meet business demands by building a single infrastructure.
Microblogging tools and social networking tools are also a great way to tap into the "Wisdom of Crowds". The concept of the "Wisdom of Crowds" is a fundamental building block of a lot of Web 2.0 services. Many sites like Digg and Wikipedia rely on the concept of crowds being wise. The "Wisdom of Crowds" can help make decisions about which movie to watch, books to read, places to holiday in and, for the retail environment, it helps shoppers make decisions on which product to buy. Tools like Twitter can also help people pose a question to their network (or to anyone who cares to respond) and tap into tacit knowledge.
Alibaba Cloud continuously increases the algorithm development and model training capabilities of Data Science Workshop (DSW) of Alibaba Cloud Machine Learning Platform for AI. Our current focus is on expanding its big data analytics capability. Based on the built-in pyodps, which enables you to read MaxCompute data, DSW now supports interactive big data analytics. DSW streamlines all the big data analytics tasks, including data ingestion, data exploring and analysis, algorithm development, model training, and model deployment by using PAI EAS built-in Processors. DSW now offers the best all-in-one and interactive experience for algorithm development and data analytics.
After DSW is upgraded, you can write SQL statements in DSW. The built-in SQL editor supports multiple functions, such as syntax highlighting, auto lookup, and auto completion. After the configuration, you can run these SQL statements to read data from MaxCompute tables in different projects and then display the analysis results in SQL charts.
dswmagic is a built-in notebook magic command in DSW. To use the big data features provided by DSW, you only need to install the relevant package, and then load the dswmagic command.
After you load the dswmagic command, add a cell to the .ipynb file, and select the SQL editor for the cell. The cell is then switched to SQL edit mode.
Before you start to write SQL statements, you must specify the projects of the source MaxCompute tables, the endpoint of your DSW instance, and your AccessKey information. You can reuse the configuration in subsequent big data analytics tasks. Click the Plus (+) icon next to New DataSource to open the Config DataSource dialog box, and then enter the required information. The specified data source is then added to the drop-down DataSource list. You can reference a data source by selecting it from the list.
AccessKey ID: your Alibaba Cloud AccessKey ID.
AccessKey Secret: your Alibaba Cloud AccessKey secret.
Endpoint of DSW P100 instances deployed in China (Beijing) and DSW M40 instances deployed in China (Shanghai): http://service-all.ext.odps.aliyun-inc.com/api
Endpoint of other DSW instances: http://service.cn.maxcompute.aliyun.com/api
After you prepare the data sources, you can start to write SQL statements in DSW. You can use the SQL editor to run one or more SQL statements. If you need to run multiple SQL statements at the same time, separate them with semicolons (;). The execution results are output line by line. DSW provides multiple methods for you to output the execution results of SQL statements, including the EXCEL format, histogram chart, pie chart, line chart, and scatter chart. You can click the Settings icon in the upper-right corner of these charts to adjust the X-axis and Y-axis, or click the WebExcel icon to edit the execution results in Excel mode. The execution results are saved to variable df0. df0.values is a standard Pandas DataFrame. DSW has optimized values returned by Pandas DataFrames, allowing you to view the execution results in WebExcel mode or charts.
The big data analytics features provided by DSW offer an easier way to ingest data, a better experience to write SQL statements, and a powerful tool to analyze data. DSW can convert execution results of SQL statements to standard Pandas DataFrame. Trained models can be deployed as services much faster than they used to be. This will continuously improve algorithm development efficiency.
User-flow built for you.
Each workspace contains a list of user-to-role connections; put simply, many users with different roles can exist in one workspace. There are several ways that a user can be added to a workspace or given different roles.
When a user creates a public account (for cloud projects, for example), Fireback creates their account, a passport (email or phone), and an empty workspace. This is not optional, and the reason is that every project at some point will need team (workspace) support, where multiple people with different access roles can modify or query the data. Considering this, it is always good to design your database and backend (as we did) to fully support a multi-user, multi-team structure.
Now, when you are an admin in the workspace (or have at least PERM_ROOT_WORKSPACES_INVITE), you can add users to the workspace. This is in fact a complex matter, considering all the different methods you need to cover (which we have covered in Fireback).
In the invitation system, you do not create a new user in the system; rather, you send an invitation, in the form of an email or SMS code, with a link or instructions on how to join. Invitees complete sign-up by themselves and, while keeping their own account, join the workspace with the role specified in the invitation. This is common in cloud products and apps like Slack, Google Cloud, etc.
In the adding method, you create an account with a password for them (administrators might be able to see the password), and the sign-up process is complete. This is suited to internal software, such as an ERP system or an IT department where each person is managed directly. In such cases, you can even prevent users from creating accounts or workspaces.
When we want to add a person to our system, a few situations can arise.
Assume you are an admin in a workspace and you want to invite a friend who has never used this product. They first need to create an account with an email or phone number (other options might be available).
For this, you send an invitation via email, and they see a sign-up form. They can choose their own email address if they want; this means you are just giving that person permission to join, and the email you used for the invitation is not enforced as their identity, only as a contact method.
After they complete sign-up, they are also added to your workspace.
This case is similar to the one above; the only difference is that you want the user to join the app with the exact email address or phone number you invited them on. This is suited to company-wide email addresses, where everyone must use the address the company designated for them.
In this scenario, you send an invitation to a user who already has an account in the system. If you are not forcing the passport, they can accept the invite with the account they already have. When they log in, they can be asked which workspace they want to work in, and they can switch between workspaces with complete privacy.
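The invitation flow above can be sketched roughly as follows. This is illustrative Python, not Fireback's actual API; all names (Workspace, invite, accept) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Workspace:
    name: str
    roles: dict = field(default_factory=dict)  # passport (email/phone) -> role

def invite(workspace, contact, role):
    """Invitation flow: no account is created; we only record an intent."""
    return {"workspace": workspace.name, "contact": contact, "role": role}

def accept(workspace, invitation, signup_passport):
    # Unless the passport is forced, the invitee may sign up with a different
    # address than the one the invitation was sent to; the role still applies.
    workspace.roles[signup_passport] = invitation["role"]

ws = Workspace("acme")
inv = invite(ws, "[email protected]", "editor")
accept(ws, inv, "[email protected]")
```

The direct "adding" method would differ only in that the admin creates the account and password up front, skipping the invitation record entirely.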
csgo is broken, that's all
set all configs to maximum... 250+ FPS guaranteed
broken engine at its finest
I'm not an expert but I think it's really a limitation of the Source engine: it doesn't utilize modern hardware as well as newer engines do. Add to that the high tickrate compared to most other shooters today.
With that said, 160 fps is not low at all. If it's stable you'll do perfectly fine even with a 144 Hz monitor. Don't bitch about 160 fps in a game that's mostly played by kids with 5+ year old hardware.
CSGO uses like 2 cores from your CPU, maybe 4, so having many cores doesn't help. Better to have 4 cores with higher per-core performance.
i5 2500K 1050X 16GB 250-350 o.O? cs is on SSD
well wanted 2 buy 7600k i5 but i can run shit w/ it so
did you set fps 0 or 350?
if 350 try 0
Do you have a dual or single channel ram?
It's just a terribly optimized game. Pretty much as well optimized as emulating a PS2 game. It only uses 2 of the 4 or 8 cores that you have.
csgo is optimized for cpu usage mostly, and for single core performance optimization on intel.
Sauce engine wont use more than 4 cores by default, and will use 1st physical core the most (core 0)
What you can do is, when you are in game, press CTRL+ALT+DEL, go to Details, find csgo.exe, right click, set affinity and uncheck core 0. You will see more usage of the other cores and a slight performance boost on your Ryzen.
Also, update BIOS and chipset drivers (really important), and tighten your RAM timings as much as you can (google it; increase RAM voltage to 1.35 V if needed).
also use high performance power mode in windows.
I have an R5 1400 @ 3.9 GHz 1.3 V + R9 290 on my side PC, and I'm getting 260~400 fps with about a 330 average (even more on dd2, mirage, overpass).
buy a i9 with a 1080 t1 11 gb and btw 160 fps is not that bad go fuck yourself
Did you try -threads or mat_queue_mode 2 launch option?
import json
import os

import numpy as np

# Django must be configured before importing project models.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "thomann.settings.dev")
import django
django.setup()

from lookup_hub import models

DICTIONARY_FP = "/home/aybry/dictionary.jsonl"
LANGUAGES = ["en", "de", "nl"]


def main():
    cat_counter = 0
    category = None

    with open(DICTIONARY_FP, "r") as dict_f:
        for line in dict_f:
            row = json.loads(line)
            print(row)

            # A line is a category candidate if it is coloured blue or has
            # text in only one of the three languages.
            is_category_candidate = (
                row["en"].get("colour") == "#0600CE"
                or np.sum([
                    row[lang]["text"] is None or row[lang]["text"].strip() == ""
                    for lang in LANGUAGES
                ]) == 2
            )
            if is_category_candidate:
                print(f"\nen: {row['en'].get('text')}")
                print(f"de: {row['de'].get('text')}")
                print(f"nl: {row['nl'].get('text')}\n")

                feedback = input("[c]ategory, [r]ow, [s]kip\n")
                if feedback in ["c", ""]:
                    print(json.dumps(row, indent=2))
                    category = models.Category(name=row["de"]["text"])
                    category.save()
                    cat_counter += 1
                    continue
                elif feedback == "s":
                    continue
                # "r" (or any other input) falls through and is saved as a row.

            # Collect the per-language fields, defaulting missing values to ""
            # and stripping the leading "#" from colour codes.
            fields = {}
            for lang in LANGUAGES:
                for attr in ["text", "comment", "colour"]:
                    value = row[lang].get(attr) or ""
                    if attr == "colour":
                        value = value.replace("#", "")
                    fields[f"{lang}_{attr}"] = value

            row_instance = models.Row(category=category, **fields)
            row_instance.save()


if __name__ == "__main__":
    main()
Admitting When You’re Wrong
Just recently, I have had to admit to being wrong. Very wrong. Way back at the start of October, I was feeling the familiar sensation of panic and dread that only happens right before I need to give a presentation that includes a demo! In the end, there were major problems with the AV setup in the room I was allocated, so even arriving as early as I could to set up didn't give the techs enough time to hook up my laptop successfully.
One thing that I was very keen to demonstrate – aside from some nice new features in Java EE 8 – was how quick and easy it is to develop and build MicroServices packaged as Uber JARs with our new Payara Micro Maven plugin. Previously, if you wanted a single deployment unit, you would need to use the Maven Exec plugin to launch Payara during the build and output an Uber JAR. I had found this to be so time consuming in the past that I never bothered during development.
Now, using the Maven plugin made such a difference, I thought this was as good as it gets! Build a project in a couple of seconds and start it in a couple more – how much faster could it possibly get?
Quite a lot faster, it turns out.
The Need For Speed
Even though the use of the new plugin had managed to get rid of a huge amount of time wasted during build, developing my JavaOne demo allowed me to experience some more “real world” pain of rapid minor changes with an application spread over a couple of microservices. Even with very lightweight demo apps, the pain of a couple of seconds to rebuild, followed by another couple of seconds to start is made much worse when several dependent services are involved!
For Payara Micro, there are a few ways this can result in time wasted:
Starting multiple instances on the same machine
My demo had just two microservices, though part of that demo was intended to demonstrate an early implementation of a clustered singleton bean. What this meant for my development process was that I needed to start Microservices A and B – wait for them to initialise, and then start a second instance of Microservice A to show that the singleton bean would only run on one instance.
Clustering instances together
Payara Micro uses Hazelcast for automatic clustering. This comes with an unavoidable startup cost because of the need for discovery of other cluster members. For a single server, there is a
--nocluster option to disable this but a major part of my demo is to show off the CDI Event Bus, whereby CDI events can be pushed over the network to be handled by other JVMs in the same cluster.
Enter The Dragon JRebel
To really get an idea of how useful JRebel can be when developing microservices, it really needs to be experienced. If you want to see my example for yourself, I’ve checked in a Java EE 7 version of my demo in to GitHub, with rebel.xml files already generated. Here are the steps to get this set up in your own environment:
1. Set up JRebel
Follow the appropriate JRebel Quickstart steps to get the JRebel agent installed and set up – either standalone or in your IDE.
2. Find the JRebel agentpath
Since the example uses Payara Micro Uber JARs, we need to know the path to librebel64.so (for a 64-bit JVM). This can be found from the “JRebel > Startup” configuration. Selecting “GlassFish 2.x, 3.x and 4.x” will output the path as a convenient JVM option. For me, it looks like this:
3. Get the example
Use a terminal to clone the example repository as follows:
4. Build the project
Run the maven “install” goal at the project root:
It’s important to run “install” because the Payara Micro Maven plugin has been configured to run the bundle goal at the install phase.
5. Run the services with the JRebel agent
The StockTicker service should be started first and the StockWeb service second. Using the
--autobindhttp option means that port conflicts won’t be an issue, just keep an eye on the logs after startup to see what URL to use. Open 2 terminal sessions and run the following commands from the project root directory in different sessions:
You should see Payara Micro boot and deploy in both terminals. The StockTicker service will start logging the Stock objects it is generating and the StockWeb service will start logging each time it receives a new Stock event:
You can also view the output in a browser:
Note that the port here is 8081 because port 8080 was already taken by the StockTicker service and
--autobindhttp incremented to find the next free port.
6. Improve the code!
Since I have included the necessary rebel.xml files in the repository, there should be no extra configuration needed. If this wasn’t the case, the configuration files can easily be generated in the IDE by enabling check boxes on the correct modules as shown below:
Since we already have everything configured for JRebel to work, we can make changes to the code right away. We will start by first making the “received stock event” message more informative. That way, we can more easily see how quickly the Stock events propagate between JVMs.
In the observer method of StockSessionManager.java in the StockWeb service, we will change the println() statement from:
7. Immediately reload the change!
In a separate terminal window, navigate to the project root directory and run
mvn clean install. You should immediately see JRebel update the class in StockWeb (even if you are running multiple instances of StockWeb, as shown below!):
8. Speed things up even more!
In the last step, we were still using the maven goals “clean install”, meaning that Maven first removes all existing compiled files and recompiles everything again. Additionally, since I have configured the Payara Micro Maven plugin to package the uber JAR at the install phase, the uber JAR itself gets rebuilt.
JRebel doesn’t actually require all this ceremony to detect and reload changes, though, so we can avoid some of that extra work by simply running “mvn compile” which will avoid rebuilding things without changes and, crucially, only compile the classes themselves. What this means is that we need to be careful to actually repackage the uber JARs if we wanted to distribute them for testing.
Some further speed enhancements can be found from reading through Rebel Labs’ Maven cheat sheet. The options I used were:
Make sure that Maven builds offline and doesn’t check for updates online
My (aging) desktop has a hyper-threaded quad core CPU, so this tells Maven to use 8 threads (no real effect in this small example, but has more effects in real use)
Probably the most important option – tell Maven to only compile the classes in the module where we’ve made changes.
All of these changes take us from a 5.190 second build time:
…to a 1.110 second build time:
Why should I care?
JRebel solves an odd sort of problem in that most people don’t realise that they have a problem. As I mentioned above, I thought the development process I had was getting close to as good as you can get! The truth is, I simply hadn’t experienced anything different.
Let me illustrate.
Readers above a certain age will remember the age of dial-up Internet. I can still remember the days of having to disconnect when someone needed to make a phone call. Back then, I would visit websites and open multiple links in new windows (no tabs back then!) so that, after opening 7 or 8 links, I knew the first page would nearly be loaded. Browsing the Web in the 90s was an objectively slow experience, but that’s all anyone knew. Going back to those speeds would be unacceptably painful today, even when considering the smaller sizes of webpages in the 90s.
It’s easy to see the parallels.
Once you have experienced the faster turnaround times of developing with JRebel, it will be very hard to revert to the old way of doing things.
What are the differences between "Olam" "Netzach" "Selah" "Va'ed" "Adey Ad"?
Closely related: https://judaism.stackexchange.com/q/10087 https://judaism.stackexchange.com/q/28333 . See also this article, and the Hebrew Wikipedia article.
No answer, just the way I think about this question. There are multiple kinds of "eternal" in Jewish thought: (1) will last as long as this world (ledoros = for the generations), (2) will last beyond olam haba -- infinite time, (3) is Beyond Time, Hashem's Eternity in that time isn't a relevent concept. And maybe until the end of history is shorter than until the end of olam hazeh, etc...
Partial answer:
'Eruvin 54a:
תנא דבי רבי אליעזר בן יעקב כל מקום שנאמר נצח סלה ועד אין לו הפסק עולמית
Sefaria translation:
A Sage of the school of Rabbi Eliezer ben Ya’akov taught the following baraita: Wherever it states netzaḥ, Selah, or va’ed, the matter will never cease. Netzaḥ, as it is written: “For I will not contend forever; neither will I be eternally [lanetzaḥ] angry” (Isaiah 57:16), which demonstrates that netzaḥ bears a similar meaning to forever.
Selah, as it is written: “As we have heard, so have we seen in the city of the Lord of Hosts, in the city of our God; may God establish it forever, Selah” (Psalms 48:9), which demonstrates that Selah means forever. Va’ed, as it is written: “The Lord shall reign forever and ever [va’ed]” (Exodus 15:18).
Ibn 'Ezra (T'hillim 3:3) brings various interpretations and concludes that sela serves to affirm that which was just stated, similar to amein:
והנכון כי טעם סלה כמו כן הוא או ככה ואמת הדבר ונכון הוא
("The correct view is that the meaning of selah is like 'so it is' or 'thus': the matter is true and correct.")
Radak (ad loc.) concludes that sela is a musical direction that indicates an accent (similar to sforzando):
ואני אומר כי איננה מלת ענין ופרושה לשון הגבהה מן סלו סלו המסלה (ישעיהו ס״ב:י׳) כלומר באותו המקום שהיא נזכרת ונקראת זאת המלה היתה הרמת קול המזמור
("And I say that it is not a word with semantic content; rather, its explanation is a term of raising, from 'Cast up, cast up the highway' (Isaiah 62:10); that is, wherever this word is mentioned and read, the voice of the psalm was raised.")
Branchless comparison in x86_64
Recently I took a course in discrete math. The professor told us that branching is slower than branchless code.
AFAIK modern CPUs use pipelining to increase efficiency, so branching in CPUs essentially means assuming the result of a previous instruction and doing operations based on that assumption until the previous instruction finishes, then comparing the assumption with the actual result.
If they match, the results based on the assumption are accepted; otherwise the operations are redone using the actual value previously produced.
My questions are:
The real slow part is when the assumption mismatches the actual result of the previous instructions, isn't it?
1-1. If CPUs always guessed right, then it's actually faster right?
The following is x == y in x86_64 asm:
2-1. Branching
XeqY1:
    xor rax, rax    ; clear rax
    cmp rdi, rsi    ; compare X and Y
    sete al         ; set al = 1 if equal
2-2. Branchless
XeqY2:
    mov rax, rdi    ; rax = X
    sub rax, rsi    ; rax = X - Y
    sub rsi, rdi    ; rsi = Y - X
    xor rax, rsi    ; tmp = (X-Y)^(Y-X)
    not rax         ; invert tmp
    shr rax, 63     ; get sign bit
2-3. Of the two above, 2-1 takes only about 3 cycles; 2-2 takes about 6 cycles, double the branching version. Does it really work faster? It seems weird to me that 2-1 would be slower than 2-2 (according to the professor).
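As a side note, the logic of the 2-2 sequence can be sanity-checked outside the CPU by emulating 64-bit wraparound arithmetic in Python. This is only a sketch of the bit trick, and it exposes a subtle caveat: when X-Y wraps to exactly 1 << 63, the trick wrongly reports equality.

```python
MASK = (1 << 64) - 1  # keep every intermediate to 64 bits, like rax/rsi

def xeq_branchless(x, y):
    t = ((x - y) & MASK) ^ ((y - x) & MASK)  # xor of (X-Y) and (Y-X)
    t = ~t & MASK                            # not rax
    return t >> 63                           # shr rax, 63: the sign bit

print(xeq_branchless(5, 5), xeq_branchless(5, 6))
# Caveat: when X-Y wraps to exactly 1 << 63, (X-Y) and (Y-X) are equal as
# 64-bit values, so e.g. xeq_branchless(1 << 63, 0) wrongly returns 1.
```

This kind of overflow corner case is one more reason the plain cmp/sete version in 2-1 is preferable.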
EDIT 13:53
Thanks for the replies!!
I didn't know that SETcc and CMOVcc was branchless, what if change 2-1 to:
XeqY1:
    xor eax, eax    ; rax = 0 (default result)
    cmp rdi, rsi
    jne .L0         ; skip the next instruction if X != Y
    mov rax, 1
.L0:
Does this change make it non-branchless?
If so, will it be faster or slower to 2-2?
2-1 is also branchless.
Related: Which instructions can produce a branch misprediction on x86 CPUs? - only instructions that change RIP, so not setcc or cmovcc.
For 2-2, how do you figure 6 cycles? There's some instruction-level parallelism, and all real x86-64 CPUs are superscalar. It's obviously terrible vs. 2-1, but its latency critical path is at worst 5 cycles, maybe 4 with mov-elimination, for CPUs with 1 cycle latency for everything. (So maybe not Prescott P4 with its slow shifter). And throughput in terms of overlap with independent work, lots of room, only 6 uops, goes through the front-end of Skylake in 1.5 cycles. See this Q&A
There is a real effect to talk about, yes branch mispredicts are somewhat costly, but this code doesn't demonstrate it because sete isn't a branch. IDK exactly what question you want to ask after finding out that the premise of your example is false, but if you want to edit it into something different we could reopen. But see Modern Microprocessors: A 90-Minute Guide! and What exactly happens when a skylake CPU mispredicts a branch? first.
Thanks for replying! The cycle estimate is from a chart I saw a while ago; IDK if the chart is correct or not. It's my fault for not including the source, but I forgot how I found it. I thought that as long as a conditional was involved, branching would occur. I'm just newly learning assembly and have not yet taken courses on architecture or working principles. Really appreciate you spending time on the questions!
You're right, your professor is over-simplifying. Predicted branches cost almost nothing; "branchless" prevents the CPU from using an assumed/predicted condition and is often worse than a predicted branch; and a mispredicted branch is much worse than "branchless". Also note that on 80x86 "branchless" is very limited and can only be considered for simple expressions (e.g. you can't do something like "if(x == 0) { close(file); }").
"Branch" means "conditional jump", where the actual flow of execution may change depending on the condition. In x86, that's precisely the Jcc instructions. The instruction after Jcc in memory may or may not be executed, depending on the truth of the condition. SETcc and CMOVcc are not branches, and the instruction following them will be executed no matter what.
Thanks again! So the reason SETcc and CMOVcc are branchless is that the instruction behind them will execute regardless of whether the condition is fulfilled, is that correct?
@洪明聖 It's because these are not conditional branch instructions. A conditional branch instruction is one that diverts the control flow depending on a condition. SETcc and CMOVcc do not divert the control flow and are not conditional branches. They are just arithmetic instructions taking the flags for inputs.
@fuz: I later remembered we had some better duplicates that explain this: Is CMOVcc considered a branching instruction? and Why is a conditional move not vulnerable for Branch Prediction Failure? - added those to the list of duplicates. (洪明聖 - you can notify people when you reply to their comments, with @ user like I did for fuz. Your reply didn't show up in my inbox; I just happened to look at this question again to add duplictes.)
@PeterCordes Sorry for that, I'm new to using this site. The articles you sourced to, that's exactly what I'm confused about. U R THE BEST!! Thank you!!
# standard libraries
import warnings

# ccbb libraries
import ccbb_pyutils.files_and_paths as ns_file

# project-specific libraries
import dual_crispr.construct_file_extracter as ns_extracter


# public
def id_library_info(library_name, library_dir):
    libraries_dict_list = _extract_library_info_from_files(library_name, library_dir)
    result = _id_and_validate_library(library_name, libraries_dict_list)
    return result


# private
def _get_library_fp_key():
    return "library_fp"


def _get_library_name_key():
    return "library_name"


def _get_library_settings_keys():
    return ["max_trimmed_grna_len", "min_trimmed_grna_len"]


def _get_settings_separator():
    return "="


def _extract_library_info_from_files(library_name, library_dir):
    libraries_dict_list = []
    text_fps = ns_file.get_filepaths_from_wildcard(library_dir, ".txt")
    for curr_fp in text_fps:
        with open(curr_fp, 'r') as curr_file:
            library_dict = _mine_library_file(library_name, curr_fp, curr_file)
            if library_dict is not None:
                libraries_dict_list.append(library_dict)
    return libraries_dict_list


def _mine_library_file(library_name, curr_fp, curr_file):
    result = None
    first_line = curr_file.readline()
    first_line_settings = _id_and_trim_settings_line(first_line)
    if first_line_settings is not None:
        if first_line_settings[0] == _get_library_name_key():
            if first_line_settings[1] == library_name:
                result = {first_line_settings[0]: first_line_settings[1],
                          _get_library_fp_key(): curr_fp}
                # read as many lines as there should be settings
                for i in range(0, len(_get_library_settings_keys())):
                    curr_line = curr_file.readline()
                    curr_settings = _id_and_trim_settings_line(curr_line)
                    if curr_settings is not None:
                        result[curr_settings[0]] = curr_settings[1]
    return result


def _id_and_trim_settings_line(a_line):
    result = None  # assume line does not contain valid settings
    if a_line.startswith(ns_extracter.get_comment_char()):
        # Take any comment characters off the left end of the string, and then whitespace off both ends
        trimmed_line = a_line.lstrip(ns_extracter.get_comment_char()).strip()
        # split on = and trim result
        split_line = [x.strip() for x in trimmed_line.split(_get_settings_separator())]
        # a real settings line should split into two pieces (key and setting) on the settings separator,
        # and neither of the two pieces should be empty strings. Note that in python an empty string
        # evaluates to false, so all(split_line) "ands" together every string in split_line, and we are
        # checking that the "and" of all these strings is true--i.e., every string is non-empty
        if len(split_line) == 2 and all(split_line):
            result = tuple(split_line)
    return result


def _id_and_validate_library(library_name, libraries_dict_list):
    # validate that we found one and only one library by this name
    _validate_library_id(library_name, libraries_dict_list)
    library_dict = libraries_dict_list[0]
    # validate that all expected keys are present
    _validate_settings_keys(library_name, library_dict)
    # validate that all keys have values
    _validate_settings_values(library_name, library_dict)
    return library_dict


def _validate_library_id(library_name, libraries_dict_list):
    # validate that we found one and only one library by this name
    if len(libraries_dict_list) == 0:
        raise ValueError("No library file found with library name '{0}'".format(library_name))
    elif len(libraries_dict_list) > 1:
        raise ValueError("Multiple library files found with library name '{0}': {1}".format(
            library_name, ", ".join([x[_get_library_fp_key()] for x in libraries_dict_list])))


def _validate_settings_keys(library_name, library_dict):
    expected_keys_list = _get_library_settings_keys()
    expected_keys_list.extend([_get_library_name_key(), _get_library_fp_key()])
    expected_keys_set = set(expected_keys_list)
    found_keys_set = set(library_dict.keys())

    # warnings done first so that we see them even if there are *also* errors later
    ignored_keys = found_keys_set - expected_keys_set
    if len(ignored_keys) != 0:
        warnings.warn("Library '{0}' includes the following ignored settings: {1}".format(
            library_name, ", ".join(ignored_keys)))

    missing_keys = expected_keys_set - found_keys_set
    if len(missing_keys) != 0:
        raise ValueError("Library '{0}' is missing the following expected settings: {1}".format(
            library_name, ", ".join(missing_keys)))


def _validate_settings_values(library_name, library_dict):
    problem_keys = []
    for curr_key, curr_value in library_dict.items():
        if not curr_value:
            problem_keys.append(curr_key)
    if len(problem_keys) > 0:
        raise ValueError("Library '{0}' is missing settings values for the following settings: {1}".format(
            library_name, ", ".join(problem_keys)))
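For reference, the comment-prefixed "key = value" convention that _id_and_trim_settings_line expects can be exercised standalone. This is a sketch that mirrors its logic: the real comment character comes from construct_file_extracter; "#" is assumed here.

```python
COMMENT_CHAR = "#"  # assumed; the real value comes from ns_extracter.get_comment_char()

def parse_settings_line(line, sep="="):
    """Standalone mirror of _id_and_trim_settings_line's parsing logic."""
    if not line.startswith(COMMENT_CHAR):
        return None
    pieces = [p.strip() for p in line.lstrip(COMMENT_CHAR).strip().split(sep)]
    # valid only if we get exactly two non-empty pieces (key and value)
    return tuple(pieces) if len(pieces) == 2 and all(pieces) else None

print(parse_settings_line("# library_name = CV4"))
print(parse_settings_line("plain text line"))
```

A library file is therefore expected to begin with a "# library_name = ..." line followed by one commented line per settings key.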
The development of low pressure areas and the timelines of these turning into hurricanes can vary from a few hours to a few days. A worst case scenario could have an advance notice of less than 12 hours, making it difficult to quickly obtain resources for an extensive set of investigatory model runs and also making it imperative to be able to rapidly deploy models and analysis data.
One obvious solution would be to dedicate a set of supercomputers for hurricane prediction. This would however require a significant investment to deploy and maintain the resources in a state of readiness; multiple sites would be needed to provide reliability, and the extent of the modeling would be restricted by the size of the machines.
A different solution is to use resources that are deployed and maintained to support other scientific activities, for example the NSF TeraGrid (which will soon be capable of providing over 1 PetaFlops of power), the SURAgrid (developing a community of resources providers to support research in the southeast US), or the Louisiana Optical Network Initiative (LONI) (with around 100 TeraFlops for state researchers in Louisiana). Section 3 describes some of the issues involved when resources are provided to both a broad community of scientists and to support urgent computing.
The impact of a hurricane is estimated from predicted storm surge height, wave height, inundation and other data. Coastal scientists provide estimates using a probabilistic ensemble of deterministic models to compute the probability distribution of plausible storm impacts. This distribution is then used to obtain a metric of relevance for the local emergency responders (e.g., the maximum water elevation, or MEOW) and to get it to them in time to make an informed decision. Thus, for every cycle there will be an ensemble of runs corresponding to the runs of all the models for each of the set of perturbed tracks. The SCOOP Cyberinfrastructure includes a workflow component to run each of the models for each of the tracks. The NHC advisory triggers the workflow, which runs models to generate various products that are either input to other stages of the workflow or are final results that end up as visualized products. Figure 3 shows the SCOOP workflow from start to end and the interactions between various components.
During a storm event, the SCOOP workflow is initiated by an NHC advisory that becomes available on an FTP site, which is continuously polled for new data. When new track data is detected, the wind field data is generated and pushed to the SCOOP archives using the Logical Data Manager (LDM) to handle data movement. Once the files are received at the archive, the archive identifies the file type and fires a trigger that launches the execution of the wave and surge models. The trigger invokes the SCOOP Application Manager (SAM), which looks up the Ensemble Description File (EDF) to identify the urgency and priority associated with the run. The urgency and priority of a run, and how the SCOOP system uses this information, are elaborated in the next section.
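The ensemble fan-out described above (every model run once per perturbed track, each forecast cycle) can be sketched as follows. This is an illustrative sketch only; the names are hypothetical and do not correspond to SCOOP's actual components.

```python
def build_ensemble(advisory_tracks, models):
    """Cross every perturbed track with every model: one run per
    (model, track) pair in each forecast cycle, as the text describes."""
    return [(model, track) for track in advisory_tracks for model in models]

# Hypothetical cycle: three perturbed tracks, a wave model and a surge model.
runs = build_ensemble(
    ["track_center", "track_left", "track_right"],
    ["wave_model", "surge_model"],
)
for model, track in runs:
    print(f"queue {model} on {track}")  # in SCOOP, SAM would prioritize these
```

The urgency and priority looked up in the EDF would then determine the order in which these queued (model, track) runs are dispatched to resources.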
I have a homework assignment that is asking me to loop through a MySQL query and display the results in table rows. I have a snippet of what the finished assignment is supposed to look like down below. I also need to go inside the href for the “Details” link and echo out the id of each row, and I need the total # of customers, as the screenshot below shows.
I have spent 20-30 minutes Googling and reading posts on various sites to try to get some help with this assignment, but I have not been able to get the help I need. I guess I need someone to help walk me through the rest of this assignment. My professor works in the private sector, teaches as an adjunct professor, and has a very big family to take care of, so he is not available a lot of the time.
Can someone help me with some tips, suggestions, and/or point me in the right direction?
include 'connection.php';
$query = "SELECT id, first_name, last_name FROM customers ORDER BY last_name asc";
$stmt = $db->prepare($query);
$stmt->execute();
<!doctype html>Granger Customers
<div class="row" style="margin-bottom:30px">
  <div class="col-xs-12">
    <ul class="nav nav-pills">
      <li class="nav-item">
        <a class="nav-link" href="index.php">Home</a>
      </li>
      <li class="nav-item">
        <a class="nav-link" href="customers.php">Customers</a>
      </li>
      <li class="nav-item">
        <a class="nav-link" href="">Search</a>
      </li>
      <li class="nav-item">
        <a class="nav-link" href="">Add New</a>
      </li>
    </ul>
  </div>
</div>
<div class="col-xs-12">
  <p><strong>There are a total of XXX customers.</strong></p>
  <table class="table table-bordered table-striped table-hover">
    <thead>
      <tr class="success">
        <th class="text-center">#</th>
        <th>Customer ID</th>
        <th>Last Name</th>
        <th>First name</th>
        <th class="text-center">Details</th>
      </tr>
    </thead>
    <tbody>
      <!-- start of row that you need to include for each item returned from the query -->
      <tr>
        <td class="text-center active">1</td>
        <td>20597</td>
        <td>Aadil</td>
        <td>Kareem</td>
        <td class="text-center"><a class="btn btn-xs btn-primary" href="details.php?id=20597">Details</a></td>
      </tr>
    </tbody>
  </table>
</div>
How to move git commits from master to a different existing branch
I checked in a few commits to master that should have been checked into develop. What git commands do I use to remove those commits from the master branch and include them in the develop branch?
Have you looked at this answer? - http://stackoverflow.com/questions/1773731/in-git-how-do-i-remove-a-commit-from-one-branch-and-apply-it-to-a-different-bra?rq=1
look at this: move-recent-commit-to-a-new-branch
since there's a "this question may already have an answer here" block at the top, just want to point out my question is not about moving to a new branch, it's about moving to a different existing branch
@gitq: See Dan Hoerst's question for a solution. It's called cherry picking and works with existing branches.
If I'm not mistaken, you had two synchronized branches, master and dev, and simply forgot to switch branches before your commits.
If that is the case, we have:
----------------
git log in dev
xxx
yyy
...
----------------
and:
----------------
git log in master
ccc
bbb
aaa
<---- here you forgot to switch branch
xxx
yyy
...
----------------
The solution is:
First, make sure, that:
git status -s
returns empty results.
Next, get all your new commits from master to dev with:
git checkout dev
git merge master
Now return to your master:
git checkout master
Remove unnecessary commits:
git reset --hard HEAD~3
The number ~3 is the number of commits you want to remove.
Remember: git status -s has to return empty results.
Otherwise, git reset --hard can cause data loss.
You can cherry-pick your commits over to the develop and afterwards interactively rebase your master branch:
git checkout develop
git cherry-pick aabbcc
git cherry-pick ddeeff
....
git checkout master
git rebase 123456 -i
where 123456 is a commit before you made a mistake. This will open an editor that shows every commit that will be affected by the rebase. Delete the lines that correspond to the commits you want to discard and quit the editor.
For copying into another branch you can use cherry-picking:
git cherry-pick <commit>
Deleting is not that easy. You can use rebase and squash or edit the commit:
git rebase -i <commit>~1
But I am not sure whether, when choosing edit during a rebase, you can also edit the files rather than only the commit message.
edit during an interactive rebase lets you edit commit-message and files, reword only lets you modify the commit-message.
After this, my head is detached at this commit. How should I make these changes to reflect on the remote branch?
There are usually several ways to do the same thing in git. One possible way:
git checkout develop
git cherry-pick XXX  # XXX being the sha1 of the commit you want to grab
git checkout master
git rebase --interactive HEAD~IDX  # IDX being the position of the last "good" commit compared to HEAD
The last command will display all revisions from HEAD back to the last good commit; all you have to do is delete the lines of the commits to be moved to the develop branch.