Geo-social visual analytics has been studied by Luo and MacEachren, who found that in social network analysis, spatial data is treated as background information, while in geographical analysis, network analysis is oversimplified. Hence, they argued for integrating knowledge from both analyses for datasets that have both geographic and social network data, e.g. location-based social networks. They called this integrated analytical approach geo-social visual analytics. In their work, they proposed a theoretical framework in which the two contexts can be brought together, and they reviewed the state of the art in the three core tasks for geo-social visual analytics, namely exploration, decision making, and predictive analysis, to find gaps. They used this gap analysis to identify potential challenges and research directions. Here, we discuss the authors’ views on the conceptual framework and the core tasks of data exploration and decision making.
- The First Law of Geography says, “Everything is related to everything else, but near things are more related than distant things,” and the social network analysis dogma says, “Actors with similar relations may have similar attributes/behavior.” The authors discuss how the First Law of Geography and the social network analysis dogma must both be combined to define geo-social relationships. At a conceptual level, these relationships are considered to be the intersection of three kinds of embeddedness: societal, network, and territorial. Thus, the conceptual framework for geo-social relationships states that “nearness can be considered a matter of geographical and social network distance, relationship, and interaction.”
- The core task of data exploration must consider the relative strength between the two aspects of the data. One aspect qualifies geo-social relationships as being “among geographical areas” and the other as being “among individuals at discrete locations.” While the visualizations generated are similar for both, the latter requires additional computational processes, since exploring spatial-social human interactions “focus[es] on developing quantitative representations of human movements.”
- For the core task of decision making, spatial data analyses exploit spatial locality to study trends and relate them to explanatory covariates such as demographic data. Network analysis grows in a bottom-up fashion, whereas geographical analysis tends to be top-down. Integrating both can be done using linked methods, where individual visualization techniques cater to the specific needs of each analysis.
Some of the future research directions are towards:
(a) developing theory, methods, and tools, integrating the two separate aspects of the data;
(b) understanding the dynamics of geo-social relationships and processes;
(c) integrating ideas in cognitive sciences supporting geo-social visual analytics; and
(d) developing new geo-visual analytical methods for the three core tasks.
Virtual Reality for Geovisual Analytics has been studied by Moran, who proposed that one could separate the preprocessing steps of data modeling and analytics from the visualization processes. The visualization process can then be implemented on a virtual reality platform so that the user gains better situational awareness and can use natural user interactions for further analysis.
While the visual representations are approximately the same in immersive and non-immersive visualizations, we find that the user interactions for analytical tasks are improved in immersive geovisual analytics. The five tasks in the visualization workflow are navigation or exploration; identification and selection; querying and filtering; clustering; and details-on-demand. Navigation is achieved through fly-through cameras with changes of perspective, where the user can virtually navigate through a life-size virtual model of the space. Depending on the user’s approach to objects in the scene, zooming can also be used.
Identification exploits the differences in rendering different geometric primitives for the visual representation of the data, e.g. shape and color can be used for “picking” features of interest. Filtering and querying can be provided using a virtual keyboard and menu options in the Graphical User Interface (GUI). Clustering and pattern matching can be done by virtually overlaying or stacking different layers pertaining to the data, e.g. height of the data with population. Details-on-demand can be implemented using the levels of organizing data and drilling down deeper as required. In immersive virtual reality, the user can use a directed 3D arrow to interact directly with the scene and move as required.
The age of the field of geovisualization is not certain; MacEachren and Kraak (1997) observe that there are references to visualization on paper as early as Philbrick (1953).
|
OPCFW_CODE
|
Starter project summary:
We are looking for experienced developers to build a custom Android launcher with a secure kiosk mode limiting access to a specific Signature app, and including internationalization in English and French.
**** WE ARE ONLY LOOKING FOR ANDROID EXPERTS WITH PREVIOUS LAUNCHER AND SYSTEM CODING EXPERIENCE **** DO NOT BID IF YOU CANNOT COMPLETE THIS JOB IN 45 DAYS OR IF YOU HAVE ANY DOUBTS ABOUT HOW TO CODE IT ****
**** Our internal team coding time expectation is between 12 and 15 days for this project ****
Our specific hardware, where this launcher will be deployed, is already rooted and varies from Android 4.4.4 to Android 8. This launcher will be configured as the "owner of the Android tablet" for extended privileges and system-requested operations.
All development must be done with the native Android Java SDK and built on the predefined [login to view URL] attached files. No other languages, no code generators, no cross-platform compilers…
All your code should be structured, documented and non-redundant. Your code will be rebuilt under Android Studio by our development team before being signed and published on the Google Play store. Our development team will check coding quality and the pertinence of your comments.
The APK to develop includes:
- A boot and welcome screen, an unlocking screen, a setup screen, an upgrade screen, a log screen, a Test Internet screen, and a scripting screen.
- Kiosk mode operation, starting our Signature app at boot time and watching it every 5 seconds to restart it in case of crash or exit.
- Local scripting operation for maintenance and software upgrades.
- JSEC API calls in the background for registering, logging and keep-alive messages.
Our development team will provide :
- the starter APK source package (attached file),
- the French translation of XML files,
- the mandatory server-side HTTPS JSEC REST API with English documentation,
- a fully working testing environment,
- the Signature application to monitor,
- everyday support in case of questions or doubts during your development,
- good-practice development and documentation rules…
Once accepted, we will do the Google Play submission of the Starter.
Payments will be made according to these rules:
- 15% after visual approval of all screens: delivery = compiled APK with all screens available (10 days to deliver this APK after project attribution)
- 35% after beta APK testing including all requested features: delivery = first fully working APK + source code (30 days maximum after project attribution)
- 25% after correction of listed problems and final fully documented source delivery + Google Play store deployment acceptance (45 days maximum after project attribution)
- 25% after 45 days of production without problems, to validate the stability of your development (final corrected source code delivery + 45 days)
|
OPCFW_CODE
|
Hey, and welcome to Scratch Games day 8. Today you’ll create a side-scrolling game while learning about if-else statements. You might be familiar with recent side-scrolling games, like Helicopter and Flappy Bird. You’ll notice that in these games, the sprite only moves up and down. The illusion of moving forward is created by moving the background.
To create today’s game, you’ll also learn about if-else statements.
“If-else” statements are very similar to if statements. Remember, an “if” statement reads like this: “If [condition], then [action].” An “if-else” statement reads like this: “If [condition], then [action]; else, [other action].” “If-else” statements are useful because they let you specify a default action.
Watch how “if-else” statements are used to program video games.
In “Tetris,” the best-selling video game of all time, the user must manipulate falling shapes to arrange them into rows. IF the user presses the down arrow, the falling shapes fall down really fast. ELSE, they fall at the regular speed. An if-else statement is useful here because the falling shapes need to do one action all the time, but switch to doing another action when something specific happens.
In Plants vs Zombies, the user must defeat waves of zombies using plants with special powers.
IF there is a zombie near the scaredy shroom, it cowers down and doesn’t shoot.
ELSE, it continuously shoots at the zombies. Just like the creators of those popular games, you’ll use if-else statements to build your project today. In this game, a sprite is controlled using an if-else statement. For example: if the space bar is being pressed, then go up.
Else, go down. Before moving on to the next screencast, click and open the starter project link next to this page and add a player sprite. You may want to choose a sprite that looks like it’s flying, like a parrot, bat, or butterfly.
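For readers curious how the same rule looks in a text-based language, here is the space-bar logic sketched in Python (in the actual project you will build it from Scratch blocks; the function and variable names below are my own illustration, not part of the lesson):

```python
# The space-bar rule from the lesson, written out in Python instead of
# Scratch blocks.
def next_y(y, space_pressed, step=5):
    if space_pressed:        # IF the space bar is being pressed...
        return y + step      # ...the sprite goes up
    else:                    # ELSE (the default action)...
        return y - step      # ...the sprite goes down

# A few frames of the game loop: pressed, pressed, released.
y = 0
for pressed in [True, True, False]:
    y = next_y(y, pressed)
print(y)  # 5
```

Just as in Scratch, the sprite always does exactly one of the two actions each frame, which is what makes if-else the right tool.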
Once you’ve completed this screencast, move on to the next screencast to learn how to create a scrolling background.
"Tetris - NES Gameplay" by NESguide.com (https://www.youtube.com/watch?v=CvUK-YWYcaE) -- Licensed by Creative Commons Attribution-Share Alike 3.0 Unported (https://creativecommons.org/licenses/by-sa/3.0/deed.en) -- Video trimmed to needed length | Audio removed | Cropped on edges | Video scaled up, and blurred in background
"Plants vs Zombies 2 : It's About Time! - Pyramid of Doom - Level 107 (IOS) Gameplay Walkthrough" by Captain Hack vs Zombies (https://www.youtube.com/watch?v=lO9sjCBo9m80) -- Licensed by Creative Commons Attribution-Share Alike 3.0 Unported (https://creativecommons.org/licenses/by-sa/3.0/deed.en) -- Video trimmed to needed length | Audio removed | Video cropped on edges | Video scaled up, and blurred in background
"Flappy Bird Gameplay on iOS" by Cammygirl192 (https://www.youtube.com/watch?v=As-5sPilIx0) -- Licensed by Creative Commons Attribution-Share Alike 3.0 Unported (https://creativecommons.org/licenses/by-sa/3.0/deed.en) -- Video trimmed to needed length | Audio removed | Video scaled up, cropping edges and blurred in background | Black border added around video
|
OPCFW_CODE
|
Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin
Publisher: Prentice Hall
Over the past 10 years, I've learned a lot from Robert C. Martin's writings. His previous book was a rewarding read and I had high expectations for Clean Code. It covers a vitally important area: the quality of our code matters. Any book that manages to teach us how to develop clean code is mandatory reading in my world. Unfortunately, I cannot put Clean Code in that category.
If you're programming in a language that doesn't limit your creativity to message-passing object-orientation, isn't statically typed, and doesn't go by the name of Java, then this book is of little use. Obviously, fundamental topics like naming, cohesion, and abstraction transcend Java. The chapters covering those topics unfortunately do not; the bulk of the writing addresses problems either inherent in the Java language or problems that simply do not exist in other languages or paradigms. The authors never acknowledge this. Worse, due to all the accidental complexity of Java, it's simply a bad choice for communicating and teaching coding skills.
So, the scope of the book is narrower than you would expect from the title or the back cover. And even if you consider it a pure Java book, it has some limitations. I'm a big fan of Michael Feathers' writings, and indeed, his chapter on error handling (a topic that gets way too little attention in most programming books) is well written. I just wish he'd gone deeper; the chapter focuses almost exclusively on exception handling and, in my view, exceptions != error handling. When it comes to error handling, you have to take the perspective of the user; that, not what's technically comfortable, is what should drive your error handling strategy. Overall, too much code in this book falls into the all-too-common catch-and-log trap. I mean, logging an exception and continuing may be a convenient way for you to debug your software, but it will not impress me as a user of the application. Many programs take that somewhat narrow and technical perspective. There's definitely a need for an in-depth coverage of real error handling strategies.
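To make the catch-and-log trap concrete, here is a sketch in Python rather than the book's Java (the function names and the `--config` flag are invented for illustration):

```python
import logging

# The catch-and-log trap: the error is recorded for the developer, but the
# program limps on with a meaningless result the user never learns about.
def load_config_bad(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError as exc:
        logging.error("failed to read config: %s", exc)
        return ""   # silently continue with an empty config

# User-perspective handling: either recover in a documented way, or stop
# with a message the user can act on.
def load_config(path, fallback=None):
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        if fallback is not None:
            return fallback  # explicit, intentional recovery
        raise SystemExit(f"Config file {path!r} is missing; create it or pass --config.")
```

The second version forces a decision about what failure means to the user, which is the perspective the chapter largely leaves out.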
Clean Code is a team effort, although that fact is well hidden on the book's cover. Robert C. Martin has written only part of the book; the rest of the chapters are contributed by his Object Mentor colleagues. As in many other books where different authors contribute their own chapters, advice is repeated from chapter to chapter and the authors sometimes even contradict each other. Among the worst examples: some chapters use poorly chosen class names - and this in a book stating the importance of good naming - while others use checked exceptions despite a chapter recommending against their usage. At times it gets unintentionally funny; the stack presented as an example of high cohesion has a cohesion problem! Sure, it's a subtle one, but if you write a book called Clean Code I expect you to know and address issues like this.
If you've been following along, you can probably feel my disappointment with the book. Still, if you're a Java developer the book does have a lot to offer (parts of it may also be applicable to C# and other relatives). However, if you really want to improve your coding (and I'm talking about going beyond the seriously limited design space of Java) there are better books waiting to be read. I would definitely recommend spending time with SICP instead. A quarter century old, it's still the best example of truly great code I've come across; its lessons and principles are timeless and applicable way beyond any contemporary technological fad.
Reviewed February 2009
|
OPCFW_CODE
|
Python is one of the most widely used and versatile programming languages — but where do you start? Fortunately, there are plenty of great resources out there for learning Python, and getting started with some simple projects can help build your skills. In this guide, we’ll give you some fun and simple projects that will develop your coding skills, help you dive into the Python language and give you a good foundation to build on.
Why Learning Python Is Worth It
If you’re interested in programming, Python is a great place to start, and learning Python can take you far in the programming world. It is a high-level language with a low barrier to entry and is often touted as a great option for beginners who want to learn to code.
One of Python’s biggest advantages is its versatility. Python is used in applications, web development, game development, data analysis, machine learning and much more. There are even Web3 programming languages that are similar to Python. Once you have a solid understanding of Python, you’ll have the skills to tackle a variety of projects.
Another big advantage of Python is its active community. The Python community provides tons of online resources and support that make it easy for beginners to learn and find answers to their questions. Python programmers can easily find free tutorials, books, forums and more to build their skill sets.
Finally, Python is very user-friendly. The syntax makes code writing and reading much easier compared to many other programming languages. This clear and concise syntax also makes it easier to debug and maintain Python code.
Fun Mini Projects Using Simple Python Code
Now that you have a better idea of the benefits of Python code, let’s dive into some of the projects you can start doing as you learn. These mini-projects are not only fun to build, but they provide skill-building lessons for different areas of the Python programming language. Without further ado, here are some of the best projects you can make with simple Python code.
Basic Calculator
Building a basic calculator is a perfect example of how easy it is to create simple yet functional programs in Python. The built-in mathematical functions in Python make it easy for beginners to create a calculator. By writing just a few lines of simple code, new programmers can make their calculators perform simple operations like addition, subtraction, multiplication and division. As you learn more coding skills, you can add more advanced features like percentage calculations or square root operations.
Creating a calculator is an easy and fun way to start your programming journey. It’s unlikely that you’ll run into many frustrations, which can sometimes be a barrier to those who are just starting to learn to code. To make things even easier, here’s an example project to get you started.
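A minimal sketch of the calculator core might look like this (the function name and structure are illustrative, not a required design):

```python
def calculate(a, b, op):
    """A minimal four-function calculator core."""
    if op == "+":
        return a + b
    if op == "-":
        return a - b
    if op == "*":
        return a * b
    if op == "/":
        if b == 0:
            raise ValueError("cannot divide by zero")
        return a / b
    raise ValueError(f"unknown operator: {op}")

# A tiny interactive version could wrap this with input(), e.g.
#   a = float(input("First number: "))
print(calculate(6, 7, "*"))   # 42
```

From here, extras like percentages or square roots are just more branches (or a dictionary of operations).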
Dice Rolling Simulator
Another great beginner project is creating a simple program that mimics rolling dice. The concept of the project is simple — the program generates a random number between 1 and 6, similar to a dice roll. This is an interesting project for beginners because it teaches you syntax and introduces you to Python’s random module.
Python’s random module is a pre-written code library that allows you to easily generate random numbers. Python has several open-source libraries and modules that can help beginners build programs and enhance their code.
This project will help you build on the skills you started learning in the calculator project. By building this dice-rolling program, you’ll be well on your way to building more complex projects! Here’s an example you can try.
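A sketch of the simulator, showing the random module in action (the function name is my own choice):

```python
import random

def roll_dice(sides: int = 6) -> int:
    """Return a random roll of a die with the given number of sides."""
    return random.randint(1, sides)  # inclusive on both ends

# Roll a standard six-sided die a few times.
for _ in range(3):
    print(roll_dice())
```

A natural extension is to loop until the user types "quit", which practices both `input()` and `while` loops.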
Password Generator
A password generator can be a fun and useful tool to build using Python. A simple password generator should be able to create unique and random passwords using a combination of letters, numbers and special characters. This project shouldn’t require any overly complex algorithms, so it’s great for beginners.
Python gives programmers access to built-in functions for generating random strings. There are also built-in functions for understanding user inputs, which makes building a password generator even easier. Programmers can use Python’s random module to generate a password based on prompts like the desired length, the types of characters to include and more.
After completing this project, you should know how to use the input function, how to work with strings and random numbers and how to use if statements. This will help you gain confidence in your programming skills and get a useful password generator along the way! Here’s a guide to get you started.
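One way to sketch the generator with the standard library (the symbol set and function name are illustrative choices):

```python
import random
import string

def generate_password(length: int = 12, use_symbols: bool = True) -> str:
    """Build a random password from letters, digits and (optionally) symbols."""
    pool = string.ascii_letters + string.digits
    if use_symbols:
        pool += "!@#$%^&*"
    # random.choice is fine for a learning project; for real credentials,
    # Python's `secrets` module is the appropriate source of randomness.
    return "".join(random.choice(pool) for _ in range(length))

print(generate_password(16))
```

Wrapping this with `input()` prompts for the desired length and character types covers the user-input side of the project.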
Beginner Data Analysis Using Pandas and Jupyter
Building applications isn’t the only thing you can do with Python. Python is also great for data analysis, especially when combined with the Pandas data analysis library and the Jupyter data science tools. Pandas and Jupyter make it much easier to experiment with data and explore it.
First, you’ll just need to download Pandas and Jupyter. Once set up, you’re ready to start exploring their capabilities. Let’s take a look at some of the data analysis tasks you can do in Python with Pandas and Jupyter.
View a Dataset
One of the best ways to start using Pandas and Jupyter is by exploring a data set. You can use a pre-existing data set and import it into your code to get started. There are plenty of free data sets available online for your data analysis practice. You can even choose a topic you’re interested in to make the project more engaging.
You can easily use Pandas to load your data and display it in a table. Once the data is loaded, Pandas also gives you tools for cleaning and manipulating your data.
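A sketch of that loading-and-viewing step (the tiny inline table below stands in for a CSV you would normally load with `pd.read_csv("yourfile.csv")`; all names are made up for illustration):

```python
import pandas as pd

# A small made-up dataset standing in for a downloaded CSV.
data = pd.DataFrame({
    "city":  ["Paris", "Lyon", "Paris", "Nice"],
    "year":  [2020, 2020, 2021, 2021],
    "sales": [120, 80, 150, 60],
})

print(data.head())    # first rows, as Jupyter would display them
print(data.dtypes)    # column types: a first cleaning checkpoint
clean = data.dropna() # drop rows with missing values (none here)
```

`head()`, `dtypes` and `dropna()` are usually the first three calls in any exploration session.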
Once you have a good understanding of your data set from viewing it, you can start to explore and analyze the data in Pandas. Another great tool for viewing your data set is Mito, which you can use in Jupyter to view data sets in a visual spreadsheet.
To get Mito set up in JupyterLab or Jupyter Notebook, you can follow these steps for setting up Mito. Once you have Mito integrated with Jupyter, you can import multiple tabular data frames from multiple sources into Mito, allowing you to work with and manipulate multiple pieces of data simultaneously. You can follow these steps for importing your data.
Create Pivot Tables
One of the most powerful features of Pandas is its ability to create pivot tables. Pivot tables allow you to summarize and group your data in many different ways, giving you valuable insights into trends and patterns.
Those familiar with Excel may have used pivot tables for analyzing and summarizing large data sets in spreadsheets. In Python, creating a pivot table is just as simple, and it’s a great way for beginners to get started with data analysis. If you’re used to doing “group bys” in Pandas or using Excel pivot tables, you can click through to learn how to use pivot tables in Mito.
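With a small made-up table (the column names here are illustrative), a Pandas pivot table is a one-liner:

```python
import pandas as pd

df = pd.DataFrame({
    "city":  ["Paris", "Lyon", "Paris", "Nice"],
    "year":  [2020, 2020, 2021, 2021],
    "sales": [120, 80, 150, 60],
})

# Rows = city, columns = year, values = total sales: the Excel-style pivot.
pivot = pd.pivot_table(df, index="city", columns="year",
                       values="sales", aggfunc="sum")
print(pivot)
```

Cells with no matching rows (e.g. Nice in 2020) come out as NaN, which is itself a useful signal about gaps in the data.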
Graph Your Data
Python also has numerous libraries for creating graphs. Based on your data, you can choose the best graph type, such as bar graphs, line graphs, scatter plots and more.
With Python, you can build graphs that best communicate your data, such as building a line graph to show stock performance over a certain period. Once you choose your type of graph, you can add labels, choose different colors and styles, add a title and more. This makes it easier for others to understand your data.
Graphing data is often a crucial component of data analysis projects. Seeing patterns and trends can be easier when you have a visual representation of your data. With Python and its various libraries, it can be simpler than ever to create graphs for analytics and visualizations.
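A minimal sketch with Matplotlib, one of those graphing libraries (the price figures are invented purely for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

days = list(range(1, 8))
price = [101, 103, 102, 105, 107, 106, 110]  # made-up closing prices

fig, ax = plt.subplots()
ax.plot(days, price, marker="o", color="tab:blue")
ax.set_xlabel("Day")
ax.set_ylabel("Price")
ax.set_title("Stock performance over one week")
# fig.savefig("stock.png") would write it to disk; in Jupyter the
# figure renders inline automatically.
```

Labels, titles and styles like these are what turn a raw plot into something others can read at a glance.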
Mito is a great tool for graphing as well since graphing in Mito allows you to build intuition about your data and easily create presentation-quality graphs. Using the Plotly Express open-source graphing library, programmers can build interactive and customizable graphs in Mito. Mito also automatically generates the equivalent Python code when you create a graph, giving you fine-tuned control over your data analysis. Click through to learn the steps for using Jupyter and Mito to create graphs using the data you pulled from your data sets in the projects we outlined earlier.
Explore and Analyze Your Data
After gathering and importing your data into Jupyter and Pandas, it’s much easier to explore and analyze your data. You will be able to find various tools and functions that allow you to gain insights and discover more about your data.
The aforementioned graphs and charts are a great place to start, as you can have visualizations that help you read your data and identify trends and patterns. The describe function can also help learn statistical metrics about your data, giving you a good foundation for understanding your data.
Once you have a basic understanding of your data, you can start to dive deeper and explore the relationships between different variables. The pivot tables we talked about earlier are a great way to manipulate and compare your data in a convenient and user-friendly environment.
For any data analysis project, it’s good to ask questions and be curious about your data. Finding unexpected insights and correlations can be exciting, and the key to finding these insights is through careful analysis and exploration. When you combine Python with tools like Pandas, Jupyter and Mito, becoming an expert in Python data analysis has never been easier.
Mito Makes It Easier to Use Python for Data Projects
If you’re planning on using Python for data projects, make sure to try Mito. Mito is a Python-based spreadsheet app specifically built for Python data analytics. With Mito, you can easily explore and edit data just like you would in Excel or Google Sheets.
Mito also enables automated spreadsheet workflows to generate Pandas code in real-time to make analysis even easier. Combining Mito with other tools in the Python ecosystem can make your data analysis toolset even more powerful.
Those who are just learning Python code will love Mito as well. If you’re used to Excel and Google Sheets, you’ll find that using Python with Mito is a much better way to analyze data. Ready to learn more? Install Mito today!
|
OPCFW_CODE
|
#ifndef QSTR_HPP
#define QSTR_HPP
#include <string>
#include <type_traits>
#include <QString>
/// Convenience function for calling `QString::fromUtf8()`.
inline QString qstr(const char* cstr) {
return QString::fromUtf8(cstr);
}
/// Convenience function for printing addresses of pointers.
inline QString qstr(const void* vptr) {
return QString("0x%1")
.arg(reinterpret_cast<quintptr>(vptr),
QT_POINTER_SIZE * 2, 16, QChar('0'));
}
/// Alias function for converting `std::string` to QString.
inline QString qstr(const std::string& str) {
return QString::fromStdString(str);
}
/// Convenience function for calling `QString::number()`.
template <class T, class E = std::enable_if_t<std::is_integral<T>::value>>
QString qstr(T x, int base = 10) {
return QString::number(x, base);
}
#endif // QSTR_HPP
|
STACK_EDU
|
The setup: PyCharm 2.0.2 and SVN 1.6 via Apache HTTPD mod_dav and mod_dav_svn.
If I run something like svn co https://server/path/to/repo in the console, it asks for my certificate (the path to the PKCS#12 file), then its passphrase, then the username and password for HTTP Basic Authentication on the server, and then successfully pulls the working copy (including the svn:externals that are there). From the server's point of view it looks like this:
126.96.36.199 - - [07/Mar/2012:21:52:36 +0400] "OPTIONS /messaging HTTP/1.1" 401 464
188.8.131.52 - merlin [07/Mar/2012:21:52:41 +0400] "OPTIONS /messaging HTTP/1.1" 200 189
184.108.40.206 - merlin [07/Mar/2012:21:52:41 +0400] "PROPFIND /messaging HTTP/1.1" 207 649
If I try to access this same repository from PyCharm: in the configuration (File->Settings->Version Control->Subversion), I click Edit Network Options and specify the path to the certificates.
Then, via VCS->Checkout from Version Control->Subversion, the "SVN Repository Browser" window opens. I add the path to the repository, try to select the branch I need, and it says: "svn: OPTIONS of /messaging: 403 Forbidden (https://server)",
and this appears in the server logs:
220.127.116.11 - - [07/Mar/2012:21:04:54 +0400] "OPTIONS /messaging HTTP/1.1" 403 274
And that's all. The documentation says that PyCharm should show me a login/password dialog; it never appears, apparently because the server responds with 401 and then 403. And why it responds with 403 is not clear (maybe because PyCharm does not send the certificate? It never asked me for the passphrase).
How can I get around this?
|
OPCFW_CODE
|
The script often spoken and written by people involved in the spread of good practice goes along the lines of "we need to customise the process / protocol / idea so it fits best in our context", or "we need to expect the process / protocol / idea will be customised".
Part of me fully supports and understands this. Yet another part of me questions what we mean by adaptation. When we use the term, is it because:
- we didn't have the time and/or inclination to discover the important contextual variables and then design with and around these
- we are so in love with our solution (see earlier post about "inventoritis") that we expect others to copy it as it is, or maybe with just a few small tweaks
- we are too afraid to work through the adaptation process and how the solution might be adapted because we may discover the desired outcome may not be achieved
- we can't figure out how another place or team might use the process or idea so we defer to adaptation as the way round this
- we know the new process will require quite a lot of facilitation and support to make it happen so we use adaptation as a means for engaging others (so they don't think they are adopting someone else's idea) and as a means for garnering implementation support
- we can spread partly formed ideas and processes, or ones still in their innovative design state
So what is the adapting process? In a foreword by Richard Dawkins to Susan Blackmore's book about memes, there are a couple of examples which got me thinking.
- Are you expecting a copying process, knowing there will be some natural adaptation? Dawkins uses the example of copying a picture. One person copies a picture, passes it to another to copy, and so on. After a number of copies the picture may not resemble the original very much. In fact, I suspect some may start to put their own context, thoughts and ideas on the picture, thus rendering it something different both in visual status and in meaning.
- Do you intend someone to copy instructions? If I am shown how to make a complex origami figure using a set of 30 simple instructions, then I can teach someone else, using the same instructions. That person can then teach someone else and so on. In this case, most of the time, we can posit that after 20 teaching/replications the origami figure would look the same. By focusing on the instructions, someone can even correct a minor slip when they make their copy. However, if one of the instructions gets left out and this omission is replicated, then the paper figure will end up an entirely different shape.
So this brings me to issuing clinical guidelines and the expectation of their adoption and use, and sometimes adaptation for local use. Some questions I have are:
- Do we know what happens when we issue guidelines and say "make them local"? To what extent do they match the fidelity of the original in terms of outcome?
- What happens when one of the guidelines instructions is omitted (accidentally or purposefully)? How much of the original outcome is retained?
If you have any thoughts on this topic of adaptation, then please comment or email me.
|
OPCFW_CODE
|
Posting SNS messages to AWS_IAM authenticated Api Gateway endpoint
I've created SNS topic
I've created API Gateway endpoint that invokes Lambda function
I've created topic HTTPS subscription that points to API Gateway endpoint
Problem: everything works fine when AUTH=none, but when I enabled AUTH=AWS_IAM, neither subscriptions nor messages are delivered to my Lambda. They also won't show up in the Lambda or API Gateway CloudWatch logs, as is usually the case with authentication errors.
Questions:
What identity does the HTTPS endpoint present to AWS_IAM such that it is not allowed? (My first thought was that it relays the SNS poster's token, but that doesn't seem to be the case.)
I couldn't find any way to associate the HTTPS endpoint with any identity; is there a way?
There is lots of information about delivering SNS to SQS, or Gateway to SNS, but I couldn't find anything about achieving what I'm trying to do.
Is there any method to debug AWS_IAM authentication problems? The documentation I've seen advises to "check privileges", which is something I've been doing for many hours, but I have no more ideas.
I'd be glad to hear any ideas from you, thanks.
How did you subscribe the API Gateway endpoint to the SNS topic? I have read the docs (http://docs.aws.amazon.com/sns/latest/dg/SendMessageToHttp.html#SendMessageToHttp.prepare) but didn't understand much.
The same way you POST your messages to any HTTPS endpoint. You just need to add extra code to confirm the subscription based on event.Type, then trigger a GET on event.SubscribeURL. Sorry for missing your comment.
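A minimal sketch of that confirmation logic, assuming a Python Lambda behind the endpoint (the function name is illustrative, and `fetch` is injectable here purely so the logic can be exercised offline; SNS POSTs a JSON body whose `Type` field distinguishes confirmations from notifications):

```python
import json
import urllib.request

def handle_sns_post(body: str, fetch=urllib.request.urlopen):
    """Dispatch an SNS HTTPS delivery: confirm subscriptions, pass on messages."""
    message = json.loads(body)
    if message.get("Type") == "SubscriptionConfirmation":
        # Visiting SubscribeURL completes the topic subscription.
        fetch(message["SubscribeURL"])
        return ("confirmed", None)
    if message.get("Type") == "Notification":
        # Normal delivery: hand the payload to application code.
        return ("notification", message["Message"])
    return ("ignored", None)
```

Until the confirmation GET succeeds, the subscription stays in "pending confirmation" and no notifications are delivered, which is worth checking before suspecting IAM.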
As you may have seen in the docs, SNS can only do Basic/Digest Auth http://docs.aws.amazon.com/sns/latest/dg/SendMessageToHttp.html
There is a section in the docs about verifying the validity of the message but that is code you'd have to write yourself or lift from one of the SNS SDKs on the backend. There really isn't any way to get SNS to sign the request with AWS SigV4, unfortunately.
Why don't you let the Lambda function subscribe directly to the SNS topic (without going through API Gateway)?
That should be straightforward: https://docs.aws.amazon.com/sns/latest/dg/sns-lambda.html
That's most definitely a valid approach. The idea behind the design was to have a REST-only API open to any kind of client (not necessarily aware of AWS). Unfortunately, at this stage I cannot alter that design.
After reading the documentation for SNS again: https://docs.aws.amazon.com/sns/latest/dg/SendMessageToHttp.html I seriously doubt that SNS can send IAM authenticated HTTPS POST requests to API Gateway endpoints. The docs talk about Basic and Digest Authentication only, but IAM is something different. Another pointer is that there is no way to configure a principal for SNS POSTs. (Also, going through API gateway incurs additional costs that are unnecessary if the "client"/publisher is SNS. )
Here is the complete link which will help you in solving your authentication problem. https://aws.amazon.com/premiumsupport/knowledge-center/iam-authentication-api-gateway/
If it's a "check privileges" issue, then your IAM user doesn't have sufficient access to the resources to make any changes.
I've seen that link, and other parts of my API work fine. The only problem is that the SNS HTTPS subscription doesn't forward the credentials of the user who sent the SNS message. In other words, I don't know what my user is, or how to assign a user to the HTTPS subscription.
|
STACK_EXCHANGE
|
I'm asking this question because the Paraguay consulate just advised me that the visa starts from the working day of issue, in which case I have a problem with Peru. I hope you can answer this.
In keeping with what was pointed out above, I do not need any other documents apart from my Philippine passport and green card. Is this correct? If so, where in writing can I offer proof when I face immigration at the airport?
Honestly, I do not know either. The airport's website states the following about travellers in transit:
Slovenian citizens do not have to apply for a visa at a Peruvian Embassy or Consulate before entering the country. You get an entry stamp at the airport that allows you to stay the number of days written on it by hand by the immigration officer.
Getting a visa directly from Peru is, in my view, an impossible mission. I think you only have two options: either get in contact with the organizer of the event, asking whether any special arrangements have been made to resolve the visa troubles some awardees face.
Hi, I live in Kazakhstan and I'm a citizen of that state. I need a visa, but there are no embassies or consulates of Peru in Kazakhstan. Where should I apply? The closest is in Russia. Can I apply in the US for a visa as a tourist there?
In case you are able to apply for a visa for Peru, keep in mind that it does not make a good impression to have overstayed your visa for 8 months abroad. This situation may affect how the Peruvian authorities decide on your visa application.
My wife has a Japanese passport that expires at the end of May. We were hoping to go to Peru from April to May 4, but I just noticed that passports must be valid for six months from departure.
Is there any online visa form where I can fill it in, print it, and take it with me? I am very nervous that I've had the wrong impression that Brazilian citizens don't need a visa.
I think the Peru embassy is as corrupt as the Pakistani one; they never answer any request for information, no matter how many emails I send.
Good afternoon. I have Peruvian and US nationality, with different surnames (US passport and Peruvian passport) due to marriage. My question is: if I want to return to Peru for a few years, how do I enter with the Peruvian passport if I left the United States with the American passport? Is that possible or not?
While in many countries two free pages in a passport are a general requirement when applying for a visa, on the website of the Peruvian consulate in the UK ("") I cannot find anything about it.
|
OPCFW_CODE
|
Domino Online Meeting Integration (DOMI) had the following goals:
- Provide support to create, update and delete ad hoc meetings for:
- Microsoft Teams
- GoToMeeting
- Zoom
- Use modern APIs rather than talking to the vendors’ desktop applications.
- Work via mail template modifications rather than HCL Notes Standard Client (Eclipse) plugins.
- Support Sametime Meetings ad hoc meetings.
- Support Verse.
There is no limitation on Domino server version. However, to use DOMI there is a minimum Notes Client version: 11.0.1 FP3 or 12.0.0. A message box will warn if an earlier version of Notes is used.
GoToMeeting requires a paid account. Free accounts do not have access to run GoToMeeting REST APIs and an HTTP status code 500 will be returned.
The phase 1 delivery has the following limitations:
- No multi-lingual support.
- No support for repeating meetings. Domino’s flexibility for how meetings repeat is not available in other meeting services, so the “right” approach is unclear.
- Tokens need manually copying into Notes Client from a web application. OAuth process requires loading a browser and receiving either a code as a query string parameter for the meeting services or a header parameter for Sametime. Notes Client functionality can be called via
notes://protocol (Notes Client) or
notes+web://protocol (Nomad Web). However, the protocols currently ignore anything other than the base URL. Until that changes, it will not be possible to trigger an OAuth process from Notes Client without requiring the user to copy content back into the Notes Client.
- Tokens are not encrypted, but the Online Meeting Credentials form has a Readers field for the author, mail database owner, LocalDomainServers and [DOMIAdmin] role. The [DOMIAdmin] role can be leveraged for support purposes, if desired. The assumption is that PAs will create the Online Meeting Credentials form on behalf of their manager. If this is not your corporate policy, an additional role can be added to the form to expose it to PAs as well as the mail database owner. For group calendars, the assumption is that the “group entity” will not have its own account with the meeting provider, but individuals will use their own credentials.
- Token will need refreshing periodically. Scheduled agents can only be scheduled for a specific time, not a particular interval from deployment. Consequently, using a scheduled agent would place excessive load on the web application and could give the impression to third-party services of token harvesting. So the phase 1 approach has been to refresh programmatically when creating a meeting. If the refresh token has expired, the user will need to re-initiate the OAuth dance, and functionality has been added to support this.
- Zoom is the only provider that supports revocation of tokens, so this is the only Online Meeting Credential type that has a Revoke Token button. Customer testing will inform whether or not this is sufficient for a real-world implementation. It is possible, though by no means certain, that a more regular refresh might be required. But because a perfect approach is not obvious, it makes more sense to gather more information around usage before guessing at a better option. Metrics are gathered for token requests, which will provide some background on throughput.
|
OPCFW_CODE
|
Hello everyone, if you're interested in playing as an adventuring chef, check out my Chef class. This is the first homebrew I've done in a long time, but it's a concept that I have always been interested in. This is the second iteration of the chef, updated from its initial post on r/DnDHomebrew. Only a small amount of playtesting has been done so far. I'd love to hear any and all feedback you have. Be as critical as possible; I'm willing to defend my design choices and also change them if thoroughly convinced. Also, for any real-life gourmets out there, I have no idea about anything related to the culinary world and had to google a lot of this stuff. Please let me know of any inaccuracies.
I love it! This is what I always wanted to play as, thank you so much!
Edit: Okay, after further reading...
This... Is... Amazing!
This class works as 50% support and can be further enhanced to work as a Damage Dealer/Frontline Tank. I love the effects of the recipes as well as the ability to use Utensils as weaponry (Although I hoped that you could use a pan as a shield, coupled with a butcher knife that replaces a short sword, but no matter).
I hope to try it sometime in the future.
I really love the idea of the class. I've been looking everywhere for a while now to find a well-made homebrew for a D&D cook, and it looks like I found it!
I have just one concern with it: I'm prepping to play a cook at low level, and it seems like I won't have more than 4 or 5 recipes a day before hitting level 4 or 5. Without any cantrip-like abilities, I feel like the Chef will run out of steam pretty quickly and just be reduced to hitting things with his pot for the rest of the day. It might depend on how many combat encounters your DM builds into an adventuring day.
I think the idea of it is really fun! I might try it some day if I ever get to play as a PC.
There is one major thing which can be improved though:
When I read about the recipes in the Recipe list, the first thing I want to read is how it works mechanically, not the background to how it is cooked. Just by giving it a quick glance I should already know what it does mechanically in the first sentence.
Keep the sentence about how it is cooked, it's a fun side note. But have it in the end of the description to avoid irritation.
|
OPCFW_CODE
|
SAP Ariba API – FAQ and Best Practice on Developer portal and Gateway
In this blog, I will discuss best practices and frequently asked questions for the SAP Ariba API Developer Portal and gateway. I will briefly give a basic overview of the SAP Ariba Developer Portal and list several key processes to consider in order to avoid the common API issues related to the SAP Ariba Developer Portal and the API gateway itself.
⚡ If you are familiar with the Developer Portal or this blog is TL;DR, you can skip to the Access Token and Rate Limit validation section, which I think is a must read for API developers.
I will divide this into two sections: Developer Portal and API Gateway.
- As of October 2021, there are several SAP Ariba Developer Portals divided by region: United States, Europe, Russia, China, Kingdom of Saudi Arabia, United Arab Emirates, Australia, and Japan. Customer access depends on which APIs are going to be used and also on where the customer realm(s) reside.
- Ariba Network APIs and CIG are only available from United States Developer Portal.
- Customer access is granted upon registration to the respective portal, with customers being classified as organizations within the Developer Portal. There are two types of users in a customer organization: administrator and developer users. A customer organization must have a minimum of one administrator user, but additional administrator and developer users are allowed. Typically, the Designated Support Contact (DSC) will be assigned the administrator role.
- Customer organization’s administrator role is to maintain the users of their organization, along with finalizing any API application requests from the developer users by submitting the API application to SAP Ariba for approvals, and when approved, generating the client secret for API consumption. Administrator users can also delete existing application(s) within the customer organization.
- Ariba Network APIs typically require an ANID, and their use involves configuration within the Ariba Network Admin page. APIs related to specific realms will either be auto-approved or enabled by the API support team; some need to be configured via Intelligent Configuration Manager (ICM) by the customer realm administrator. Prior to requesting a specific API, the customer organization must have their valid realm(s) listed with the proper name and ANID in the Developer Portal.
Frequently Asked Question on Developer Portal:
Q: Can I hire an outside consultant to manage my organization/apps?
A: Initially, only Designated Support Contact (DSC) can be set as administrator user of customer organization. After the administrator has been set up, they can invite members to customer organization and later designate the new member to be an administrator of customer organization. This decision is made within customer organization’s discretion, not SAP Ariba. Non-administrator users (developers) within the organization can only create API applications and request for which API the application is for. Activities such as whitelisting IPs, requesting for API approvals, and secret key generation have to be done by the customer organization administrator users.
Q: I’m just testing the API as a partner or consultant, can a customer’s realm be added to my organization?
A: No. API application(s) must be created within the customer's organization. Access to the app can be shared by the administrator user of the organization with you as their partner or consultant. At their discretion, they can either add you to their organization as a regular member or an administrator, or simply share the app's secret key with you. This is because the customer is in control of API access to their realm(s), whether test, production, or development.
Q: Can we increase the API call limit for a specific application?
A: The SAP Ariba Customer Support team does not have the ability to increase the rate limit for an API or application. As of 2023, most of SAP Ariba APIs are designed as a back-end integration, not a front end one. Data retrieval should be stored in your local database, not a direct call showing real time data on the fly and repeated for different users/UIs. You might have to spread your calls over some days or weeks if you have a lot of data, so plan for this accordingly.
Depending upon your integration design, your account executive can reach out to the product owner, and approval to increase the call limit will be reviewed on a case-by-case basis. Rate limits exist to ensure that node performance is not affected.
Q: Should there be a different API application created for both test and production?
A: Yes. There is a policy which we follow: one application for one realm and one API, and this includes test realms. Every application is mapped to a particular API using the application apiKey. So, to prevent an authorization error, use a different application for each API.
Q: I completed my development in my test realm; can I migrate the API application to my production realm?
A: No. A separate API application must be created for your production realm. The only changes to your development code would be the apiKey in the header portion of the API call and the key used to generate the access token. For the Reporting API, if the view being accessed requires a custom field, the custom view will need to be recreated, because custom field names differ between the test and production realms.
Q: Are we affected when there is a Certificate renewal on Developer Portal?
A: Depending on your setup to consume the API, certificate update on Developer portal might affect your ability to consume SAP Ariba APIs.
A new certificate should be installed prior to the expiration date. One week prior to the update, a certificate update notification will be shown upon connecting to the Connect Portal, and a link to the certificate will be provided. New in 2023: please avoid certificate pinning, as it is no longer supported.
- Access Token validation
Make sure that there is a process in place to handle access token validation. This includes a process to generate a new access token utilizing the refresh token, and a process that stores a timestamp variable to check whether any given token needs to be refreshed prior to making the next/subsequent API call to the final endpoint. These two processes will avoid a 401 error – unauthorized access.
- Rate Limit validation
A process should be put in place where the available and remaining rate-limit values (per second, minute, hour, and day) for any given API endpoint are stored in variables, along with the timestamp of the last API call made. These stored variables must be checked prior to making the next/subsequent API call to that endpoint to avoid a 429 error – Rate Limit exceeded.
⚡ If you share the apiKey with other teams, these variables must also be updated by everyone who makes calls to the API endpoints using the same apiKey; otherwise your stored variables will not reflect the real remaining rate limit at the SAP Ariba API Gateway.
- Header size Limit
As of October 2021, the SAP Ariba API Gateway will only accept a maximum of 4 KB or 32 KB of data in the Request Header. Complying with this will avoid a 502 error – Bad Gateway. Check this blog: How to prevent 502 error with SAP Ariba APIs, and also this help manual on Tracing the execution of an integration flow in SAP Cloud Integration.
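The token-validation and rate-limit bullets above can be sketched together. This is an illustrative helper, not an official SAP Ariba client: the header names (`x-ratelimit-remaining-*`) and the `refresh_cb` callback are assumptions standing in for your real OAuth refresh call and the gateway's actual response headers.

```python
import time

class ApiCallGuard:
    """Check token expiry and remaining rate limit *before* each API call,
    instead of reacting to 401/429 errors after the fact."""

    def __init__(self, refresh_cb, skew_seconds=60):
        self._refresh_cb = refresh_cb  # callable returning (token, lifetime_seconds)
        self._skew = skew_seconds      # refresh a little before actual expiry
        self._token = None
        self._expires_at = 0.0
        self._remaining = {}           # window name -> (remaining, reset_epoch)

    def token(self, now=None):
        """Return a valid access token, refreshing it when close to expiry."""
        now = time.time() if now is None else now
        if self._token is None or now >= self._expires_at - self._skew:
            self._token, lifetime = self._refresh_cb()
            self._expires_at = now + lifetime
        return self._token

    def record_rate_headers(self, headers, now=None):
        """Store remaining-call counts from the last response's headers."""
        now = time.time() if now is None else now
        for window, seconds in (("second", 1), ("minute", 60),
                                ("hour", 3600), ("day", 86400)):
            key = f"x-ratelimit-remaining-{window}"
            if key in headers:
                self._remaining[window] = (int(headers[key]), now + seconds)

    def may_call(self, now=None):
        """True if no known rate-limit window is currently exhausted."""
        now = time.time() if now is None else now
        return all(remaining > 0 or now >= reset
                   for remaining, reset in self._remaining.values())
```

Everyone sharing the apiKey would need to share this state (e.g. in a small database table) for the counts to stay accurate, per the warning above.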
Frequently Asked Question on API Gateway:
Q: Why do I get 403 error: Access Denied. Please contact your Organization admin?
A: This error is returned because the application uses the IP whitelisting feature, which blocks use of the application unless calls come from the specified IP addresses. Your organization admin must add the proper IP addresses to the whitelist.
Q: Why do I get 400 error – grant type should not be null error?
A: This error is due to the missing grant_type value of openapi_2lo, which is required. This value should be spelled out in the body of the request: --data-urlencode 'grant_type=openapi_2lo'
Q: Why do I get error: You cannot consume this service?
A: The cause of this error is that the API application attempted to access API endpoint(s) outside the API that the application was requested for. In other words, a wrong apiKey.
|
OPCFW_CODE
|
#ifndef __FMA_ASM_CONSTANTNUMBEROPERAND_H__
#define __FMA_ASM_CONSTANTNUMBEROPERAND_H__
#include "Operand.hpp"
namespace FMA {
namespace assem {
// Operand that wraps a constant numeric value (an immediate).
class ConstantNumberOperand : public Operand {
public:
  ConstantNumberOperand(const uint64_t &number) : number(number) {};
  virtual ~ConstantNumberOperand() {};

  virtual std::string asString() const;
  virtual std::string getIdentifier() { return "#"; }
  virtual std::string getTypeName() const { return "ConstantNumber"; }

  // A constant number can be consumed directly as a constant value...
  virtual bool isConstant() const { return true; }
  virtual uint64_t asConstant() const { return number; }

  // ...or as an already-resolved address.
  virtual bool isResolvedAddress() const { return true; }
  virtual uint64_t asResolvedAddress() const { return number; }

  // It can also be wrapped as a (constant) symbol reference.
  virtual bool isSymbolReference() const { return true; }
  virtual symbol::ReferencePtr asSymbolReference() const;

  // Constants are read-only operands.
  virtual bool isWriteable() { return false; }
  virtual bool isReadable() { return true; }

protected:
  uint64_t number;
};
} // namespace assem
} // namespace FMA
#endif
|
STACK_EDU
|
Creating a Movies App using Retrofit and Picasso - Part 106 Oct 2015
This post is part of my talk "Getting Started with Android Development" at DEvcon. We will be building a movie app that consumes a REST API using Retrofit and displays images using Picasso. I see this project as a "Hello World" on steroids, so you don't need any Android experience to complete it. However, I do recommend having some programming background in Java or another object-oriented programming language. This is part one, where we will set up the project and a grid using a mock movie poster. In part two, we will bring in Retrofit and show real movie posters as well as detail information for each movie.
You may download the complete project here
Setting up Android Studio
We’re also going to need either an emulator or a real Android device. Android Studio comes with built-in emulators but unfortunately they’re very slow on most computers. As an alternative, you can use the free version of Genymotion which provides third-party emulators that are much faster than the standard emulators.
If you have any issues setting up Android Studio or Genymotion, either leave a comment on this post or check out StackOverflow.
Setting up the project
In Android Studio, click File > New > Project…
For Application Name, enter the name you would like to give your app. For Company Domain, you can either enter your personal domain, or your email replacing the @ with a dot. e.g.: “jose.gmail.com” This makes your package name unique and your app won’t run into any issues if you decide to publish it to the app store.
Make sure you select the location where you want your project to be created by clicking on the three dots to the right of Project Location.
Once you’ve made these changes, click Next to continue
I’m setting the API level to 16 but you can set yours to any version you’d like. Since we will be using the support library, you can go back as far as API level 7. My rule of thumb is to choose API level 16 for apps that target the US only, and API level 9 for apps that target international users. Lower API levels allow you to target more devices but you cannot use all the features of newer Android versions. API level 16 is Android version 4.1 and API level 9 is Android version 2.3.3.
Once you’ve selected your preferred API level, click Next to continue.
According to the Android Developer Documentation:
An Activity is an application component that provides a screen with which users can interact in order to do something, such as dial the phone, take a photo, send an email, or view a map. Each activity is given a window in which to draw its user interface. The window typically fills the screen, but may be smaller than the screen and float on top of other windows.
Let’s create the activity that will hold our grid of movies. Select Blank Activity and click Next when you’re ready.
You can leave the defaults or change the Title to the name of your app. Click Finish when you’re ready to continue.
In Android Studio, click Run > Run ‘app’(Shift + F10) to run your newly created app.
Not much going on. We have a TextView that says “Hello World.” We want to replace this TextView with a RecyclerView to show our movie posters. Before we can use RecyclerView, we need to include it as a dependency.
Open your build.gradle file. Not the one under your main folder, but the one inside your ‘app’ module. For me, that is PopularMovies/app/build.gradle
Add the appcompat v7 support library and the RecyclerView as dependencies. The support library might already be in your build.gradle; if it's not, add it under dependencies. Since we're already working in the build.gradle file, let's go ahead and also add Retrofit and Picasso as dependencies.
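The dependency block referred to here was not preserved in this copy of the post. A sketch of what it might have looked like with 2015-era support-library and library versions (the exact version numbers are illustrative):

```groovy
dependencies {
    compile 'com.android.support:appcompat-v7:23.0.1'
    compile 'com.android.support:recyclerview-v7:23.0.1'
    compile 'com.squareup.retrofit:retrofit:1.9.0'   // REST client, used in part two
    compile 'com.squareup.picasso:picasso:2.5.2'     // image loading
}
```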
Adding RecyclerView to your Main Activity
Depending on your version of Android Studio, you may have both a content_main.xml and an activity_main.xml, or just an activity_main.xml, under
app/src/main/res/layout. If you have both, open content_main.xml; if you only have
activity_main.xml, then open that one. The file should look similar to this:
Let’s replace it with our RecyclerView and give it an ID so we can get a reference to it from the Activity.
Note that we had to use the full package name for the RecyclerView. We will have to do this for any layout or widget that is not part of the Android framework. These rules apply even if the widget is provided in one of the Android support libraries.
Open the MainActivity, which should be located under
Let’s get a reference to our RecyclerView
If we run our app, we won’t see anything. In order for our RecyclerView to work, we need to give it an Adapter and a LayoutManager. The Adapter tells the RecyclerView what one single row or column should look like, and the LayoutManager tells the RecyclerView how to organize all those rows or columns on the screen.
Creating the Movie Class
Before we go any further, let’s create a POJO that represents a single Movie. We need this class before we can create the Adapter for our RecyclerView.
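The POJO itself isn't shown in this copy. A minimal sketch with only the fields this part of the tutorial needs; the field names are illustrative, and part two would map them to the real REST response:

```java
// Plain old Java object representing a single movie.
public class Movie {
    private String title;
    private String poster; // URL of the poster image

    public Movie(String title, String poster) {
        this.title = title;
        this.poster = poster;
    }

    public String getTitle() { return title; }

    public String getPoster() { return poster; }
}
```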
Creating the Movies Adapter
Now that we have our Movie class, we’re almost ready to create the Movies Adapter. Before we can create it, we need to create the layout that the Adapter will use to display a row or column of data. All we’re displaying is a movie poster, so our layout will only contain an ImageView for now.
Create a new layout file under app/src/main/res/layout, name it row_movie.xml, and add an ImageView to it.
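The row layout isn't shown in this copy. A minimal sketch of row_movie.xml; the `movie_poster` id and the scaling attributes are my assumptions:

```xml
<?xml version="1.0" encoding="utf-8"?>
<ImageView xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/movie_poster"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:adjustViewBounds="true"
    android:scaleType="centerCrop" />
```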
The RecyclerView only creates enough Views to smoothly display all the data needed on the screen at any given point. For example, if we have a 2x2 grid that scrolls vertically, the RecyclerView will create a maximum of eight views. The four views currently being shown, and two above and below so that when the user scrolls the views are ready to be shown. As the user scrolls, the RecyclerView uses already created views and binds new data to them.
This is where the ViewHolder comes in. Add the following inner class to your MainActivity:
We’re creating a class that extends RecyclerView.ViewHolder and holds references to the views in our row layout. In this case, just an ImageView. If we have a 2x2 grid, RecyclerView will create around eight views (and ViewHolders) to display any amount of columns and rows we throw at it instead of creating a new view every time the user scrolls.
Now we’re ready to create our Adapter. For simplicity, we will create it as an inner class of our MainActivity. Feel free to create a file a new file which is what you would normally do for a production ready app.
Applying the Adapter to our RecyclerView
We have an adapter and RecyclerView. Now we have to allow them to talk to each other.
There are three built-in LayoutManagers: LinearLayoutManager, GridLayoutManager, and StaggeredGridLayoutManager. LinearLayoutManager orders your views either vertically or horizontally, but only one at a time. GridLayoutManager allows you to show items in a vertical or horizontal grid with a specified column or row count. StaggeredGridLayoutManager behaves just like GridLayoutManager, but it also allows the grid items to fill gaps, which is very useful when your items might not be the same size.
We’re using GridLayoutManager with a span of 2 which will make our RecyclerView a grid with two columns. Then we’re adding some fake movies that we’ll replace later with real data.
If we run the app, we still won't see anything. Our fake movies need to have a valid image when we call Picasso. For now, we can modify our Movie class and change the Movie.getPoster() method to return an image from the internet.
Lastly, we need to request permission to use the device's internet connection. For that, we can add the following line to app/src/main/AndroidManifest.xml right above the application tag:
That’s it for part 1. In part two, we will consume a REST Api using Retrofit and show real movie posters. We will also add another activity to view more details about each movie.
|
OPCFW_CODE
|
Google App Engine Push Task - Using the DeferredTasks instead of a worker service warning
There is a warning about using DeferredTask in the documentation that says:
Warning: While the DeferredTask API is a convenient way to handle
serialization, you have to carefully control the serialization
compatibility of objects passed into payload methods. Careful control
is necessary because unprocessed objects remain in the task queue,
even after the application code is updated. Tasks based on outdated
application code will not deserialize properly when the task is
decoded with new revisions of the application.
I don't understand this. What does "careful control" mean? Does anyone have an example of how one can write a poor DeferredTask?
Java serialization follows certain rules that you need to be aware of. By default, any change to a Java class "breaks" serialization; objects serialized with the old class cannot be deserialized with the new class.
If you declare a serialVersionUID in your class (and don't change the value), then deserialization will be allowed even as you change the class. It will do what you normally expect if you're used to serializing to/from JSON and adding/removing fields to/from your classes. That is, data for fields removed from classes will be ignored, and new fields will have default values.
Some people hate Java serialization and some people love it. It's useful, and very convenient when working with the task queue. If you always declare a serialVersionUID you'll probably be fine... most mistakes will cause exceptions when you try to serialize data, and you'll figure those pretty quickly.
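A minimal, self-contained sketch of the rule being discussed: pin serialVersionUID in the payload class so that later field changes follow the compatibility rules above. The class and field names are illustrative, and the App Engine DeferredTask interface is replaced by plain Serializable so the example runs anywhere:

```java
import java.io.*;

public class PayloadDemo {

    // The data a deferred task would carry. Pinning serialVersionUID means
    // adding fields later stays compatible; changing the value would break
    // deserialization of tasks already sitting in the queue.
    static class TaskPayload implements Serializable {
        private static final long serialVersionUID = 1L; // never change this
        String entityKey; // what the task needs at run() time

        TaskPayload(String entityKey) { this.entityKey = entityKey; }
    }

    static byte[] serialize(Object o) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bos.toByteArray();
    }

    static Object deserialize(byte[] bytes) {
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The round trip mimics what the task queue does: serialize at enqueue time, deserialize (possibly against a newer class revision) at execution time.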
I see... hmm, well, in my deferred task I am just doing some datastore operations in its run() method and that is all. Does this whole serialization thing even apply?
Serialization applies to whatever data (fields) you have in your class. If you don't have any fields, it's easy! You should still declare a serialVersionUID in case you add fields in the future. And remember the package path and class name can't change (if you want to rename it, copy it and only delete the old version after you've drained the queues).
Ah ok. So then if I specify a serialVersionUID I should change its value with every new project version deployment?
I strongly strongly recommend reading up on the basics of Java serialization. No, you want to never change the value otherwise you won't be able to deserialize old tasks.
Shouldn't I update the value of serialVersionUID if I change the fields of the deferred task?
No. If you change the serialVersionUID, then serialized data with the old ID will not be able to be deserialized by the new class. At all.
If you change fields but keep the same id, then it will work according to some basic rules... fields that stay the same will be deserialized correctly. New fields will be initialized to default values. Deleted fields will just be ignored. If you're used to working with JSON it's basically the same.
|
STACK_EXCHANGE
|
Original Redmine Comment
This is probably a few months' effort; the big challenge is defining
I'll post more later or on request.
Original Redmine Comment
I gave this much thought some time ago in a CPU emulator I was working on. I'd like to take a stab at this in the near future. Let's make sure I don't completely misunderstand what's meant by "adding an event loop". The following is my understanding intermingled with the proposal.
The key ingredients are:
The @workunit@ is used by the event loop to pace the simulator so that it doesn't grab control for too long. The simulator can adjust its "loop count" or some equivalent to return control after certain amount of @workunit@s was done.
The event loop mostly calls the simulator's @ProcessEvent@ API with various events. Each event includes the maximum desired number of @workunit@s to be done by the simulator while processing the event.
When a @SimulatedEvent@ arrives, the simulator may be either ahead of or behind the given @SimulatedTime@ of the event. If the simulator is ahead of the event's time, it fails an assertion: it is the event loop's job to check the simulator's time and roll its state back should that time be past the event's time. When the event is ahead of the simulator's time, the simulator keeps running the simulation until its time matches the given @SimulatedTime@, then processes the event itself (e.g. a flip of a bit somewhere in its state), then resumes. In all cases, the simulator returns when it has done approximately the amount of work expected of it. If it didn't have the chance to consume the event, it keeps it in its own queue. Those queued events are a part of the simulator's state.
The simulator can also call APIs of the event loop. This is mostly @postevent(SimulatedTime, ...)@. It's up to the event loop to decide how to consume those events, i.e. they need to be "connected" to some receiver(s) (e.g. a GUI, a log file, ...).
Note that the simulator can advance @SimulatedTime@ arbitrarily far - this is the equivalent of, say, waiting for a timer to expire. What's important is that the simulator do a certain amount of work and return. When the event loop detects that there are events for the simulator that happened at a prior time, it rewinds the simulator's state sufficiently far back, then issues the events to it.
Since @simulatedTime@ is in real physical units, the event loop can correlate it with the timestamps of realtime events such as a UI interaction, or with timestamps of other simulations.
Since there's effectively no limit on how far into the future the events may be, it's possible to have fairly complex rendezvous scenarios. For example, suppose you have a Verilog simulation of two Hayes 9600 modems, with a serial interface on one end and an audio/phone line on another. You want to test sending a file using ZModem, implemented in C++ in the bench that runs the two simulations. Since ZModem can have large windows, there'll be large chunks of simulated time where the serial line's "newByteToModem1" events will be provided quite a while into the future. Thus the simulator thread for the "file sender" modem can run largely uninterrupted even if the modem itself doesn't implement a large queue in its Verilog implementation: the simulator itself does the queueing. If the test bench/harness wants to, for whatever reason, issue a line signal loss - whether in the future or in the past - it can certainly do so by dropping/altering the "analogData" events travelling between the modems.
I've been using a scheme similar to this one to run multiprocessor embedded hardware simulations, with the simulations written in C++ not Verilog, but they were cycle-accurate nevertheless, and it wasn't hard to separate the peripherals from the processor core: they'd run independently as much as possible, and only rendezvous when needed. In my implementation, the event loop didn't hard-rollback the simulator when it had an event in the past: it'd instead provide the event and the handle to a snapshot that predated the event, and then the simulator could decide whether the event would alter the state. Suppose that a serial line input came, but the CPU happened to have that UART disabled between the rollback state and the current state - no need to roll back even though the event was in the past. The simulator would keep a state counter for peripherals, incremented each time the externally visible state had changed, and would know that no state change occurred between state snapshots when the counter was the same.
The simulators were also informed if any events had no subscribers: in such case, even if the state of the UART might have changed, it was invisible, and thus the "state counter" of the peripheral could remain static. Those were micro-optimizations but had good results.
I would imagine that an approach that addresses these problems would be useful for Verilator, but I'm not necessarily suggesting doing it in this particular way. I do believe that performance demands the ability to minimize the synchronization between simulated entities, and to synchronize lazily i.e. only when needed. This is facilitated by the ability to inform the simulator of events that happen in the future (in @simulatedTime@ terms), as well as of events that happened in the past, where the simulator may need to roll back its state. I first had the rollback handled by the event loop and only later moved it to the simulator itself so that it could decide whether the rollback was necessary (if the event would not change the state, then there was no need to roll back, but only a simulator would know it).
But the driving design principle should be that the interface between the simulator thread and its event loop should be abstract, so that the simulator has only access to the "real world" via the event loop. The key invariant would be that if a simulator snapshot is propagated forward (i.e. work units get done) with the same external events, the resulting simulator state at any chosen future @simulatedTime@ is only a function of those events and of the snapshot state, and nothing else.
An event loop would exist for any thread where a simulator runs, and it'd need to handle cross-thread event propagation. Of course Verilog allows multiple threads of execution, and their fork-joins would need to be communicated via events, and all threads that modify state they mutually depend on must be treated as an entity of sorts as far as event propagation goes, since their states are intertwined and an event delivered to one thread, if not ignored, is visible via the shared state to other threads. This increases the rendezvous costs somewhat, so in general the Verilog models being simulated should prefer access to shared state in the manner of Hoare's CSP. I've sidestepped this issue in my CPU emulations by designing the emulated software to only interact in the CSP manner (implemented in a copy-free fashion).
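The interface sketched in this proposal might look roughly like the following in C++. The names @ProcessEvent@, @SimulatedEvent@, @SimulatedTime@, and @workunit@ follow the discussion above; everything else (the payload shape, the one-workunit-per-step pacing) is an illustrative assumption, not Verilator API:

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <queue>
#include <vector>

using SimulatedTime = uint64_t;  // real physical units, e.g. picoseconds

struct SimulatedEvent {
    SimulatedTime time;  // when the event takes effect in simulated time
    int payload;         // stands in for real event data, e.g. "flip this bit"
    bool operator>(const SimulatedEvent& o) const { return time > o.time; }
};

class Simulator {
public:
    // Accept an event and run up to maxWorkunits, applying queued events whose
    // time arrives. Returns after roughly that much work, as the pacing requires.
    void ProcessEvent(const SimulatedEvent& ev, int maxWorkunits) {
        // The event loop must roll us back before handing us an event in our past.
        assert(ev.time >= now_);
        pending_.push(ev);  // events we can't reach yet stay in our own queue
        for (int done = 0; done < maxWorkunits; ++done) {
            if (!pending_.empty() && pending_.top().time <= now_) {
                state_ ^= pending_.top().payload;  // apply the event to our state
                pending_.pop();
            }
            now_ += 1;  // here, one workunit advances simulated time by one step
        }
    }
    SimulatedTime now() const { return now_; }
    int state() const { return state_; }

private:
    SimulatedTime now_ = 0;
    int state_ = 0;
    // Unconsumed events are part of the simulator's state (needed for snapshots).
    std::priority_queue<SimulatedEvent, std::vector<SimulatedEvent>,
                        std::greater<SimulatedEvent>> pending_;
};
```

The key invariant described above holds here: given a snapshot of @now_@, @state_@, and @pending_@, replaying the same events always produces the same state at any future @SimulatedTime@.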
|
OPCFW_CODE
|
Hey guys! Here's a little demo of how to use clipPaths to mask images in Framer. Thanks to Kristoffer Lundberg for the heads-up. :-)
Check out Clippy as well, to easily create a bunch of different shapes: (triangles, hexagons...) http://bennettfeely.com/clippy/
It’s about time to repost this.
Haha. Yes this is totally useful
You are rotating the whole object, right? Did you find a way to animate the clip path itself, such as to mask and unmask a layer like in material design?
Ooh good idea Johannes Eckert - I did indeed rotate the layers entirely. But with the magic of Modulate you can: http://share.framerjs.com/8bh830qkg19h/ :)
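For anyone curious what Modulate is doing under the hood: it simply maps a value from one range into another. Here's a hypothetical re-implementation in plain JavaScript (not Framer's actual source):

```javascript
// Map `value` from the range [fromLow, fromHigh] into [toLow, toHigh].
// With limit = true, the result is clamped to the target range.
function modulate(value, [fromLow, fromHigh], [toLow, toHigh], limit = false) {
  let result = toLow + ((value - fromLow) / (fromHigh - fromLow)) * (toHigh - toLow);
  if (limit) {
    const lo = Math.min(toLow, toHigh);
    const hi = Math.max(toLow, toHigh);
    result = Math.min(hi, Math.max(lo, result));
  }
  return result;
}

// e.g. turning a 0–360° rotation into a 0–100% clip-path inset:
// modulate(90, [0, 360], [0, 100])  -> 25
```

That's why driving the clipPath from "change:rotationZ" works: every redraw, the rotation is remapped into the path's coordinate space.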
Benjamin holy! I will have to try that out as soon as I get to work
Thank you for sharing the last example with animated clipping paths — super helpful! I tried to achieve this with CSS animations a couple of days ago (https://www.facebook.com/groups/framerjs/permalink/652903048170103/) and hit a browser support problem, but your example is different: you are not _animating_ the path, you are actually changing it at every redraw. This is clever (don't know about performance, yet)!
What would be a good solution (or workaround) to get the path "animating" on its own (e.g. after click) and not depending on the change of another event?
Using "change:rotationZ" is super interesting and I saw it popping up a couple of times here on facebook. I still cannot find it in the documentation, but I guess it's fully supported now?
Killin' it guys. Keep it up!
OK wow — instead of using a custom function to draw this, I like the .on "change:x" event. This allows me to use easings on the change but also the state machine to change the direction mid-transition:
this uses the little red box as a helper layer to animate x between 0 and 100 and use that as the timeline for the clipPath change.
do you think that's clean enough to use? I have sweet performance during test
I'm hijacking this thread ,-)
here's what I found: I need to clip a layer with children, but this has different results in Safari and Chrome — Chrome will clip all children, while Safari has them sticking out.
but this seems to be a problem with Framer Layers. This codepen looks right in Safari: http://codepen.io/frischmilch/pen/MYLwzg
I tried to set _prefer2D on both layers, but with no difference. I can use this and preview in Chrome, but any hint what would make this work in Safari on iOS and Mac?
Hey Johannes - have you tried layer.force2d = true instead of _prefer2D? :)
^ Benjamin you beat me to it. ;)
that's it! Thank you guys! http://share.framerjs.com/8j36fhde0y6w/
This is very sweet: I am using this on a whole layer with children to start some view transition. I had to use two different magic layers (that's how I call these little helpers) to get around some logical/visual issues with the circle revealing too much when the image was still transitioning:
thank you for your help — this is an awesome trick and it works wonders!
|
OPCFW_CODE
|
This article will tell you how to launch analysis of an embedded project and how to work with the analyzer's report.
The PVS-Studio analyzer supports a number of compilers for embedded systems. The current version allows checking projects built with one of the following compilers under Windows, Linux, and macOS:
The installation procedure depends on the operating system you use in development. Under Linux, you can install the analyzer from either the repository, or the installation package.
For example, on a Debian-based system:
wget -q -O - https://files.pvs-studio.com/etc/pubkey.txt | sudo apt-key add -
sudo wget -O /etc/apt/sources.list.d/viva64.list \
sudo apt update
sudo apt install pvs-studio
Alternatively, install from the downloaded package:
sudo gdebi pvs-studio-VERSION.deb
Under macOS, you can use Homebrew for installation and updating:
brew install viva64/pvs-studio/pvs-studio
brew upgrade pvs-studio
Another option – installing from the dmg package, or unpacking from the archive manually.
Under Windows, you need to use the installer:
You can download installation packages for each supported system, as well as request a trial key, should you need it, at the "Download and evaluate PVS-Studio" page.
Once the installation is done, you need to enter the license key. The "How to enter the PVS-Studio License and what's the next move" documentation article describes this process in detail with regard to the different platforms.
Checking projects built for embedded systems is similar to checking those developed for Windows, Linux, or macOS.
Options available in Linux are described in the "Getting Started with the PVS-Studio Static Analyzer for C++ Development under Linux" article. Keep in mind that embedded projects are cross-compiled, and your compiler can have a non-standard name. Due to this, you might need to specify it when launching the analysis, which you can do via the --compiler (or -c) command-line option.
pvs-studio-analyzer analyze -c MyCompiler
Using it is necessary if the analyzer can't detect the compiler type, that is, if it issues the "No compilation units found" error.
Since the target platform differs from the development one due to cross-compilation, you'll probably also need to specify the target platform via the --platform key, along with the preprocessor type (--preprocessor).
Supported platforms: win32, x64, linux32, linux64, macos, arm.
Supported preprocessors: gcc, clang, keil.
Under Linux, the linux64 platform and the gcc preprocessor are the defaults.
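Putting these options together, a cross-compile check for an ARM target might look like this (the compiler name below is just an example — use whatever your toolchain actually invokes):

```shell
# Hypothetical invocation for an ARM project built with a GCC cross-compiler:
pvs-studio-analyzer analyze -c arm-none-eabi-gcc \
    --platform arm --preprocessor gcc \
    -o /path/report.log
```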
If you're using Windows, you can check your project in the compiler monitoring mode. To do so, use the "C and C++ Compiler Monitoring UI" utility, which comes with the analyzer. To start monitoring, go to the Tools menu and pick Analyze Your Files... This dialog will open:
Click the "Start Monitoring" button and start building your project. When the build finishes, click the "Stop Monitoring" button in the dialog window located in the bottom-right corner of the screen:
The main window of the "C and C++ Compiler Monitoring UI" utility allows you to view the analysis results.
Also, it is possible to start the analysis from the command line by using the CLMonitor utility. Here's the command which initiates monitoring:
CLMonitor.exe monitor
After the build, run CLMonitor again in analysis mode:
CLMonitor.exe analyze -l "<path>\out.plog"
The analyzer will check your project and save the results to the file specified via the -l key.
See also "Compiler Monitoring System in PVS-Studio".
To view the report under Linux, you need to convert the log file generated by the analyzer into one of the supported formats. Use the plog-converter utility to do this. For example, you can generate an HTML report, which allows you to view source code, with this command:
plog-converter -a GA:1,2 -t fullhtml /path/project.log -o /path/report_dir
Report conversion is described in more detail in the "Getting Started with the PVS-Studio Static Analyzer for C++ Development under Linux" article.
The Windows version also has a utility named PlogConverter, which is similar in usage to its Linux counterpart:
PlogConverter.exe <path>\out.plog --renderTypes=FullHtml --analyzer=GA:1,2
PlogConverter.exe D:\Project\out.plog -t FullHtml -a GA:1,2
You can also view reports in plog format with the "C and C++ Compiler Monitoring UI" utility via the File->Open PVS-Studio Log... menu command.
If you need to, you can export the report to one of the supported formats via the utility's File menu.
PVS-Studio classifies its warnings according to CWE and SEI CERT, which works quite well for static application security testing (SAST) of regular apps. However, embedded systems have different security requirements, covered by the specially developed MISRA standard. The current PVS-Studio version partially supports MISRA C and MISRA C++. You can see the regularly expanding list of supported rules here.
Using MISRA rules when checking non-embedded projects is usually a bad idea, due to the standard's specifics. In most cases, if the code wasn't initially MISRA-oriented, the check will result in many false positives and general noise. Thus, MISRA rules are off by default.
To enable MISRA under Linux, run the analysis with the -a key and pass a numeric parameter, according to the desired mode. This parameter is a combination of bit fields:
-a [MODE], --analysis-mode [MODE]
MODE defines the type of warnings:
1 - 64-bit errors;
2 - reserved;
4 - General Analysis;
8 - Micro-optimizations;
16 - Customers Specific Requests;
32 - MISRA.
Modes can be combined by adding the values.
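For instance, General Analysis (4) plus MISRA (32) gives the value 36 used in the example below. A quick illustration of how the bit fields combine (this snippet is just arithmetic, not part of the analyzer):

```python
# Analysis-mode bit fields, as listed above.
SIXTY_FOUR_BIT = 1
GENERAL = 4
MICRO_OPT = 8
CUSTOMER = 16
MISRA = 32

# Combine modes by adding (OR-ing) the values:
mode = GENERAL | MISRA
print(mode)  # -> 36
```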
Example (with GA and MISRA rules enabled):
pvs-studio-analyzer analyze -a 36 -o /path/report.log
Also, you need to tell plog-converter to include MISRA warnings in the resulting report:
plog-converter -a MISRA:1,2,3 -m misra ....
Under Windows, you can use the "C and C++ Compiler Monitoring UI" utility's settings:
See the "PVS-Studio: Support of MISRA C and MISRA C++ Coding Standards" article to learn more about MISRA support.
In this article, we've briefly discussed the PVS-Studio analyzer's features for checking embedded-targeted projects. If you need more information on using the analyzer, I recommend that you refer to the following articles:
Date: Feb 21 2022
Author: Vladislav Stolyarov
OPCFW_CODE
|
Please try this in 4.7.6
-Add an exposed/public float variable to a Character template BP, such as third person
-Every tick print the value of this float variable
-Go into Play In Viewport (PIE)
-Alt Tab so you can go back and edit the value (WHILE the game instance is still running)
-Watch the printed value constantly change as you edit the value, DURING gametime / PIE / runtime!
# The Bug in 4.8
Try to do the same in 4.8, you can’t, because the blueprint defaults are now greyed out while the game instance is running
Is there a way to restore the 4.7 functionality of editing default values during runtime?
Editing defaults during a PIE instance made polishing a game EXTREMELY fast!
Thanks for the report, but I’m not able to reproduce this in 4.8.0. To test, I created a new Actor Blueprint with an editable float variable and dropped it in the level. During PIE, I selected the Actor in the World Outliner. In the Details panel, the variable appeared as expected, and I could set the value by typing or using the slider. Can you test this in a new project in the 4.8.0 binary for me and see if it happens there as well? Or am I missing a step somewhere? Thanks!
We haven’t heard back from you in a while on this. Are you still able to reproduce this issue? I’m going to resolve this for now, but if you still see this issue in 4.8.0 please feel free to respond with the information requested above and we’ll keep looking into it. Thanks!
Ah, I see where the confusion was. I’ve been using 4.8 for a while now, and I forgot that had changed. Thanks for the clarification!
# Thank You Ben!
Thank you for your answer Ben!
Prior to 4.8 I could edit the default settings of the Blueprint asset itself; your method describes and demonstrates that I can now only edit properties of instances of the blueprint.
Again, Ben’s solution for anyone who is curious:
“. During PIE, I selected the Actor in the World Outliner. In the Details panel, the variable appeared as expected, and I could set the value by typing or using the slider”
I have noticed that when you change the value on the instanced variable, it doesn’t update the default value in the original blueprint when it is edited. Is there a way we can change this default value while the game is running in PIE?
Edit: I noticed there is a way to Set instance to defaults, but this doesn’t help me as there are parts of the C++ code that change these default values which I don’t want to have changed at the start. I also get a crash when trying to set the default values to the new instanced ones as there are a few tick functions trying to access my HUD which doesn’t seem viable. I was hoping there was still an old way to edit these values.
Changing the value on an instanced blueprint’s variable only changes that instance’s value. This is true whether in PIE or simply in the editor. There isn’t a way to convert an instance’s values into the default blueprint’s values.
Please open a new post for the crash you’re getting so we can investigate that fully. Include as many details as you can, like reproduction steps and crash logs. Thanks!
What is the way to get the values I am changing saved and kept as the default values of the blueprint (as seemed to happen when opening the blueprint and changing values pre-4.8)?
For now, I am having to:
Open the instance of the blueprint.
Change the values while running a new editor window.
Take a mental note/save of the values.
Stop the game.
Open the actual blueprint and add the values into the default values.
This seems quite laborious and as Rama mentioned - “Editing defaults during a PIE instance made polishing a game EXTREMELY fast!”
I have managed to find a solution that may work for you, though I would highly recommend changing these variables outside of PIE to prevent potential errors from occurring. If you select an instance and make changes, then open the Edit Blueprint dropdown menu, there is an option to Apply Instance Changes to Blueprint, which should override the variables within the blueprint. Please try this and let me know if it is what you are looking for.
I have seen that option and it is basically what I’m looking for. However, due to my code also changing variables from the defaults in the blueprint, it means that all those changes will be applied to the defaults (which I don’t want to happen). I also get a crash when attempting this on my HUD blueprint, as my character class is ticking, it is trying to access the HUD class and I’m assuming something happens with that instance the character is using when I try to apply defaults which causes the HUD to become NULL.
Unfortunately this is the only option that we have available. In regards to the crash, please make a new post in the bug reports section so we can assist you specifically with this error. In the crash report, please include your callstack, crash logs, and what steps you are taking to reproduce this on your end. Additionally, if there is any information that you find may be helpful or pertinent, please include it in the new crash report.
Thanks for the help, I’ll gather some info on the crash, but like I said it doesn’t really matter as my defaults are changed while playing as well.
This option however used to be available until I upgraded to 4.8 as Rama mentioned in the OP. If you could give some direction to where it greys out the defaults while you PIE and I can attempt to disable it via the source code?
Most unfortunate. This bug is unnecessary, and very irritating. You CAN STILL EDIT DEFAULT VALUES by PASTING TEXT INTO THE PROPERTIES (in 4.8.1 at least). Let us edit them outright!!
The process for changing settings is now:
See how settings look.
Make changes to blueprint.
See how settings look… and waste A LOT of time because of this.
The tweaking of settings never really ends, you just run out of time.
I now have MUCH less time to polish every aspect of the game I work on.
Changing values in the Details panel of the instance is not possible if you use a ChildActorComponent to spawn the actor/blueprint you are trying to modify because… the editor won’t let you see its details. Using ChildActorComponents for multiple cameras is necessary because view targets are specified by Actor and not Component. If I make the camera a component of the actor and try to modify the FOV on the instance… the postprocess settings are still grayed-out.
Dune: if I find where we can disable the disabling of the UI in the editor code, I will post back.
|
OPCFW_CODE
|
rsc's Diary: ELC-E 2017 - Day 1
This is my report from the first day of Embedded Linux Conference Europe (ELC-E) 2017 in Prague.
My plan for the first timeslot at 09:00 was to see my colleague Marc Kleine-Budde's talk about "OP-TEE - Using Trust Zone to Protect Our Own Secrets", but the room was so crowded that I didn't get a seat any more (embedded folks seem not to be too happy about joining keynotes early in the morning and prefer tech talks...). So I started the day with a somewhat longer coffee break... Obviously, more conference attendees were also not amused, so the talk will be repeated on Wednesday.
Linux Powered Arctic Buoys
In the second talk, Satish Chetty reported about "Linux Powered Autonomous Arctic Buoys".
The speaker is operating some of these self-built and low-cost sensor carriers in Alaska as part of research activities and talked about his experiences operating an embedded Linux / ARM based platform at temperatures as low as -40 °C. The devices are being used for research on arctic ice meltdown in summer (though "summer" is not what you usually think of when it comes to arctic locations).
One of his most important topics was saving as much power as possible in the measurement device: it schedules one measurement cycle, then shuts down with the help of a microcontroller, and then wakes up hours later to start the next cycle.
Surprisingly, it turned out that standard mobile phone battery packs are optimal as power supplies, as these can easily be replaced by untrained staff (such as bear hunters) under harsh environmental conditions, and can even be charged in the research station with standard equipment. However, some other problems, such as cables bitten through by polar foxes or polar bears sitting on the sensor carriers, could not be solved so far.
In the next talk, Walt Miner gave an insight into the Automotive Grade Linux project of the Linux Foundation. Unfortunately, the talk was very high level and didn't go into much technical details. The main message was to encourage the automotive community for more collaboration.
During the last slot before lunch, Anna-Maria Gleixner and Manuel Traut reported about their experiences with Jenkins and libvirt to automate their testlab for the Preempt RT testing. It turned out that many of their requirements are quite similar to our own (which motivated us to develop labgrid).
Unfortunately, it seems like test automation is currently a hot topic, and everyone looks around, doesn't find a project that fits his need and starts a new one.
This impression continued during Andrew Murray's BoF session "Farming Together": many of the attendees operate a test farm, but almost nobody uses the same technology. Agreement can be achieved in the upper parts of the stack: people use Jenkins and LAVA there (we do that as well, for automatic builds and for our kernelci.org testlab).
Tim Bird encouraged the crowd to create a mailing list and a wiki page, which was created during the BoF session on elinux.org. On that page, contact data of interested people will be collected.
The "Bash the Kernel Maintainers" BoF by Laurent Pinchart dealt with the well known problem to get kernel maintainers into accepting certain patches. One problem is that, even small contributions, sometimes fall into some kind of "black hole" and nobody answers any more. People only recognise after months that the path got lost. In many cases, it is especially a problem for small consulting companies, who might have moved on to other topics or even don't have the respective hardware any more when feedback finally arrives. Hans Verkuil proposed to talk about this kind of issues openly on the mailinglists; maintainers might be more responsive if they know those issues.
The conclusion of the session was: ping politely, and continue talking to the maintainers.
Another highlight for the Pengutronix team was Jan Lübbe's talk about "Automation beyond Testing and Embedded System Validation". The background of these activities is that Pengutronix quite often integrates many (constantly changing) open source components for customer projects. As updating and rollout of updates doesn't work without intensive testing, it seems to be time for more automation in that area. However, it was an important criterion to use the same remote control technology for automatic tests as well as for interactive work at the developers' desks, as many of the embedded targets are only available in small prototype quantities.
Other features which could not be fulfilled with the existing frameworks are testing of updates (including booting of devices during the test) or concurrently remote-controlling more than one device (i.e. video streaming box + receiver). Jan gave a short overview of the currently existing test frameworks and explained why they didn't turn out to be a solution for our requirements.
labgrid provides a Python based infrastructure. Remote control devices such as power switches or USB serial converters are abstracted as drivers. An important feature is the "strategies": they can be used to instruct the system to bring the device-under-test into a certain state (i.e. the Linux userspace commandline). If necessary, the system is automatically booted or provisioned with new firmware behind the scenes. The tests themselves are written in pytest; the output is generated as JUnit XML files and can, for example, be visualized in Jenkins.
For labs which consist of more than one PC, the labgrid coordinator handles the coordination of the available resources.
Besides the test functionality, labgrid can also be used in production lines, to put the firmware into the devices and do the production tests.
At the end of the talk, he showed a demo and automated the upgrading of a qemu based simulated hardware with the help of RAUC.
|
OPCFW_CODE
|
Information Analytics & Knowledge Science Certification Course
Posted by marketer on May 14th, 2020
The new Certification in Business Data Analytics (IIBA®-CBDA) recognizes your ability to effectively execute analysis-related work in support of business analytics initiatives. It is a good time to build your expertise in data science and deploy machine learning algorithms to improve business decision making. The ability to help organizations and enterprises sharpen their decision making is what has driven the immense demand for data science across the business and industrial sectors. After successfully completing a data science course you are qualified for a good package; even as a fresher, there is great scope for starting your career in the field of data science.
Any certification course in data analytics will add a feather to your cap, thanks to the hands-on experience with the tools (Python, R, SQL) gained throughout the course. Learn from world-class data science practitioners. You will learn Python programming on Anaconda, which hosts popular Python libraries like scikit-learn, NumPy, Pandas and others.
As far as data science programs in Bangalore go, Acadgild is among the top-ranking online bootcamps for data science. Use data science to catch criminals, or find new ways to volunteer personal time for social good. The skills focused on in this program will help prepare you for the role of a data scientist. Applicants should have a bachelor’s degree in a STEM or business field and a GPA of 3.0. Students who do not have a technical background take a placement exam to determine which prerequisite computer science or math courses they must take.
Learning data science is hard. This course is best for beginners and gives you full exposure to every topic of data science and machine learning. Lessons are delivered online in eight-week terms, and most students take one or two courses per term. Discover the top tools Kaggle members use for data science and machine learning.
6 out of 10 developers are gaining or trying to acquire skills in machine learning and deep learning. The biggest benefit of data science training is that the payoff isn’t only fast; it also has enormous long-term benefits. There are modules on machine learning and on statistical concepts such as decision trees, regression, clustering, classification, and so on.
This business analytics training in Bangalore will give you more confidence if you are interested in pursuing a career in the track of DWH, BI, data science & BPM. Still, the problem is that despite immense job availability, there is a lack of skilled data workers in the area of analytics. The course is designed to help anyone interested in data science learn it, as long as they have basic knowledge of programming and math and decent reasoning ability.
One must possess hands-on experience with the tools used in the field of data science to become job-ready for a career in it. Analytical talent is in high demand, so learning data science skills can open doors to new career opportunities. Collect, model, and deploy data-driven systems using Python and machine learning.
ExcelR — Data Science, Data Analytics Course Training in Bangalore
49, 1st Cross, 27th Main BTM Layout stage 1 Behind Tata Motors Bengaluru, Karnataka 560068
Phone: 096321 56744
Hours: Sunday — Saturday, 7AM — 11PM
Jazz Education and Information
The Official Barry Harris Website for Jazz Education and Information
|
OPCFW_CODE
|
Can lexapro help muscle tension is lexapro better, than paxil for anxiety. Anti anxiety medications lexapro when do lexapro withdrawals go, away buspar compared to lexapro. Lexapro and sleeping tablets lexapro head feels weird chemist warehouse, lexapro different generic, lexapro natural remedy for lexapro. Can lexapro help with premature ejaculation cloudy urine lexapro switched from lexapro to, effexor lexapro, linked to suicidal thoughts. Therapeutic dose lexapro ocd lexapro celexa side, effects does benadryl interact with lexapro lexapro, and cialis together.
Change from paxil to lexapro what happens if you mix zoloft and, lexapro are citalopram and lexapro, the same thing. What is the best way to come off, lexapro not gaining weight on lexapro. What are the, symptoms of taking lexapro promethazine with codeine and, lexapro lexapro side effects racing thoughts lexapro oxalate kidney stones. Lexapro or, prozac for pmdd natural remedy, for lexapro lexapro side effects solutions does lexapro come in, 15 mg. 5mg lexapro stopping what, happens when you quit taking lexapro lexapro first week, depression lexapro vs, paxil social anxiety can lexapro cause breathing problems lexapro strattera combination. Lexapro eyelid twitch what are, the symptoms of taking lexapro lexapro how quickly does, it work 40 mg of lexapro too much.
compare lexapro to paxil
What's better for, anxiety zoloft or lexapro phenylephrine lexapro. Safe way to wean off lexapro missed my dose of, lexapro. Does lexapro help with, self confidence lexapro, with sam e is lexapro, and xanax the same thing lexapro heat. Lexapro and abilify bipolar lexapro initial increase anxiety lexapro skin problems wellbutrin xl lexapro combination will lexapro show up on drug, screen. Lexapro for addiction pristiq and lexapro compared can you take effexor, xr and lexapro together weaning from lexapro lexapro prozac together why lexapro stops working. Lexapro and lithium together what happens if u stop, taking lexapro compare lexapro, to paxil therapeutic, dose lexapro ocd lexapro and aleve. Can lexapro, slow down metabolism lexapro and asa not, gaining weight on lexapro can, lexapro cause male infertility. Lexapro switch to, zoloft lexapro and bad dreams lexapro withdrawal 2012 is lexapro used for menopause.
Lexapro for adderall withdrawal what happens when you come, off of lexapro is, lexapro used for weight loss. Fluoride, in lexapro prilosec, and lexapro. Does lexapro help you concentrate taking lexapro and, drinking alcohol drug interactions between, lexapro and wellbutrin lexapro or zoloft better phentermine, topamax and lexapro the difference between citalopram and lexapro. What, if you forget to take lexapro generic for lexapro, 2013 unable to sleep on, lexapro lexapro damage brain. Lexapro 5 mg withdrawal symptoms is, lexapro like cymbalta what medicine is similar to lexapro lexapro, lower blood pressure is, it ok to take fish oil with, lexapro.
lexapro stomach pain
Lexapro keep you awake cough, medicine lexapro accidentally took too much lexapro. Lexapro how long can you take, it what is stronger lexapro or zoloft. Can i take lexapro on an empty, stomach lexapro, or buspar treating adhd with lexapro sexual side effects lexapro buspar and lexapro for anxiety. Lexapro and anxiety, benefits whats better zoloft or lexapro lexapro cause gerd lexapro, and test anxiety what happens if you get pregnant while taking lexapro. Dramamine, for lexapro withdrawal lexapro sleeplessness lexapro and attention deficit disorder lexapro minimum effective dose lexapro cause nightmares. Lexapro peeing a lot how does lexapro relieve anxiety zoloft, celexa lexapro lexapro, balding can you take, lexapro with percocet lexapro what to do, if miss a dose.
Lexapro and vigorous exercise lexapro for, pms anxiety wellbutrin, or lexapro which one is, better. Everything i need to know about lexapro can i chew lexapro. Smoking cigarettes while on lexapro lexapro can cry lexapro made me feel great lexapro, scalp irritation. Does, lexapro have to be taken daily lexapro and, celebrex interactions how to, wean off lexapro safely vyvanse lexapro interaction. Is lexapro generic as, good lexapro and spironolactone interactions mucinex lexapro interaction lexapro day 6 what is the drug, lexapro for.
wellbutrin and lexapro pregnancy
Acne after, lexapro half, dose lexapro cymbalta vs lexapro libido. How, long does anxiety last with lexapro lexapro sore legs feeling sad on lexapro. Clindamycin and lexapro lexapro, lucid dreams lexapro weird thoughts lexapro, tmax celexa, vs lexapro for anxiety. Lexapro for alcoholism, depression can i, break a lexapro in half should i, take lexapro morning or night can you take lexapro for ocd difference between loxalate and lexapro. Imitrex and lexapro, drug interactions negatives, of taking lexapro lexapro detox side, effects acne after lexapro. Lexapro and, ondansetron lexapro side effects ejaculation, problems can you take, lexapro and trazodone together increased anxiety, after starting lexapro. Does lexapro mellow you out 20 mg lexapro for anxiety lexapro sedating lexapro, depression effectiveness not gaining weight on lexapro. Is, citalopram hydrobromide the same as lexapro lexapro, and no feelings is, there anything better than lexapro lexapro 10 or 20 depression treatment with lexapro.
Lexapro, for melancholic depression how, to wean off 20 mg, of lexapro. Does, lexapro have serotonin lexapro, and brittle nails. Smoking cigarettes while on lexapro can you switch from lexapro, to zoloft lexapro delusions can u get addicted to lexapro. Taking lexapro and, drinking alcohol lexapro 5mg reviews prozac vs, lexapro weight loss how does lexapro relieve anxiety. Getting off wellbutrin and lexapro symptoms of, suddenly stopping lexapro does, lexapro make you lose or gain, weight does, lexapro give you hot flashes.
|
OPCFW_CODE
|
Port copy_properties_to from etcd cookbook into core chef
def copy_properties_to(to, *properties)
properties = self.class.properties.keys if properties.empty?
properties.each do |p|
# If the property is set on from, and exists on to, set the
# property on to
if to.class.properties.include?(p) && property_is_set?(p)
to.send(p, send(p))
end
end
end
I think it would be better to have it shaped more like:
declare_resource(:remote_file, name, &block).apply_properties(include: [ :group, :owner ], exclude: [ :headers ])
Maybe include is the obvious enough default to have it be def apply_properties(*include, exclude: [])
Kind of think it should hang off of Chef::Resource so it method chains better like that. And rename it to not conflict with existing cookbooks. Could also write a minimum viable compat_apply_properties cookbook to patch up old chef-client versions so other cookbooks could use it (not the entire crazysauce of compat_resource, but do-just-one-small-thing-only).
@coderanger points out that this could be more useful as well if it was a ..._from API and didn't hardcode self:
declare_resource(:remote_file, name, &block).copy_properties_from(self, [ :group, :owner ], exclude: [ :headers ])
remote_file "name" do
copy_properties_from new_resource, [ :group, :owner ], exclude: [ :headers ]
end
And should return self for method chaining...
And https://github.com/chef-cookbooks/aws/blob/master/resources/s3_file.rb#L117-L139 is Exhibit A for where it's useful. @coderanger also points out that it smells like aws_s3_file should subclass remote_file, but we can't do that from custom resources.
Also, @coderanger points out it could take a regexp for include/exclude instead of array of symbols.
Just an overall :+1: to copy over from Slack. I think I am more in favor of limiting the auto-copy to props defined on both sides but not super strongly as there is value in being explicit :)
Yeah, it clearly makes it easier to slap a resource over sub-resources and not have to think about carving out the exact properties interface. At the same time I wonder about spooky-action-at-a-distance effects when core chef resources gain new properties, or when the wrapping custom resource has its API extended by properties that someone didn't intend to apply to the sub-resource. Allowing both the include+exclude arguments to be omitted and just having it sort out what it can by itself has a lot of whip-it-up-itude going for it.
Note that for back-compat purposes the property_is_set? check helps a lot there, because you can introduce a property that may even have a default value, and if it's not set on the custom resource it won't get passed to the sub-resource, and if you're on an old version of chef-client that doesn't implement that property nothing will happen.
For include/exclude, the way it works in my head is that excludes trump includes. That allows you to be lazy and construct a list of all the properties on the custom resource, and then case-by-case do excludes for sub-resources that don't want certain properties -- so it's just [includes] - [excludes] in set arithmetic. Of course it is just syntactic sugar for doing exactly that.
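The "excludes trump includes" resolution described above can be sketched in a few lines of Python. This is an illustration of the set arithmetic only, not the actual Chef implementation; the property names below are hypothetical stand-ins for resource properties:

```python
def resolve_properties(all_props, include=None, exclude=()):
    """Decide which properties to copy: start from `include`
    (defaulting to every property), then subtract `exclude` --
    excludes always trump includes."""
    selected = set(include) if include is not None else set(all_props)
    return selected - set(exclude)

# Hypothetical remote_file-style property names, for illustration.
props = ["owner", "group", "mode", "headers"]
print(resolve_properties(props, exclude=["headers"]))
# {'owner', 'group', 'mode'} (set, so order may vary)
```

Note that a name listed in both include and exclude ends up excluded, which matches the "excludes trump includes" rule.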
Oh so circling back around on this:
# If the property is set on from, and exists on to, set the
# property on to
if to.class.properties.include?(p) && property_is_set?(p)
to.send(p, send(p))
end
If the user specifies something like atomic_update on the custom resource but we're running chef 11.5 (bear with me on the contrived example) and atomic_update does not yet exist on the remote_file resource we really need to blow up with a useful error message, otherwise the user is silently getting old behavior. The property_is_set? check means that if the user does not set atomic_update on the custom resource we won't even try to apply it to the sub-resource, so on chef 11.5 we wouldn't blow up in that case. If the user of the custom resource really intends to use atomic_update on chef >= 11.6 and then ignore it on chef < 11.6 then they need to gate that with atomic_update false if Chef::Resource::RemoteFile.respond_to?(:atomic_update) or something similar.
|
GITHUB_ARCHIVE
|
Cesare Tinelli's Bio
Cesare Tinelli received a Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 1999 and is currently a F. Wendell Miller Professor of Computer Science at the University of Iowa. His research interests include automated reasoning, formal methods, software verification, foundations of programming languages, and applications of logic in computer science.
Professor Tinelli has done influential work in automated reasoning, in particular in Satisfiability Modulo Theories (SMT), a field he helped establish through his research and service activities. His research has been funded both by governmental agencies (AFOSR, AFRL, DARPA, NASA, NSF, and ONR) and corporations (Amazon, Facebook, General Electric, Intel, Rockwell Collins, and United Technologies). His work has appeared in more than 90 refereed publications, including articles in such journals as Artificial Intelligence, Information and Computation, the Journal of the ACM, the Journal of Automated Reasoning, Formal Methods in System Design, Logical Methods in Computer Science, Theoretical Computer Science, and Theory and Practice of Logic Programming.
He is a founder and coordinator of the SMT-LIB initiative, an international effort aimed at standardizing benchmarks and I/O formats for SMT solvers. He has led the development of the award-winning Darwin theorem prover and the Kind 1 and Kind 2 model checkers. He has co-led the development of the widely used and award-winning CVC3 and CVC4 SMT solvers, and co-leads the development of their successor cvc5. He also co-leads the development of StarExec, a cross community web-based service for the comparative evaluation of logic solvers.
He received an NSF CAREER award in 2003 for a project on improving extended static checking of software by means of advanced automated reasoning techniques; a Haifa Verification Conference award in 2010 for his role in building and promoting the SMT community; and a CAV Award in 2021 for his pioneering contributions to the foundations of the theory and practice of SMT. He has been an invited speaker at conferences and workshops (including CADE, CAV, ETAPS, FroCoS, HVC, NFM, TABLEAUX and VSTTE) and has given invited lectures at numerous institutions worldwide (including UC Berkeley, CEA, CMU, ENS, EPFL, IMDEA, Inria, MIT, MPI, MSR, NYU, Oxford U., Stanford U., and VERIMAG) and international summer schools.
He is an associate editor of the Journal of Automated Reasoning and a founder of the SMT workshop series and the Midwest Verification Day series. He has served on the program committees of more than 70 automated reasoning and formal methods conferences and workshops, as well as on the steering committees of CADE, ETAPS, FTP, FroCoS, IJCAR, and SMT. He was the PC chair of FroCoS'11 and a PC co-chair of TACAS'15.
He has worked extensively with researchers and developers from companies (including General Electric, Intel, Microsoft, Rockwell Collins, and United Technologies) and governmental agencies (NASA and Onera). His students and postdocs have later taken positions at such agencies, institutions and companies as AWS, Apple, CEA, Comsoft, EPFL, GE Global Research, MathWorks, NASA, OcamlPro, Oxford U., Stanford U., Two Sigma, UCSB, U. of Innsbruck, and U. of Tokyo.
|
OPCFW_CODE
|
Fresh Windows 7 install without Windows DVD
I have two PCs. One with a purchased copy of Windows 7 Ultimate, and the second with an OEM of Windows 7 Home Premium (not Personnel). I have the Ultimate DVD, but can't find the Home Premium DVD.
Everything from now on relates to only the PC with the Windows 7 Home Premium OEM.
I did a fresh install several years ago, so must have had something, but I don't recall what I did. I have the Windows 7 HP recovery DVDs (quantity of 2), but don't want to install all the HP stuff.
How can I initiate a fresh Windows installation?
You download the ISO and burn a copy of the disk yourself. You won't be able to use the Ultimate disk to install Windows 7 Professional.
@Ramhound. Think the HP System Recovery DVDs will work? Where would you recommend downloading the ISO? Thanks
@uuser2161003 - Download it from Microsoft's digital partner. There already exists a question with links on this very website.
That being Digital River? http://superuser.com/questions/272141/how-can-i-reinstall-windows-7-if-i-lost-my-installation-dvd
I believe you mean Professional and not Personnel. Correct me if I'm wrong.
Either way, there is an easy way to maybe do this without any downloads. First you'll need to copy the contents of the mounted DVD into an USB drive (formatted in FAT32 and empty) by running:
xcopy DVDLetter:\*.* /e/h/f USBLetter:\
Once it's done, browse the flash drive, and inside the sources folder you'll find a file called ei.cfg. Delete it. That will allow you to choose any version available on that medium, from Ultimate down to the most basic one. When you're done, just boot from the USB and you'll be able to install the version you require. If it doesn't boot, try another USB drive; I've had problems with certain flash drives before.
Hope this helps.
Note: This only works with a regular Windows 7 Ultimate installation DVD.
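For illustration only, the same copy-then-delete steps can be sketched with Python's standard library. The paths are placeholders, and for a real bootable USB you would still normally use the xcopy command above (its /h flag copies hidden files, which matter for booting):

```python
import os
import shutil

def prepare_install_media(dvd_root, usb_root):
    """Copy the Windows 7 install files to the USB root, then drop
    sources/ei.cfg so the installer offers every edition on the medium."""
    # dirs_exist_ok lets us copy into an already-formatted (existing) root.
    shutil.copytree(dvd_root, usb_root, dirs_exist_ok=True)
    ei_cfg = os.path.join(usb_root, "sources", "ei.cfg")
    if os.path.exists(ei_cfg):
        os.remove(ei_cfg)
```

The function name and arguments are made up for this sketch; the only substantive steps are the recursive copy and the removal of sources\ei.cfg.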
Sorry, I meant Home Premium.
It should still work :)
So, Windows 7 Home Premium will also be on the Windows 7 Ultimate CD?
From my experience, yes.
This link describes the same. Thanks! http://www.mydigitallife.info/how-to-select-any-edition-or-version-sku-of-windows-7-to-install-from-single-edition-dvd-disc-media-or-iso/
If you have an answer to http://superuser.com/questions/736279/modify-and-copy-dvd, please advise. Thank you
@user1032531 - The duplicate question already goes into detail
|
STACK_EXCHANGE
|
On behalf of a customer I would like to discuss the following topic:
There is something going on with our project that I would like to share with the community, to check whether someone has faced something similar, and to hear from people more experienced than us about which ways we could follow.

Our store has two kinds of products: "basics" and "news". Following internal commercial strategies, these products have different commercial conditions (delivery date, minimum order value), which is why our ERP creates one order for each kind of product. On the front end they can be bought together, but there are some verifications we need to perform before the order is placed, and the user needs to select two shipping methods and make two payments. In short, we need to have two orders on OCC as well, so we can use the verification schema already implemented to check these two orders.

The problem is that we have no clue how to split one shopping cart into two orders before the checkout process. We could customize some widget to split the products into two orders using some product property and apply all the verifications on each order, but that would be really expensive and we would have a problem with site performance; as a B2B store we have really big orders.

Has anybody ever seen something like this? Maybe someone has experienced something similar and can share the lessons learned. Thanks in advance for your help.
Here some additional information form the customer:
"We need to split the order throughout the whole order process (webpage, cart, checkout and placing order - I need two orders as the result of one order process)."
I would love the B2B split-order capability at cart and all the way through checkout for Carolina Biological. It would create a much improved customer experience. We have two situations where this would be helpful.

1. Right now, we have district-level people or school-level admin people who do the ordering for multiple areas on the same campus or for different campus locations. They place one order after another in succession, rather than placing one order, because they need multiple shipping destinations and separate orders for each department.
2. We also have restricted items, required shipping dates for certain kinds of living materials, and seasonal items. It would be great to split those items into separate orders for tracking purposes for the person doing the ordering on behalf of the end users. Often they get notification of the items that ship immediately, and they communicate that to the end users. When they later get communication about the same order shipping the items that were "future/special" at the time of the order, it is often missed or deleted from the inbox, because they remember the order as having already shipped. The end users are of course not happy with Carolina. We are seen as falling down on the job.
I would hope value would be seen in including this in both on-prem and Cloud.
|
OPCFW_CODE
|
Describe Ether and Ethereum | How to buy Ethereum (ETH)
Ether is like the "fuel" for Ethereum, a digital platform. Created in 2015, Ether is used to buy things and pay for services, just like other online currencies. But it's more than just money – it helps build and run apps on the Ethereum network.
People sometimes use the word "Ethereum" to talk about both the whole platform and Ether itself.
To truly understand why Ether is significant, let's delve into what drives the Ethereum platform.
Ethereum is a decentralized software platform built on blockchain technology. It is open-ended and supports peer-to-peer contracts called Smart Contracts, along with Decentralized Applications, also known as DApps.
Smart contracts enable users to exchange value directly without the need for a middleman. They are agreements with clear terms and established protocols to ensure their enforcement.
In contrast to traditional contracts written in human languages and enforced by legal courts, smart contracts are coded instructions that a computer can execute. This coding eliminates ambiguity and ensures precise execution.
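As a rough illustration of "coded instructions that a computer can execute", the idea can be sketched as a toy escrow in Python. This is not Solidity or real Ethereum contract code, and all the names and terms here are invented; the point is only that the contract's conditions are checked mechanically, with no room for ambiguity:

```python
class ToyEscrow:
    """Toy stand-in for a smart contract: funds go to the seller
    only once delivery is confirmed, with no middleman."""
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False
        self.released = False

    def confirm_delivery(self):
        self.delivered = True

    def release(self):
        # The condition is code, not prose: it either holds or it doesn't.
        if self.delivered and not self.released:
            self.released = True
            return (self.seller, self.amount)
        return None

deal = ToyEscrow("alice", "bob", 10)
assert deal.release() is None   # nothing happens before delivery
deal.confirm_delivery()
print(deal.release())           # ('bob', 10)
```

On the real Ethereum network the analogous logic would be compiled contract bytecode whose execution every node verifies, rather than a Python object.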
Conventional software applications frequently depend on central authorities to store data and carry out operations on that data. This reliance involves placing trust in the central authority.
Decentralized Applications (DApps) can utilize smart contracts within the Ethereum network to achieve decentralization. These smart contracts have the ability to store data, and the Ethereum network ensures that all data operations adhere to the rules set by the smart contract code. Essentially, the integrity of the data is maintained without the need for a central trusted party.
To contribute to the Ethereum network, developers require the cryptocurrency Ether to develop and operate applications. Ether is essential for covering transaction fees and computational services.
Individuals have the ability to send Ether to other users, and developers can create smart contracts facilitating the purchase, sale, receipt, and transmission of Ether.
Ether is created through a process called mining on the Ethereum platform, where individuals known as "miners" validate transactions.
Miners receive Ether as a reward when they successfully validate a batch of transactions.
Transactions of Ether are recorded and confirmed on a digital public ledger known as the blockchain
How to buy Ethereum
Choose a well-established platform such as Koinpark, a global cryptocurrency exchange.
Create an account on the chosen cryptocurrency exchange Platform.
Deposit funds, such as fiat currencies, into your exchange account. Most exchanges accept USD, INR, etc.; follow Koinpark's instructions to deposit funds.
Find Ethereum (ETH) on the Koinpark platform, where you can trade Ethereum. Look for trading pairs, often indicated as ETH to INR, to buy and sell Ethereum.
Decide the quantity of Ethereum you want to purchase and place an order.
Keep your cryptocurrency in a secure wallet. Koinpark provides Parkwallet, which is generally used to keep your cryptocurrency safe and secure.
After checking out various cryptocurrency exchanges in India, I prefer Koinpark. It's a global cryptocurrency exchange platform accessible in India that supports trading of the popular cryptocurrency Ethereum.
Koinpark offers a range of trading pairs, such as ETH to INR and USDT to INR, providing users with multiple choices.
|
OPCFW_CODE
|
Sbt to publish to both Sonatype and Bintray
I have a scala library that I just converted from gradle to sbt.
By default it works by publishing to Sonatype upon release. However I also want to publish it to Bintray. The problem is that Bintray sbt plugin is overwriting the original publish to Sonatype.
I know I can sync to Sonatype and Maven central repository via Bintray. However I still like the way Sonatype handle the validation and check before I really can release it to Maven central.
How do I publish to both Sonatype and Bintray from my release server (not relying on Bintray to sync for me)?
While using Bintray's Central sync all the validations are still happening on the Sonatype side. Bintray using the OSSRH APIs to trigger the staging and the publishing.
@JBaruch thank you for your comment. Will I still be able to drop the release from sonatype?
Once you click the sync button we go all the way, without the additional staging approval step (if I understood your question correctly).
That's my point, I want to be able to stop it at sonatype staging repo
You can use Bintray pre-publish as your staging repo and make sure you check that everything looks good there.
@JBaruch, unfortunately that's a more manual process than what I have now. Sonatype does all the automated checks of whether my artifacts are compliant with Maven Central. Why would I want to do a manual check just for the sake of putting my artifact in Bintray?
What I really want is for me to publish my artifacts to both Sonatype and Bintray right from my release process. Once Sonatype does the automated check, I can release my artifacts in Sonatype and then click publish in Bintray.
I don't understand. If you want to automate everything, you publish to Bintray and sync to Maven Central all using REST APIs. If Sonatype checks fail, you'll get the error from Bintray. If Sonatype checks pass, you'll have the artifacts in Bintray and Central.
What do I miss?
If the Sonatype checks fail, I'll get an error from Bintray. But will my artifacts still be released in Bintray if the Sonatype checks failed? What I want is: when the Sonatype checks pass, I have the artifacts in both Bintray and Central; but if the Sonatype checks fail, the artifacts are released on neither Bintray nor Central.
I ran into the same problem and found a setup that works.
sbt-bintray supports a JVM property flag, sbt.sbtbintray. When it is set to false, sbt-bintray will not overwrite the publishTo setting (and a few others). So to publish to both Sonatype and Bintray, just run sbt publish once with the flag set to true and once with it set to false.
However, I also use the sbt-ci-release plugin, which also overrides the publishTo setting (after bintray), but does not offer a flag to disable this. To workaround this, copy what sbt-bintray does into your own build:
publishTo := {
val old = publishTo.value
val p = (publishTo in bintray).value
if (BintrayPlugin.isEnabledViaProp) p
else old
}
Also see the build:
https://github.com/JetBrains/intellij-compiler-indices/tree/master/sbt-idea-compiler-indices
But by running twice your version will be different, right?
If you use git tags for auto-versioning as in sbt-ci-release, or set the version manually, it will be the same version.
I'll try in my next release you suggestion. Thank you
|
STACK_EXCHANGE
|
ippfind can't detect Xerox WorkCentre 3025BI printer
Hello! I executed the script ./dnssd-tests.sh but it failed to detect the Xerox WorkCentre 3025BI printer.
Output:
For debugging, I executed it with the command bash -x:
bash -x ./dnssd-tests.sh XRX9C934EFA5A2B
+ test 1 -lt 1
+ TARGET=XRX9C934EFA5A2B
+ test -x ../tools/ippfind
+ test -x ./ippfind
+ IPPFIND=./ippfind
+ test -x ../tools/ipptool
+ test -x ./ipptool
+ IPPTOOL=./ipptool
+ test -f ''
+ PLIST='XRX9C934EFA5A2B DNS-SD Results.plist'
+ echo 'testing\c'
+ echo 1,2,3
+ grep c
+ ac_n=-n
+ ac_c=
+ test '' = _fail2 -o '' = _fail4 -o '' = _fail4.1 -o '' = _fail5.3 -o '' = _fail5.5 -o '' = _fail5.5.1
+ cat
+ total=0
+ pass=0
+ fail=0
+ skip=0
+ start_test 'B-1. IPP Browse test'
++ expr 0 + 1
+ total=1
+ echo -n 'B-1. IPP Browse test: '
B-1. IPP Browse test: + echo '<dict><key>Name</key><string>B-1. IPP Browse test</string>'
+ echo '<key>FileId</key><string>org.pwg.ippeveselfcert11.dnssd</string>'
+ ./ippfind --literal-name XRX9C934EFA5A2B _ipp._tcp,_print.local. --quiet -T 5
+ test 1 = 0
++ expr 0 + 1
+ fail=1
+ end_test FAIL
+ echo FAIL
FAIL
It appears that the command ./ippfind --literal-name XRX9C934EFA5A2B _ipp._tcp,_print.local. --quiet -T 5 is unable to locate the printer.
Similarly, the command ./ippfind _ipp._tcp,_print.local. -T 5 also produces no output.
However, using ippfind without any flags successfully detects it:
./ippfind
ipp://XRX9C934EFA5A2B.local:631/ipp/print
Note that the output lacks the ._ipp._tcp.local component.
Here is the output from the driverless command:
driverless
ipp://Xerox%20WorkCentre%203025%20(XRX9C934EFA5A2B)._ipp._tcp.local/
I would greatly appreciate any help or suggestions regarding this issue.
If you look closely, you will see that the commands failing to find your printer are looking for the IPP Everywhere "_print" subtype of the IPP type ("_ipp._tcp") in the domain ".local.": "_ipp._tcp,_print.local.".
Specifying this subtype in the parameter to ippfind will cause ippfind to search only for IPP Everywhere printers, not all IPP printers. Any printer supporting the subtype will respond. (DNS-SD subtypes are discussed in RFC 6763 section 7.1 - https://www.rfc-editor.org/rfc/rfc6763#section-7.1).
Either the printer isn't claiming to be capable of IPP Everywhere certification conformance, or its IPP Everywhere "capability" has been disabled somehow in firmware. If Xerox were to certify this printer, it would need to have its firmware advertise its IPP Everywhere capability using the "_print" subtype.
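Per RFC 6763 section 7.1, a subtype browse queries the name <subtype>._sub.<service>.<domain>, so ippfind's "_ipp._tcp,_print" notation corresponds to a different DNS name than plain "_ipp._tcp" — which is why a printer that doesn't advertise the "_print" subtype answers one query but not the other. A small sketch of how the two browse names are composed (my own helper, not part of ippfind):

```python
def dnssd_browse_name(service, domain="local.", subtype=None):
    """Build the DNS name used for a DNS-SD PTR browse query.
    With a subtype, RFC 6763 s7.1 prescribes <sub>._sub.<service>.<domain>."""
    if subtype:
        return f"{subtype}._sub.{service}.{domain}"
    return f"{service}.{domain}"

print(dnssd_browse_name("_ipp._tcp"))                    # _ipp._tcp.local.
print(dnssd_browse_name("_ipp._tcp", subtype="_print"))  # _print._sub._ipp._tcp.local.
```

A printer that registers only under _ipp._tcp.local. will answer the first query but stay silent on the second.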
|
GITHUB_ARCHIVE
|
One of the primary research themes in the IS&UE RCE is developing and evaluating 3D user interfaces for virtual, augmented, and mixed reality. In particular, we are focused on exploring how to bring 3D user interface techniques and concepts into mainstream video games by leveraging the existing body of work in 3DUI and VR and devising new strategies and methodologies for bringing spatial 3D interaction to gamers. Additionally, we are interested in the continued learning and understanding of how humans interact with and are affected by 3D interfaces.
With the release of a variety of new motion controllers for both PC and console gaming, 3D user interfaces are becoming commonplace in modern games. The focus of this work is to explore how to best utilize 3D spatial interaction in the video game domain by examining existing interaction techniques and creating novel ones as well as understanding how these interfaces affect users.
The focus of this project is to explore how technologies that have traditionally been found in virtual reality, but are now becoming mainstream in the commercial marketplace, affect user performance in video games. Specifically, we are interested in whether 3D stereo as well as head and hand tracking improve a player's ability to learn to play video games and achieve better scores. In addition, we are exploring the overall user experience when players use these technologies.
We are systematically exploring recognition of 3D gestures using spatially convenient input devices. Specifically, we are examining existing and developing new algorithms to improve 3D gesture recognition accuracy as well as exploring how many gestures can be reliably recognized with video game motion controllers.
RealDance investigates the potential for body-controlled dance games to be used as tools for entertainment, education, and exercise. Through several dance game prototypes built with Nintendo Wii Remotes and depth cameras, RealDance investigates visual, aural, and tactile methods for instruction and feedback.
3D object selection is highly demanding when (1) objects densely surround the target object, (2) the target object is significantly occluded, and (3) the target object is dynamically changing location. Most 3D selection techniques and guidelines were developed and tested on static or mostly sparse environments. In contrast, games tend to incorporate densely packed and dynamic objects as part of their typical interaction. With the increasing popularity of 3D selection in games using hand gestures or motion controllers, our current understanding of 3D selection needs revision. We present a study that compared four different selection techniques under five different scenarios based on varying object density and motion dynamics. We utilized two existing techniques, Raycasting and SQUAD, and developed two variations of them, Zoom and Expand, using iterative design. Our results indicate that while Raycasting and SQUAD both have weaknesses in terms of speed and accuracy in dense and dynamic environments, by making small modifications to them (i.e., flavoring), we can achieve significant performance increases.
After identifying how these selection techniques worked across the various scenarios, we pursued the development of a framework that would allow for dynamically choosing a selection technique in real time, based on contextual information. This Auto-Select framework was designed to allow the easy drop in of any selection technique, making it easy to use. We performed two additional user studies that measured the performance of such a framework against the standard techniques by themselves. Our results showed that while promising, there are many factors that affect how well the framework will operate. Among these are the similarity of techniques used, transitioning between them, and providing user feedback. These factors have been targeted for additional future research.
Presently, we are researching the construction of single selection techniques that operate under different modes, where each mode allows the technique to work well in different conditions. This is essentially taking two separate techniques and internally merging them, while eliminating the disparity between their operations to ensure a smooth transition between the two modes.
We are exploring the use of low cost commercial technology to create an interface capable of navigating scenes using the full body and natural interactions.
In the RealNav project, we made use of Wiimote hardware coupled with a Kalman filter to overcome the challenges of controlling a quarterback in an American Football video game. The goal was to support natural movements, such as real-time recognition of the user moving inside of a small area, along with common gestures such as running and throwing, so that a user could easily pick up the system and be recognized by it.
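The Kalman filtering mentioned above can be illustrated with a minimal one-dimensional filter that smooths noisy controller readings. This is a generic sketch with assumed noise parameters, not the actual RealNav implementation.

```python
# Minimal 1D Kalman filter smoothing noisy position readings.
# Generic sketch: process_var and measurement_var are assumed values,
# not RealNav's tuning.

def kalman_smooth(measurements, process_var=1e-3, measurement_var=0.1):
    estimate, error = measurements[0], 1.0
    smoothed = [estimate]
    for z in measurements[1:]:
        error += process_var                      # predict: uncertainty grows
        gain = error / (error + measurement_var)  # how much to trust z
        estimate += gain * (z - estimate)         # correct toward measurement
        error *= 1 - gain                         # uncertainty shrinks
        smoothed.append(estimate)
    return smoothed

print(kalman_smooth([1.0, 1.2, 0.9, 1.1, 1.0, 1.05]))
```

The gain adapts automatically: early on the filter trusts measurements heavily, then it settles and damps out sensor jitter.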
In a similar series of projects, we made use of a combination of the Microsoft Kinect and Sony PlayStation Move to accurately track a soldier in training without adding more hardware than they would already be carrying. We were able to recognize where the user was moving and aiming, and basic gestures such as walking in place, crouching, and jumping. All of this allowed for a natural and immersive environment for the soldier to operate in. This was further expanded with the inclusion of multiple Kinects, which could track the soldier no matter what direction they were facing and removed the need for the PlayStation Move to track their orientation.
We are also studying how full-body interfaces can be used for video games: using a Wizard of Oz approach and the commercial game Mirror's Edge, we determined what natural movements users perform when asked to do a task. In this way we learned, for a series of full-body tasks, what the average user would do when asked to perform the task without much other guidance. This produced a series of guidelines to be used in future projects.
We present a prototype system for interactive construction and modification of 3D physical models using building blocks. Our system uses a depth sensing camera and a novel algorithm for acquiring and tracking the physical models. The algorithm, Lattice-First, is based on the fact that building block structures can be arranged in a 3D point lattice where the smallest block unit is a basis in which to derive all the pieces of the model. The algorithm also makes it possible for users to interact naturally with the physical model as it is acquired, using their bare hands to add and remove pieces. We present the details of our algorithm, along with examples of the models we can acquire using the interactive system. We also show the results of an experiment where participants modify a block structure in the absence of visual feedback. Finally, we discuss two proof-of-concept applications: a collaborative guided assembly system where one user is interactively guided to build a structure based on another user's design, and a game where the player must build a structure that matches an on-screen silhouette.
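The lattice idea can be sketched as snapping acquired depth points to cells of a 3D point lattice whose spacing is the smallest block unit; cells receiving enough points stand in for block pieces. This is a hedged illustration of the principle, not the paper's Lattice-First algorithm, and the hit threshold is an assumed value.

```python
from collections import Counter

# Snap 3D points to a lattice whose spacing is the smallest block unit.
# Cells receiving at least min_hits points are treated as occupied.
# (Illustrative sketch of the lattice principle only; min_hits is an
# assumed value, not part of the published algorithm.)

def occupied_cells(points, unit, min_hits=2):
    cells = Counter(
        (round(x / unit), round(y / unit), round(z / unit))
        for x, y, z in points
    )
    return {cell for cell, hits in cells.items() if hits >= min_hits}

points = [(0.01, 0.02, 0.0), (0.03, -0.01, 0.02),  # near cell (0, 0, 0)
          (1.02, 0.01, 0.0), (0.98, 0.03, -0.02)]  # near cell (1, 0, 0)
print(occupied_cells(points, unit=1.0))
```

Working in lattice coordinates is what makes incremental updates cheap: adding or removing a block only toggles one cell.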
Re: [GNOME VFS] gob inside gnome-vfs ...
- From: Ian McKellar <yakk yakk net>
- To: Michael Meeks <michael ximian com>
- Cc: Seth Nickell <snickell stanford edu>, vfs <gnome-vfs ximian com>, gnome-hackers gnome org
- Subject: Re: [GNOME VFS] gob inside gnome-vfs ...
- Date: 19 Jun 2002 15:08:24 -0700
On Wed, 2002-06-19 at 02:06, Michael Meeks wrote:
> Hi Ian / Seth,
> On Tue, 2002-06-18 at 19:31, Ian McKellar wrote:
> > Because we want to use GObject in GnomeVFS now and GOB is a sensible way
> > of writing GObjects (not the only sensible way, but one of them).
> Hmm. Have you looked at what Nautilus does with GNOME_CLASS_BOILERPLATE
> ? that and GNOME_CALL_PARENT substantially reduces the amount of GObject
> boilerplate code you have to type - to the level that I would be
> surprised if gob buys you anything - except perhaps writing your headers
> and accessors for you.
I haven't looked at that stuff. Does it take care of signals and
inheritance and stuff for us too?
> I can only believe that gob makes much sense, if you intend to
> radically re-structure your API frequently - which I would view as a
> pretty terrible idea (wrt. bin/API-compat :-). Is it possible to
> consider choosing another sensible way ?
Well gob lets me think about what I'm designing rather than worry about
what I'm typing. I can design the API to be easy to use rather than easy
to implement because when I'm writing GObjects by hand I tend to worry
about implementation complexity - because the more complex the
implementation is the more mistakes I make.
> > I asked Seth this too and apparently that's what George recommended. Are
> > you complaining that we didn't add a dependency? ;-)
> No - in fact, I loathe gob - as you can probably tell :-) and keeping
> it well out of the dependency stack is a great idea from my perspective.
Why do you loathe gob? I haven't heard any real arguments against it
apart from "it's a preprocessor" which seems like a bit of a red herring
when we depend on both orbit-idl and the C preprocessor so heavily in
GNOME and gob does a better job of keeping out of my way than those two.
> > I'm not sure how much discussion is really required for a dependency that's
> > only there if you build from CVS. It wouldn't affect the development
> > platform or even packagers at all. But with the cvs include we don't
> > even change the build process for people who build from CVS.
> Well - the thing is that should I decide to re-write all of bonobo in
> Pascal, with some built in P2C processing so we ship generated C, would
> you start getting twitchy ? How about writing it in lisp and then
> converting it ? I hope someone would wrestle me to the ground and batter
> some sense into me :-)
If there were decent arguments I probably wouldn't be too upset. If it
made the source easier to maintain, more readable and accessible to more
developers then I would probably be in favour of it. I don't think (as
you can see) that straight C is the best language for everything. I
think that it's lacking a lot if you want to do object oriented
programming - such as objects for a start :) Gob lets us express the
objects we want to write simply. They're not too complex, they're
> I understand one of the large factors in the sawfish / metacity
> decision, was un-maintainability due to it being written in a foreign
> language. Given that we have a very broad cross section of hackers
> actually doing the maintenance on gnome-vfs, looking at the ChangeLog in
> recent times I see:
> George Lebl, Jody Goldberg, Kristian Rietveld, Ian McKellar,
> Alex Gravely, Seth Nickell, Mark McLoughlin, myself, Anders
> It would be great if we could all be taught / explained the need for
> gob to, very carefully, and in words of one syllable. I am personally
> prepared to spend some considerable time de-gobbing it / doing whatever
> you think it saves you manually, if only to ensure consistency,
> debugability, maintainability etc.
C+GObject is a foreign language. I'm slowly learning it. It *is* a
barrier to many people getting involved in GNOME development it seems. I
personally love the GObject object model - it's one of my favourites. I
hate the syntax though. GOB is (in my opinion) just a better syntax for
C+GObject. If you're familiar with C+GObject it doesn't take much at all
to understand GOB. If you're familiar with both GObject and Java then
gob is trivial. Additionally gob is one of the best documented parts of
GNOME. It's certainly better documented than GObject.
> For example trying to build HEAD to take a butchers at the generated
> code, I just got:
> In file included from gnome-vfs-method.gob:18:
> gnome-vfs-method.h:42: syntax error before `typedef'
> In file included from gnome-vfs-method.gob:22:
> ../libgnomevfs/gnome-vfs-module.h:46: parse error before `*'
> cc1: warnings being treated as errors
> ../libgnomevfs/gnome-vfs-module.h:46: warning: type defaults to `int' in
> declaration of `vfs_module_init'
> ../libgnomevfs/gnome-vfs-module.h:46: warning: data definition has no
> type or storage class
> ../libgnomevfs/gnome-vfs-module.h:70: parse error before `*'
> Which leaves me gob-smacked, [ ;-> ], now of course, if that was normal
> C I could instantly fix the problem [ presumably
> GNOME_VFS_METHOD_GET_CLASS is not being automagically substituted for
> something meaningful ]. Presumably that means that HEAD is not building
> - or I got something badly wrong
I think it was just an error when Seth was gobifying gnome-vfs-module.
That line in the GOB file sticks out like a sore thumb (well, like a
white line in a bunch of blue lines). I removed that line and the error
went away. The nice thing about GOB is that there's less "automagical
substitution" like in GObject, and more expressive syntax.
> [ incidentally we're trying to keep
> HEAD always building, and always working, always ].
Perhaps we'll have to adopt the old Bonobo policy that developers should
only use tarball releases ;-)
But seriously, HEAD gnome-vfs is *very* unstable right now. That'll
settle down in a few weeks, hopefully much sooner, but till then the
branch is probably the right place to live :(
> Anyway, I'd really, really encourage you guys to explain what's up, and
> why you're going this way, what the advantages of gob are here, and how
> we can help make the other sensible ways of writing gobjects more
> attractive to you.
"I don't know gob" isn't really an argument against GOB that I'll
accept. Its really not hard to learn. I'm aware that theres a lot of
fear and antagonism directed towards gob by much of the community, but
technical xenophoboa isn't going to stop us adopting what we believe to
be a technically better solution thats easy for other people to work
with. As I said before, if you know GObject and you know Java you'll
understand GOB after 15 minutes with the man page and an example file.
> Thanks, and sorry for forcing the issue - but it's better now than
I agree that we need to work this out now. I'm really interested in
hearing what arguments people have against gob. If there are good
reasons for not using it I'm sure you'll be able to convince Seth and I.
LSF::JobHistory - get historical information about LSF jobs.
use LSF::JobHistory RaiseError => 0, PrintError => 1, PrintOutput => 0;
( $jobhistory ) = LSF::JobHistory->new( [ARGS] );
( $jobhistory ) = LSF::JobHistory->new( $job );
( $jobhistory ) = LSF::JobHistory->new( [JOBID] );
@jobhistory = LSF::JobHistory->new( -J => '/MyJobGroup/*');
( $jobhistory ) = LSF::JobHistory->new($job);
$jobhistory = $job->history;
... etc ...
$exit_status = $jobhistory->exit_status;
$pid = $jobhistory->pid;
$command = $jobhistory->command;
$cwd = $jobhistory->cwd;
LSF::JobHistory is a wrapper around the LSF 'bhist' command used to obtain historical information about jobs. See the 'bhist' man page for more information. This provides a more reliable way to obtain the exit status of an LSF job than from the LSF::JobInfo object, because the bhist command can search all of the available LSF logs to find the information.
- new( [ARGS] || [JOBID] || $job );
($jobhistory) = LSF::JobHistory->new( [ARGS] || [JOBID] );
Creates a new LSF::JobHistory object.
Arguments are the LSF parameters normally passed to 'bhist' or a valid LSF jobid or LSF::Job object. The bhist command is automatically called with the -n 0 and -l flags.
Returns an array of LSF::JobHistory objects. Of course if your argument to new is a single jobid then you will get an array with one item. If you query for a number of jobs with the same name or path then you will get a list. In scalar context returns the number of jobs that match that criteria.
Please report them. Otherwise... the parsing of the LSF output can fail if the job names have non-alphanumeric characters in them. You probably shouldn't do this anyway.
The LSF::Batch module on CPAN didn't compile easily on all platforms I wanted. The LSF API didn't seem very perlish either. As a quick fix I knocked these modules together, which wrap the LSF command line interface. It was enough for my simple usage. Hopefully they work in a much more perly manner.
Mark Southern (firstname.lastname@example.org)
Copyright (c) 2002, Merck & Co. Inc. All Rights Reserved. This module is free software. It may be used, redistributed and/or modified under the terms of the Perl Artistic License (see http://www.perl.com/perl/misc/Artistic.html)
How to improve SQL inner join performance?
How can I improve this query's performance? The inner join on the second table, CustomerAccountBrand, is
taking a long time. I have added a non-clustered index, but it is not used. Should I split this into two inner joins and then concatenate the results? Please can anyone help me get this data.
SELECT DISTINCT
RA.AccountNumber,
RA.ShipTo,
RA.SystemCode,
CAB.BrandCode
FROM dbo.CustomerAccountRelatedAccounts RA -- Views
INNER JOIN dbo.CustomerAccount CA
ON RA.RelatedAccountNumber = CA.AccountNumber
AND RA.RelatedShipTo = CA.ShipTo
AND RA.RelatedSystemCode = CA.SystemCode
INNER JOIN dbo.CustomerAccountBrand CAB ---- Taking long time 4:30 mins
ON CA.AccountNumber = CAB.AccountNumber
AND CA.ShipTo = CAB.ShipTo
AND CA.SystemCode = CAB.SystemCode
ALTER VIEW [dbo].[CustomerAccountRelatedAccounts]
AS
SELECT
ca.AccountNumber, ca.ShipTo, ca.SystemCode, cafg.AccountNumber AS RelatedAccountNumber, cafg.ShipTo AS RelatedShipTo,
cafg.SystemCode AS RelatedSystemCode
FROM dbo.CustomerAccount AS ca
LEFT OUTER JOIN dbo.CustomerAccount AS cafg
ON ca.FinancialGroup = cafg.FinancialGroup
AND ca.NationalAccount = cafg.NationalAccount
AND cafg.IsActive = 1
WHERE CA.IsActive = 1
What is the definition of this non-clustered index?
Do you have a composite index on CA's and CAB's account number + ship to + system code? If not, you might want to test it on a dev/test server and see if that cuts down on your time. You might find value in adding the same composite index on RA
How do I create a composite index? It works fine on my test/dev server, but the problem is on the prod server
Have you looked at the execution cost in Enterprise Manager to see where it's spending its time?
I don't have access to see the execution plan. Is it possible to modify the script?
@Hasanshali It might be possible to see the execution plan cost without Enterprise Manager, but I don't know how. Sorry.
1. How much data do you have in test/dev vs prod in that view and table? 2. Check if the indexes are the same on dev and prod. 3. Try updating stats on prod
@Terry Carmen: I will share the execution plan tomorrow.
@Karthick: the CustomerAccountRelatedAccounts view has 8 million records and the CustomerAccountBrand table has 50 thousand
How long does just the select statement on the view take? How many records does the first join return? Can you share the script of your view? Try creating a table variable with an index on RelatedAccountNumber, RelatedShipTo and RelatedSystemCode, copy the content of the view into it, and join on that table variable
Why do you need to join with the same table three times (twice in the view and once in your query)? Can't you just use the data coming from the view?
The view takes 3 minutes, but when I split the inner join it takes 3 seconds; see the query below
SELECT
RA.AccountNumber,
RA.ShipTo,
RA.SystemCode
FROM dbo.CustomerAccountRelatedAccounts RA
INNER JOIN dbo.CustomerAccount CA
ON RA.RelatedAccountNumber = CA.AccountNumber
AND RA.RelatedShipTo = CA.ShipTo
AND RA.RelatedSystemCode = CA.SystemCode
GROUP BY RA.AccountNumber,
RA.ShipTo,
RA.SystemCode
View logic: one AccountNumber is tied to multiple account numbers; that's why we used a self-join on the same table
You need to push back. Being asked to tune queries without access to view the execution plan is stupid to new levels. Do you have multiple views here? There certainly seem to be some ways to improve performance in what I have seen so far.
From my experience, the SQL Server query optimizer often fails to pick the correct join algorithm when queries become more complex (e.g. joining with your view means that there's no index readily available to join on). If that's what's happening here, then the easy fix is to add a join hint to turn it into a hash join:
SELECT DISTINCT
RA.AccountNumber,
RA.ShipTo,
RA.SystemCode,
CAB.BrandCode
FROM dbo.CustomerAccountRelatedAccounts RA -- Views
INNER JOIN dbo.CustomerAccount CA
ON RA.RelatedAccountNumber = CA.AccountNumber
AND RA.RelatedShipTo = CA.ShipTo
AND RA.RelatedSystemCode = CA.SystemCode
INNER HASH JOIN dbo.CustomerAccountBrand CAB ---- Note the "HASH" keyword
ON CA.AccountNumber = CAB.AccountNumber
AND CA.ShipTo = CAB.ShipTo
AND CA.SystemCode = CAB.SystemCode
'use strict';
/**
* Microprofiler.
* (C) 2014 Alex Fernández.
*/
// requires
var Log = require('log');
var testing = require('testing');
// globals
var log = new Log('info');
var profilers = {};
var enabled = true;
/**
* Convenience function, just returns process.hrtime().
*/
exports.start = function()
{
return process.hrtime();
};
/**
* Measure from the given time, with the desired key. Returns the time in µs. Params:
* - before: time generated by process.hrtime() from which to start measuring.
* - key: optional identifier for the profiling.
* - showEvery: optional parameter to show results every given iterations.
*/
exports.measureFrom = function(before, key, showEvery)
{
if (!enabled)
{
return;
}
var elapsed = process.hrtime(before);
var diffUs = elapsed[0] * 1e6 + elapsed[1] / 1000;
if (key)
{
getProfiler(key, showEvery).measure(diffUs);
}
return diffUs;
};
function getProfiler(key, showEvery)
{
if (!profilers[key] || typeof profilers[key] == 'function')
{
profilers[key] = new Profiler(key, showEvery);
}
return profilers[key];
}
/**
* Test how long profiling takes.
*/
function testProfilingProfiler(callback)
{
var runs = 10000;
for (var i = 0; i < runs; i++)
{
var start = exports.start();
exports.measureFrom(start, 'fake');
exports.measureFrom(start, 'profile');
}
var stats = exports.getStats('profile');
testing.assert(stats.meanTimeUs < 5, 'Profiling should take less than 5 µs, took: ' + stats.meanTimeUs, callback);
testing.success(callback);
}
/**
* Show profiling data for a key.
*/
exports.show = function(key, showEvery)
{
if (!enabled)
{
return;
}
getProfiler(key, showEvery).show();
};
/**
* Get an object with stats.
*/
exports.getStats = function(key, showEvery)
{
return getProfiler(key, showEvery).getStats(key);
};
/**
* Disable the whole module.
*/
exports.disable = function()
{
enabled = false;
};
/**
* Measure some times, show every few requests.
*/
function Profiler(name, showEvery)
{
this.name = name;
this.showEvery = showEvery;
this.requests = 0;
this.timeUs = 0;
}
/**
* Take a measurement, show results every few requests.
*/
Profiler.prototype.measure = function(elapsedUs)
{
this.requests += 1;
this.timeUs += elapsedUs;
if (this.showEvery && this.requests % this.showEvery === 0)
{
this.show();
}
};
Profiler.prototype.show = function()
{
var stats = this.getStats();
log.info('Profiling %s: %s requests, mean time: %s µs, rps: %s', this.name, this.requests, stats.meanTimeUs, stats.rps);
this.requests = 0;
this.timeUs = 0;
};
Profiler.prototype.getStats = function()
{
return {
key: this.name,
requests: this.requests,
timeUs: this.timeUs,
meanTimeUs: (this.timeUs / this.requests).toFixed(3),
rps: Math.round(this.requests / (this.timeUs / 1e6)),
};
};
Profiler.prototype.toString = function()
{
return 'profiler for ' + this.name;
};
/**
* Test the profiler.
*/
function testProfiler(callback)
{
var runs = 100000;
var start = exports.start();
var profiler = new Profiler('first');
var before;
for (var i = 0; i < runs; i++)
{
before = exports.start();
var elapsedUs = exports.measureFrom(before);
testing.assert(elapsedUs, 'measureFrom() should return something', callback);
profiler.measure(elapsedUs);
}
var stats = profiler.getStats();
testing.assert(stats, 'No profiler stats', callback);
testing.assert(stats.timeUs, 'Profiler stats should not be zero', callback);
for (i = 0; i < runs; i++)
{
before = exports.start();
var measureFrom = exports.measureFrom(before, 'second', runs);
testing.assert(measureFrom, 'measureFrom() should not decrease', callback);
}
var fromStart = exports.measureFrom(start, 'fromStart');
testing.assert(fromStart, 'measureFrom() start should not be zero', callback);
testing.success(callback);
}
/**
* Run all tests.
*/
exports.test = function(callback)
{
log.debug('Running tests');
testing.run([
testProfilingProfiler,
testProfiler,
], callback);
};
// run tests if invoked directly
if (__filename == process.argv[1])
{
exports.test(testing.show);
}
Look for events like this, where the vc_event_type field contains com.vmware.vim25.vmreconfiguredevent:
2017-04-24 17:47:42.112 HOSTNAME vcenter-server: Reconfigured VM_NAME on VCENTER_SERVER_NAME in North America Remote Sites.
config.hardware.device(1000).device: (2000, 2001, 2002) -> (2000, 2001, 2002, 2003);
config.hardware.device(2003): (key = 2003, deviceInfo = (label = "Hard disk 4", summary = "1,310,720,000 KB"), backing = (fileName = "ds:///vmfs/volumes/58f5ded1-0b831a8c-2eed-ecb1d79d3200/VM_NAME/VM_NAME4_3.vmdk", datastore = 'vim.Datastore:1088664C-8D55-4361-99E5-2EDEA6Z1X838:datastore-39958', backingObjectId = "", diskMode = "persistent", split = false, writeThrough = false, thinProvisioned = false, eagerlyScrub = <unset>, uuid = "6000C299-a1e6-355c-bdzc-31cc6fa65bc4", contentId = "cacfc4fc6f44ea830785b146fffffffe", changeId = <unset>, parent = null, deltaDiskFormat = <unset>, digestEnabled = false, deltaGrainSize = <unset>, deltaDiskFormatVariant = <unset>, sharing = "sharingNone", keyId = null), connectable = null, slotInfo = null, controllerKey = 1000, unitNumber = 3, capacityInKB = 1310720000, capacityInBytes = 1342177280000, shares = (shares = 1000, level = "normal"), storageIOAllocation = (limit = -1, shares = (shares = 1000, level = "normal"), reservation = 0), diskObjectId = "1-2003", vFlashCacheConfigInfo = null, iofilter = <unset>, vDiskId = null);
These events tell you what changed in the reconfig. Once you fine tune your query ( in your case for memory size of vm ; you would look for text like - config.hardware.memoryMB: 1024 -> 4096; ) then you can use the Create Alert from Query option to create the alarm.
Hope this helps.
I use this in my Security Operations Dashboard (aac-lib/vli at master · Texiwill/aac-lib · GitHub) and it represents the changes made to virtual hardware either by hand or by script. You usually end up seeing quite a few of these events during backup, for example. This really has nothing to do with 'adding' a VM, but with changes to the VM. The issue is that for any vmreconfigureevent you see, you may see multiple events grouped together or only one. There are also 2 layers to any event: vCenter and ESXi. The ones you listed there are vCenter events and they do not say much, but if you look for vmreconfigureevent you end up with the ones coming from vpxd that show the real changes.
What were you expecting to see?
The best way to catch everything you want is to make the change or add something you want to track in loginsight. Then search for the name of the item (i.e VM-Name) and see what shows up. Then you can create a general rule/search for that item/element.
Edward L. Haletky aka Texiwill
VMware Communities User Moderator, VMware vExpert 2009-2017
Virtualization and Cloud Security Analyst: TVP Strategy
Blue Gears Blog: vSphere Upgrade Saga
Podcast: Virtualization and Cloud Security Round Table Podcast
New York City has used ranked choice voting in special elections and primaries since 2021. Although it has existed for nearly 800 years and is used in a number of voting systems around the world, most elections in the United States still use "first past the post." What is ranked choice voting? What does it mean for NYC?
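The counting procedure behind ranked choice voting (instant runoff) can be sketched as: tally everyone's first choice; if no candidate has a majority, eliminate the last-place candidate and transfer those ballots to each voter's next choice; repeat. A minimal illustration (candidate names are made up; last-place ties are broken arbitrarily in this sketch):

```python
from collections import Counter

def instant_runoff(ballots):
    """Each ballot is a list of candidates in preference order.
    Sketch only: ties for last place are broken arbitrarily."""
    ballots = [list(b) for b in ballots]
    while True:
        tally = Counter(b[0] for b in ballots if b)  # current first choices
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:                        # majority reached
            return leader
        loser = min(tally, key=tally.get)            # eliminate last place
        ballots = [[c for c in b if c != loser] for b in ballots]

# A leads 4-3-2 but lacks a majority; C's voters prefer B, so B wins.
ballots = [['A'], ['A'], ['A'], ['A'],
           ['B'], ['B'], ['B'],
           ['C', 'B'], ['C', 'B']]
print(instant_runoff(ballots))  # → B
```

This is the mechanism that lets voters rank a long-shot first without "wasting" their vote: the ballot transfers instead of being discarded.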
Please do not hesitate to reach out with any questions or suggestions for future video topics.
- McNulty, Frederick. (Feb 13, 2019). “Would a radical rule change improve voting in Connecticut?” Mieum Media. https://youtu.be/VEPAHUHr8x4
- Kivelson, Adrienne. What Makes New York City Run?: A Citizen’s Guide to How City Government Works. 4th ed., League of Women Voters of the City of New York Education Fund, 2019.
- Berg, Bruce F. New York City Politics: Governing Gotham. Rutgers University Press, 2018.
- Leadon, Fran. Broadway: A History of New York City in Thirteen Miles. W.W. Norton & Company, 2020.
- Wang, Vivian. (Nov 5, 2019). “N.Y. Election Results: Voters Approve All 5 Ballot Measures.” The New York Times. https://www.nytimes.com/2019/11/05/nyregion/ny-nj-election-results.html
- WNYC Newsroom. (Oct 10, 2019). “This Election Day, New Yorkers Could Change Future Voting Rules.” WNYC. https://www.wnyc.org/story/election-day-new-yorkers-could-change-future-voting-rules/
- Color Palette #4439. https://colorpalettes.net/color-palette-4439/
- Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Ribalta-lulio.jpg
- CBS News York. (Feb 2, 2021). “Queens Residents Test Out New Ranked-Choice Voting In Special Election.” CBS. https://www.cbsnews.com/newyork/news/queens-residents-test-out-new-ranked-choice-voting-in-special-election/
- Data for Progress. https://www.dataforprogress.org
- PIX11 News. (Jun 4, 2021). “Sliwa vs. Mateo.” PIX11 News. https://www.youtube.com/watch?v=K4jIjlbB9TU
- Data for Progress. https://www.filesforprogress.org/datasets/2021/6/dfp_nyc_pre_election_mayoral_comptroller_crosstabs.pdf
- Audio: “Beneath the Surface” by South London HiFi
Is it possible to make a Spock specification conditional on a property from Spring's application.properties?
Background:
project logic in Java 11 and Spring Boot 2.6.6
some project features are conditionally available depending on specific application properties; some Spring components related to conditional features are likewise conditional, using the @ConditionalOnProperty annotation on the component
tests (including integration tests) are written in Groovy with the Spock framework (ver. 2.1-groovy-3.0)
Question:
Is it possible to make a Spock specification conditional on a property from Spring's application.properties?
Spock framework provides annotations which make test conditional.
Most accurate seems to be @Requires for my case.
(https://spockframework.org/spock/docs/2.1/all_in_one.html#_requires)
Condition is based on PreconditionContext (https://spockframework.org/spock/docs/2.1/all_in_one.html#precondition_context).
Simplified Specification example (two working @Requires annotations left as an example, but they do not check what is needed in my case):
import org.spockframework.runtime.extension.builtin.PreconditionContext
import org.springframework.boot.test.context.SpringBootTest
import org.springframework.test.context.ActiveProfiles
import org.springframework.test.context.ContextConfiguration
import spock.lang.Requires
import spock.lang.Specification
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ActiveProfiles('integration')
@ContextConfiguration(classes = TestSpringBootApplication)
//TODO: How to make this feature dependent of property from application.properties?
//@Requires(reason = 'Specification for AAA feature enabled', value = { isFeatureAAAEnabled() })
//@Requires(reason = 'Test run only on Linux', value = { PreconditionContext preconditionContext -> preconditionContext.os.windows })
class ConditionalSpec extends Specification {
//Some conditional components @Autowired
//feature methods
def "one plus one should equal two"() {
expect:
1 + 1 == 2
}
private static boolean isFeatureAAAEnabled() {
true
}
}
What do you want exactly: is it enough to just not run any tests but still start the Spring context, or do you want to also avoid starting the Spring context?
If it is the first one, then you can use instance or shared from the Precondition Context. If you enable shared field injection you should be able to do this.
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ActiveProfiles('integration')
@ContextConfiguration(classes = TestSpringBootApplication)
@EnableSharedInjection
@Requires(reason = 'Specification for AAA feature enabled', value = { shared.myValue == 'featureAAA' })
class ConditionalSpec extends Specification {
@Value('${value.from.file}')
@Shared
String myValue
//feature methods
def "one plus one should equal two"() {
expect:
1 + 1 == 2
}
}
If you can't use shared injection due to its limitations, then you'll have to replace shared with instance in the condition.
If you want to avoid starting Spring, then you'll have to write your own extension to figure out the value from application.properties and skip the spec yourself.
Initialization of the Spring context is OK as long as it works properly with respect to the conditional components (declared with @ConditionalOnProperty).
The shared property in @Requires as you proposed, with @Autowired(required = false) for the beans injected into the conditional Specification, seems to be the solution in my case.
First, could I get time for each kernel by
sm__cycles_elapsed.avg / sm__cycles_elapsed.avg.per_second?
Second, since I can profile
sm__throughput.avg.pct_of_peak_sustained_elapsed, how could I compute the program-level utilization? Should I use
kernel_time (from the first question) * sm__throughput.avg.pct_of_peak_sustained_elapsed and then divide by the
total program duration?
Are you looking for some type of value representing how much the SMs were used compared to the entire application (which may include CPU time etc…)? For example, if the program took 10 seconds, the kernel took 5 of those seconds, and during the kernel there was an average of 50% SM utilization, then the value you’re looking for is (0.5 x 5)/10 = 25%
If that’s the case, then you’re on the right track. The only thing I would mention is that instead of calculating kernel time with the formula above you could use the gpu__time_duration.sum which calculates it directly.
If you’re looking for something else, please clarify. Thanks.
Thanks, that is what I am looking for.
Also, if I plan to profile the memory bandwidth usage of the program, should I use
gpu__dram_throughput.avg.pct_of_peak_sustained_elapsed is what you're looking for if you want to know about the GPU DRAM usage. The other metrics include other levels of the memory hierarchy like L1 and L2.
I have a new question.
How could I have the SM occupancy?
I see there is a metric, Achieved Occupancy, which is it the ratio of the average active warps to the maximum active warps allocated for the kernel.
Does this mean if the kernel is allocated 10 warps, and 6 warps are used for computation on average, then Achieved Occupancy is 60%?
Or should I use the metric Speed of Light SM [%] x Achieved Occupancy to get the actually occupied warps (threads)?
Achieved Occupancy is active warps/active cycles. The value represents how many warps were active on average for a given cycle. For example, on GA100 this would be between 0 and 16. This occupancy can be impacted by the way your application divides the work and also hardware resource limitations like the register file, shared memory etc…
Speed of Light SM [%] x Achieved Occupancy isn’t really a calculation we would use.
Do you mean there is no way to get the kernel SM occupancy?
I’m not sure what you mean by “kernel SM occupancy”. Could you expand on that term to explain exactly what you are looking for?
Do you mean the max_warps_per_sm for a100 is 16?
Also, I find a formula about Achieved Occupancy, which equal to (Active_warps / Active_cycles) / max_warps_per_sm.
As you said before, current metric, achieved_occupancy, is equal to (active_warps / active_cycles) right?
No, max_warps_per_sm for A100 is 64. The 16 is warps per warp scheduler and there are 4 per SM.
On a100 (Active_warps / Active_cycles) / max_warps_per_sm is going to be a percentage, while (active_warps / active_cycles) will be a fraction between 1 and 16.
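Putting the thread's numbers together (assuming active_warps is accumulated per SM, so the per-cycle average ranges from 0 up to max_warps_per_sm): on A100, with 4 schedulers of 16 warps each, max_warps_per_sm is 64, and the warps-per-cycle value converts to a percentage like this:

```python
A100_MAX_WARPS_PER_SM = 64  # 4 warp schedulers x 16 warps each

def achieved_occupancy_pct(active_warps, active_cycles,
                           max_warps_per_sm=A100_MAX_WARPS_PER_SM):
    """Convert per-SM (active_warps / active_cycles), a value between
    0 and max_warps_per_sm, into an occupancy percentage."""
    warps_per_cycle = active_warps / active_cycles
    return warps_per_cycle / max_warps_per_sm * 100.0

# e.g. an average of 32 active warps per active cycle on an A100 SM:
print(achieved_occupancy_pct(32_000, 1_000))  # → 50.0
```

This is only the unit conversion discussed above; the counters themselves come from the profiler.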
Run OpenAI Baselines on Kubernetes with Fiber¶
In this example, we'll show you how to integrate fiber with OpenAI baselines with just one line of code change.
If your project already uses Python's multiprocessing, then integrating it with Fiber is very easy. Here, we use OpenAI Baselines as an example to show how to run code written with multiprocessing on Kubernetes.
Prepare the code¶
First, we clone baselines from GitHub, create a new branch, and set up our local environment:

git clone https://github.com/openai/baselines
cd baselines
git checkout -b fiber
virtualenv -p python3 env
. env/bin/activate
echo "env" > .dockerignore
pip install "tensorflow<2"
pip install -e .
pip install fiber
Test that the environment works:
python -m baselines.run --alg=ppo2 --env=CartPole-v0 --network=mlp --num_timesteps=10000
If it works, you should see something like this in your output:

---------------------------------------
| eplenmean               | 23.8      |
| eprewmean               | 23.8      |
| fps                     | 1.95e+03  |
| loss/approxkl           | 0.000232  |
| loss/clipfrac           | 0         |
| loss/policy_entropy     | 0.693     |
| loss/policy_loss        | -0.00224  |
| loss/value_loss         | 48.4      |
| misc/explained_variance | -0.000784 |
| misc/nupdates           | 1         |
| misc/serial_timesteps   | 2.05e+03  |
| misc/time_elapsed       | 1.05      |
| misc/total_timesteps    | 2.05e+03  |
---------------------------------------
OpenAI baselines has a SubprocVecEnv that, according to its documentation, runs multiple environments in parallel in subprocesses and communicates with them via pipes. We'll start from here and modify it to work with Fiber:
Fiberization (Or adapt your code to run with Fiber)¶
Open baselines/common/vec_env/subproc_vec_env.py and change this line:

import multiprocessing as mp

to:

import fiber as mp
Let's do a quick test to see if this change works:
python -m baselines.run --alg=ppo2 --env=CartPole-v0 --network=mlp --num_timesteps=10000 --num_env 2
We pass --num_env 2 to make sure baselines is using SubprocVecEnv. If everything works, we should see output similar to the previous run.
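The reason a one-line change suffices is that Fiber mirrors the multiprocessing API. A minimal sketch of that pattern (the try/except fallback here is our illustration only; baselines itself just swaps the import line):

```python
# Fiber mirrors multiprocessing's API, so code written against `mp`
# runs unchanged under either backend (the fallback import is only
# for illustration when Fiber is not installed).
try:
    import fiber as mp            # processes become containers/pods under Fiber
except ImportError:
    import multiprocessing as mp  # plain local processes otherwise

def square(x):
    return x * x

if __name__ == "__main__":
    # The same Pool/map calls work with both backends.
    with mp.Pool(2) as pool:
        print(pool.map(square, [1, 2, 3]))  # → [1, 4, 9]
```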
Containerize the application¶
OpenAI baselines already has a Dockerfile available, so we just need to add fiber to it by adding a line
RUN pip install fiber. After modification, the Dockerfile looks like this:
FROM python:3.6
RUN apt-get -y update && apt-get -y install ffmpeg
# RUN apt-get -y update && apt-get -y install git wget python-dev python3-dev libopenmpi-dev python-pip zlib1g-dev cmake python-opencv
ENV CODE_DIR /root/code
COPY . $CODE_DIR/baselines
WORKDIR $CODE_DIR/baselines
# Clean up pycache and pyc files
RUN rm -rf __pycache__ && \
    find . -name "*.pyc" -delete && \
    pip install 'tensorflow < 2' && \
    pip install -e .[test]
RUN pip install fiber
CMD /bin/bash
It's a good habit to make sure everything works locally before submitting the job to a bigger cluster, because this will save you a lot of debugging time. So we build our Docker image locally:
docker build -t fiber-openai-baselines .
When Fiber starts new Docker containers locally, it mounts your home directory into the container. So we need to modify baselines' log dir to make sure it can write logs to the correct place by adding the argument --log_path=logs. By default, baselines writes to the /tmp dir, which is not shared between the Fiber master process and subprocesses. We also add --num_env 2 to make sure baselines uses SubprocVecEnv so that Fiber processes can be launched.
FIBER_BACKEND=docker FIBER_IMAGE=fiber-openai-baselines:latest python -m baselines.run --alg=ppo2 --env=CartPole-v0 --network=mlp --num_timesteps=10000 --num_env 2 --log_path=logs
Running on Kubernetes¶
Now let's run our fiberized OpenAI baselines on Kubernetes. This time we run 1e7 time steps. Also, we want to store the output of the run on persistent storage, which we can do with the fiber command's persistent-volume mounting feature.
$ fiber run -v fiber-pv-claim python -m baselines.run --alg=ppo2 --env=CartPole-v0 --network=mlp --num_timesteps=1e7 --num_env 2 --log_path=/persistent/baselines/logs/
It should output something like this:
Created pod: baselines-d00eb2ef
After the job is done, you can copy the logs with these commands:
$ fiber cp fiber-pv-claim:/persistent/baselines/logs baselines-logs
The ARM targets listed in the Target Chip/Environment drop-list of the Target Communications dialog are read from the file NoICEARM_targets.ini, found in the NoICE\config directory. This page explains the format of this file, in case you wish to add a new target or customize the settings for an existing target.
If you have questions, or if your target isn't listed in the drop-list, please contact us.
Here is an example of the section of NoICEARM_targets.ini that pertains to the Philips LPC2106:
[Philips LPC2106]
datasheet=LPC2104-5-6UM.pdf
burnerFile=lpc21xx_burner.mot
flashbase1=0x00000000
flashsize1=0x0001E000
rambase1=0x40000000
ramsize1=0x00010000
resetInstructions=J10 R0 D500 I1 r1 t0
remarks=Philips LP2106
These items, and some less common ones, are defined below.
The first entry for each section must be contained in square brackets. This text is shown in the Target Chip/Environment drop-list.
This optional item is currently not used at runtime by NoICE. It is present to document the name and version of the datasheet from which the data about this chip was obtained.
This optional item specifies the name of a Motorola S-record file that contains a Flash burner program appropriate to this chip. If this item is not present, Flash burning will not be available for the chip.
Note that the burner file is highly specific to a chip or chip family. Attempting to use a burner file for other than its intended chip may cause damage to the target chip. If you are not absolutely sure of what you are doing, do not change this item.
This optional item specifies an integer parameter that allows a single burnerFile to be used with more than one type of target chip. For example, the LPC2106, LPC2114 and LPC2131 (among others) share the same Flash algorithm but have different sector layouts. The burnerOption is used to specify the correct layout. If this item is not specified, a value of zero is used.
Note that the burner file and parameter are highly specific to a chip or chip family. Attempting to use a burner file or parameter for other than its intended chip may cause damage to the target chip. If you are not absolutely sure of what you are doing, do not change this item.
Each optional flashbase/flashsize pair is used to specify an address range as being Flash memory. NoICE uses these ranges in order to know when a LOAD command needs to burn Flash rather than simply write to RAM.
The ST7xx family has Flash at address 0x40000000, but a portion of the Flash is also aliased at address zero in order to provide interrupt vectors. Thus, code files for these chips will often contain code near address zero as well as near address 0x40000000. In order to properly control the Flash burner, the ini file will specify two Flash ranges for these chips. The Flash burner code understands that these ranges actually represent the same Flash sectors.
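As an illustration, a hypothetical entry using two flashbase/flashsize pairs for a chip whose Flash is aliased at address zero might look like this (the chip name, addresses, and sizes are invented for the example, not taken from the shipped NoICEARM_targets.ini):

```ini
; Hypothetical target whose Flash at 0x40000000 is aliased at address zero
[Example Aliased-Flash Chip]
flashbase1=0x00000000
flashsize1=0x00008000
flashbase2=0x40000000
flashsize2=0x00008000
rambase1=0x20000000
ramsize1=0x00004000
```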
Each optional rambase/ramsize pair is used to specify an address range as being RAM memory. NoICE uses these ranges in order to know when software breakpoints (if available) may be used. Software breakpoints will only be used for addresses within ranges specified by a rambase/ramsize pair.
This optional item contains a fuller description of the chip or environment. This text is shown as the first line of the data summary in the Target Communications dialog.
This optional item specifies whether the target is big-endian (=1) or little-endian (=0). If this item is not specified, the default is little-endian.
This optional item specifies the number of hardware breakpoints available. If this item is not specified, a value of two is used, which is appropriate for all current ARM7 targets. During startup, NoICE will attempt to verify whether all of these breakpoints can be used, and may adjust the actual number downward. For example, in some cases one hardware breakpoint is used to implement software breakpoints, leaving only one available for use as a hardware breakpoint.
This optional item specifies the number of software breakpoints available. During startup, NoICE will attempt to verify that software breakpoints can be used. In most cases, if software breakpoints are available at all, there is no limit on how many may be used, and that is the default if this item is not specified. This item may be specified if you need to set a limit on the number of software breakpoints.
This optional item specifies the instruction to be used for software breakpoints in ARM mode. If this item is not specified, the value 0xE1200070 (BKPT #0) will be used. Many target communications mechanisms, including RDI and OpenOCD, will ignore this item. It is provided for use by non-JTAG GDB targets.
This optional item specifies the instruction to be used for software breakpoints in Thumb mode. If this item is not specified, the value 0xBE00 (BKPT #0) will be used. Many target communications mechanisms, including RDI and OpenOCD, will ignore this item. It is provided for use by non-JTAG GDB targets.
The ARM reset and interrupt vectors are normally at address zero. However, in some cases "high vectors" may be used, which places the vectors at 0xFFFF0000. This optional item specifies the address of the reset and interrupt vectors. If this item is not specified, a value of zero will be used.
This optional item applies to the Segger JLink only. It controls reset processing, which is a much bigger deal on ARM than it ought to be. If you are using a JTAG interface other than the JLink, you may be able to get equivalent behavior using the Play After Reset File, or the OpenOCD cfg file.
If no resetInstructions exists for a target, the default value is
resetInstructions=J10 R0 D500 I1 r1 t1
Parameters are as follows:
Automated methods to recognize, track, visualize and manipulate specific geobodies
Dr A Starkey
Dr M N Campbell-Bannerman
Dr D Iacopini
Applications accepted all year round
Self-Funded PhD Students Only
The oil & gas sector has massive data sets in the form of seismic data that it needs to analyse in order to identify the location of hydrocarbons and also to assess the risk for drilling operations in a particular area.
Integrating recently developed technology in the form of 3D visualisation tools (developed at the University of Aberdeen) with optimisation techniques would result in an integrated and interactive tool that massively speeds up the understanding of the huge amounts of geophysical data that oil & gas companies hold. The software tool would allow a geophysical analyst to review the results of analysis in real time by manipulating the 3D environment using intuitive hand gestures.
The software would therefore be in the form of a fully interactive 3D analysis tool that would automatically identify geological features of interest. The use of the 3D engine would allow commodity hardware such as Oculus Rift or other 3D viewers to be used, which would greatly enhance the user experience. This will be similar to current seismic analysis tools, with the main difference being that the analysis will be truly in 3D rather than 2D slices like other currently available tools.
In essence, the project would look at developing new technologies in the areas of:
• Optimising and analysing seismic data in 3D (not 2D slices)
• A modern 3D volume visualizer capable of real-time animation and interaction.
• Integration into commodity VR such as Oculus Rift or any other 3D hardware.
The aims of this project are to fine-tune existing automated methods, based on image and signal processing, to recognize, track, visualize and manipulate specific geobodies by transferring alternative optimisation methods and visualisation technologies originally developed through the DynamO project at the University of Aberdeen, which has been optimised for the real-time visualisation and simulation of 3D data, originally for chemical engineering problems. Our target is specifically to improve the now well-established geobody rendering technology by allowing direct interactive manipulation of the geobodies and mapping, and to allow a user to manipulate the related seismic attribute properties more efficiently. The methods will rely on novel data-visualisation techniques, borrowing from the games industry and protein visualisation to render large datasets interactively and in real time. These techniques can be used to reveal hidden relationships between datasets as well as providing visually pleasing renderings of simulations. Another issue concerning interactive facies analysis is that it generally relies on specific mathematical techniques based on crossplot methods to manually develop patterns of classification and build up what are called self-organizing maps. We will also explore different automated data optimisation methods to perform a better and more complete classification, based on work undertaken at the University of Aberdeen in a number of fields.
This proposal represents an improvement on the existing image processing software available on the market (Petrel (Schlumberger), Geoteric (ffA), GeoProbe (Halliburton)), which in fact already allows sophisticated operations such as attribute mapping, editing, draping and interpolating information by creating blended volume attribute properties using integrated facies analysis, draping image-processed data onto the mapped horizons, and performing crossplot analysis of the various properties from a chosen area. However, most of the current operations and tools are produced through a complex workflow and only partially interactive analysis of the attribute properties (often not in real time). Finally, most of the visualization and image processing in commercial software neither integrates the petrophysical information well nor directly incorporates the well log data with the rendered geobody attributes obtained through image processing. This limits the direct comparison of petrophysical values with seismic facies attribute maps created through image and signal processing filters.
The successful candidate should have (or expect to achieve) a minimum of a UK Honours degree at 2.1 or above (or equivalent) in Engineering, Geosciences, Physics or Computational Science.
Knowledge of: computer programming, data analysis, algorithms.
Formal applications can be completed online: http://www.abdn.ac.uk/postgraduate/apply. You should apply for Degree of Doctor of Philosophy in Engineering, to ensure that your application is passed to the correct person for processing.
NOTE CLEARLY THE NAME OF THE SUPERVISOR AND EXACT PROJECT TITLE YOU WISH TO BE CONSIDERED FOR ON THE APPLICATION FORM.
Informal inquiries can be made to Dr A Starkey ([Email Address Removed]) with a copy of your curriculum vitae and cover letter. All general enquiries should be directed to the Postgraduate Research School ([Email Address Removed]).
There is no funding attached to this project. It is for self-funded students only.
How good is research at Aberdeen University in General Engineering?
FTE Category A staff submitted: 38.60
Research output data provided by the Research Excellence Framework (REF)
With the end of week 2 nearing, I thought it was time for an update. The first bit of news is that as of tomorrow, Andy will be away for 2 weeks (hence the title), leaving me all alone to get on with the project; although from the sounds of things I won't be alone for long, as news has come through of a couple more people being interested in the Codex project!
As it stands I'll be working alone next week, but if we do get some new recruits we could be looking at up to 4 people working on the project by the end of next month. What a team that would make!
So far the week has mainly been spent in close contact with the developers and generally trying to soak up all the information available (there’s enough of it).
Day 8 (2009.06.24) started with an article from the BBC News website, "OLPC software to power aging PCs", which is part of the buzz around the new Sugar on a Stick release, "Strawberry", which we tested out today.
The new version is much smoother and loads quicker than the beta version I originally linked to. This is exciting news, as it means Sugar on a Stick is alive and well and will work from a 1GB (or larger) USB drive on very old machines, even those which won't boot from USB (with the help of a CD you can create).
The new strawberry release includes new activities such as “Physics” which Andy and I had a lot of fun playing with. The SugarLabs website has a host of useful information regarding a number of areas such as getting up and running or downloading new activities.
Today, however, we managed to get our hands on a university laptop to which we have complete access, meaning we were able to officially format it and install Fedora 11, which will be useful in creating the development environments needed for Sugar. The Edu Spin I discussed earlier requires Fedora, which is the main reason we chose it as the laptop's installation; however, we still have access to (and are using) the multitude of Linux distros we installed on the USBs lying around from last week.
Having walked into the project not being very adept at using Linux, I'm slowly starting to make my way around the different distros and getting used to the terminal etc. The terminology for a lot of Linux is what throws me off most, but I guess this must be natural for a born-and-bred Windows user.
Currently we are still waiting on a Kickstart file from one of the developers, which should be released soon, meaning we will be able to attempt to compile our own environment based on what we can learn from it, mixed with last year's Live CD. Along with this, we might be able to get our hands on an early snapshot of the Edu spin and play around with it to see how well it will work with the Codex project requirements.
In the meantime I have tried to get what I think is the developers' version of Sugar, "Jhbuild", running. It is available from the SugarLabs Git and is supported by Fedora along with a few other distros; however, access to the repository from the labs looks like it might be restricted (connection refused on clone), so I'll try again from home using the new laptop. Hopefully it won't use up all of the little monthly usage we are allowed where I live!
Tomorrow (Friday 2009.06.26) I will probably spend the day helping Andy move house, as he is in limbo at the moment before he shoots off for 2 weeks; however, next week I will make a start on the current to-do list Andy has posted, along with getting my teeth firmly into Python and trying to produce something more than print "Hello World". Wish me luck!
After installing Android Studio and configuring the Android SDK, you're ready to create your first project. You can do this right from the Welcome Screen by clicking on New Project. On this New Project screen, set your application name. You can then set the package name. This is a globally unique string that identifies the application in application stores, on devices and anywhere where Android apps live. Typically, the package name starts with your domain in reverse domain notation to ensure its uniqueness.
- [Voiceover] You can create a brand new Android Studio project from the welcome screen. Just click the first option labeled Start a new Android Studio project. On the first screen, assign an application name. This can be any string at all. That string will be used in a number of ways. It'll be used to set a default package name, which I'll describe in a moment, but it will also be used to label the app when it's deployed to Android devices. I'll change my application name to My First App.
Next is the company domain. This can be anything you like, but should reflect your own domain. It won't actually be used in the final app. It's used to create the package name. Each app has a package name, also known as the application identifier. It has to be a globally unique string, and the best way to get that uniqueness is to start with your company or organization's domain, and then use a Java style package name. This will also be used as the main package for your Java classes.
Whatever you set as your company domain will be translated as the package name. If I change this from david.example.com, to android.example.com, I see the package name being changed also. If you don't like the package name that's being generated, just click the Edit link on the right, and then you can change it. I'll change my package name to com.example.android.sample, and I'll use this same package throughout the entire course, so I'm not creating a whole bunch of different apps on my devices.
Next is the project location. The default location will be in a folder that's named for the app, but without any spaces, and is placed in a folder named AndroidStudioProjects under your home directory. This directory is not like Eclipse workspaces. It's just a directory, and it doesn't contain any configuration information, and if you don't like that location, you can easily change it to whatever you want. I'll accept that location though, and click Next. On this screen, you're asked which form factors your app will run on.
In Android Studio 2.0, you can build apps for phone and tablet, for Android Wear, or watches, for Android TV, Auto, and Google Glass. In this course I'll only be focusing on phone and tablet apps. You also set your minimum SDK on this screen. The minimum SDK indicates the oldest version of Android that your app will run on. The default is API 15 for Android 4.0.3 Ice Cream Sandwich.
If you want to support older Android devices, you can go back to Android 3, or even Android 2.3 or 2.2. If you're not sure which devices you want to support, you can click on this link and it'll take you to this listing of the various API levels and the approximate distribution. According to this screen, if I support Ice Cream Sandwich, I'll be covering almost 95% of the current device market. You can find more current information by going to this page, the dashboards page on the developer website, and you'll see the most recent survey information showing you the various percentages for each version of Android.
As of the most recent survey, as of the time of this recording, Android 2.2 was only covering .1%, almost nothing, but Android 2.3 Gingerbread, or API 10, was representing almost 3% of the market. It's up to you to decide which versions of Android you want to support, but just know by going back to Gingerbread, you'll have to do a lot more testing to make sure your app works on all those different versions. I'll go back to Android Studio and continue with the process by clicking Next.
On this screen, you're asked what kind of activity you want to add to your mobile app. An activity in Android is a screen, and by default, each new app that you create through Android Studio will have one starting screen. The default in Android Studio 2.0 is an empty activity. That's a screen with only a bit of text displayed. There are a lot of much more complex activity templates available, including the basic activity template that adds a floating action button in the lower right-hand corner of the screen.
Full screen activities, activities with ads, maps, login screens, and much more. For this course, I'll primarily be using the default activity template, the empty activity. On this screen, I'm asked to name the components in my app. The number of items you're asked to provide will depend on which activity template you chose. For this very simple empty activity template, you're only asked for the name of the activity that will generate a Java class and the name of a layout file that will be used to name an XML file.
I'll accept the default settings for those values, and click Finish, and that will result in creating my first app. The first time you create a project in Android Studio, it might take a while to download certain components, and then build the app and get it ready for processing, but after that, creating additional projects should go pretty quickly. Once the project has opened, you can then close the Tip of the Day dialog, and if you don't want to see it again, just uncheck this option, and then you can double click on the title bar to expand Android Studio to fill the screen.
If you've gotten this far, then your Android Studio project has been created, and you're ready to customize it, and then run it on Android devices.
- Installing Android Studio on Mac and Windows
- Creating Android Studio projects
- Setting up the development environment, including HAXM and the new Android emulator
- Importing existing code into Android Studio projects
- Exploring the interface, including the editor and project windows
- Managing project builds and dependencies
- Creating new Java classes
- Refactoring code
- Using templates
- Using breakpoints and watch expressions
- Updating apps with Instant Run
- Using Git for version control
Skill Level Beginner
Q: This course was updated on 04/27/2017. What changed?
A: New videos were added that highlight the new features introduced in Android Studio 2.3. In addition, the following topic was updated: update apps with Instant Run.
—————————— Tech Support Tips/Troubleshooting/Common Issues ——————————
Wrong Pin Definition
If you are using the Adafruit example code, make sure that you wire and define the pins correctly. This is a common issue; otherwise, the code will not control the RGB LEDs correctly. So if you see the following in the pin definitions but hooked up the connections based on our tutorial:
1.) Make sure to change:

#define CLK 8 // MUST be on PORTB!

to:

#define CLK 11 // MUST be on PORTB!
2.) Also, make sure to change:

#define LAT A3

to:

#define LAT 10
You will probably see this:
If you jiggle the wires, you might see this:
Wrong Pin Connections?
Make sure you wire the Arduino to the LED matrix correctly. You can blow out the LED drivers on the RGB matrix if the wiring is incorrect. There was one case where a customer had flipped the ground and blue pins, blowing out the chip. Adafruit also has a tutorial; the connectors are the same, but they show one image of the cable connector and another of the connector on the LED matrix, which probably confused the customer into flipping the connections.
Hardware Hookup w/ the Arduino Mega
Most of the wire connections are the same as stated in the table for the hardware hookup https://learn.sparkfun.com/tutorials/rgb-panel-hookup-guide/hardware-hookup with the exception for pins R0, G0, B0, R1, G1, and B1 . The reason is due to the way those pins are defined in the library [explained on line 50 of the RGBmatrixPanel.cpp file https://github.com/adafruit/RGB-matrix-Panel/blob/master/RGBmatrixPanel.cpp ]. Make sure that you are connecting to the correct pins:
Panel Pin Label <=> Panel Connector Pin # <=> Arduino Uno Pin <=> Arduino Mega Pin
R0 <=> 1 <=> 2 <=> 24
G0 <=> 2 <=> 3 <=> 25
B0 <=> 3 <=> 4 <=> 26
R1 <=> 5 <=> 5 <=> 27
G1 <=> 6 <=> 6 <=> 28
B1 <=> 7 <=> 7 <=> 29
If you do not connect to the pins correctly for the Arduino Mega, the LED Panel will not light up correctly. You might see LEDs turning on randomly.
When testing the RGB LED panel with the RGB Panel example code, everything was fine except when I unplugged and re-plugged a wire between the RGB LED matrix and one of the Arduino I/O pins. I also saw some flickering in the pixels when the pin was not seated completely in the Arduino Uno's female header pins. I doubt the code would damage the RGB LED matrix panel, but if you see this, make sure that the connections are solid. If the connections and wires are good, the panel may have been blown out by a wrong connection.
When uploading, you might see the LED matrix scroll random colors like below. This is normal:
Only Seeing Mostly Red?
You might not be powering the LED matrix properly. Check your power supply. If the LED display is not being powered sufficiently, it will pull some power from the microcontroller to partially light up the LEDs.
Additional Project Examples
Hover Pong with ZX Sensors [ https://github.com/ShawnHymel/HoverPong/tree/master/HoverPong ]
Why does the clock have to be on PORTB?
Is there a page for the code for the etch-a-sketch? I see that there is sample code for the Serial Paint, but I can’t seem to find anything about the 2 knob etch-a-sketch? Would appreciate some help, thanks!
After the discovery of the testis-determining gene SRY, many scientists shifted to the theory that the genetic mechanism that causes a fetus to develop into a male form was initiated by the SRY gene, which was thought to be responsible for the production of testosterone and its overall effects on body and brain development. In some sandboas and in rosy boas, the spurs of males are small but visible, while females have no visible spurs. Once viewed simply as an impediment to fertilization, recent research indicates the zona pellucida may instead function as a sophisticated biological security system that chemically controls the entry of the sperm into the egg and protects the fertilized egg from additional sperm. In most cases, the size and shape of spurs are an indicator of the sex of a boa or python, but this is not always an accurate means to determine sex with certainty. The presence of Y chromosome genes is required for normal male development. The probe passes inside the inverted hemipenis of a male snake. Deficiencies occur more commonly in pregnancy because of the increased nutritional needs of the baby.
In general, however, differences in tail length are not a satisfactory means of determining sex. The increased internal pressure in the base of the tail generated by the pressure of the thumb causes the hemipenes of the males to pop out. A spur typically consists of a spur base which is capped with a spur claw; species with spurs have one spur at each side of the anal scale. Were comparison possible, the everted hemipenes of male blood pythons are bigger, with more structure and more vascularization than the relatively smooth, pale, everted hemipenial homologs of females. Reviewed by Mary D. Aside from timing it right and praying, you can do little to influence gender during conception. The measure of the penetration of the probe into the base of the tail is the number of subcaudal scales spanned by that distance, counted from the vent posterior to the scale at the level of the maximum penetration of the probe. Cravings for ice or laundry starch suggest an iron deficiency, and cravings for chalk or dirt may indicate the need for more essential fatty acids. But this work shows that the activity of a single gene, FOXL2, is all that prevents adult ovary cells turning into cells found in testes. The use of an appropriate-diameter clear-plastic tube is a very safe and appropriate means to restrain delicate snakes, biting snakes, and venomous snakes. In general, female hemipenial homologs do not evert, as do male hemipenes. In an interview for the TimesOnline edition, study co-author Robin Lovell-Badge explained the significance of the discovery. Sometimes keepers will choose to use a probe that is too narrow on the incorrect assumption that this will make probing the snake easier. If a woman craves sour, salty, spicy or protein-rich foods, some people claim that she must be having a boy.
Determining the Sex of Snakes Dave and Tracy Barker The Sex Determination of Snakes It's hard to image today that only a few decades ago, most keepers had no idea what was the gender of the snakes they kept. However, most snakes do not have physical characteristics that will visually identify their sex. These paired funnel-shaped structures become increasingly narrow, ending in connective ligaments that continue toward the tip of the tail to insert on posterior subcaudal vertebrae, as do hemipenis retractor muscles. As pictured here, the probe is inserted into the cloaca and directed against the posterior wall of the cloaca to determine if it can be passed into the tail, and if so, how far. One mistake made by many keepers is to use a probe that is too small in diameter. Sexually dimorphic characters Throughout the snake kingdom, most species show only minor, if any, external difference between the sexes. Simple blood tests can determine whether you have any nutritional deficiencies. When a probe is inserted into the tail of a male snake, it is actually being inserted into a space surrounded by the external surface of the hemipenis. Of course, if no combat is observed, they could be a male and female or both females. This apparently happens with some regularity, and whenever sexing snakes, especially pythons, we have found it strongly advisable to probe both sides of the cloaca. Usually this does not affect the outcome, but in some cases and some species, the determination made by a small probe is uncertain, as a small probe may pass relatively deep into the hemipenial homologs of a female. During pregnancy, women might crave specific foods, consciously or unconsciously, as a response to emotional needs.
Video about determining the sex of the baby:
Sex Determination: More Complicated Than You Thought
XX means a girl and XY means a boy. Prior to that discovery, most people had no certain way to explain what determines the sex of a baby, and there is no research to back up folk methods of predicting or influencing it. Find out what all those Xs and Ys mean by watching this short video.
|
OPCFW_CODE
|
Audience and Scope changed after adding Sites.Read.All
I am using Microsoft AD login in my application, and after I added Sites.Read.All to my scopes to access SharePoint data via the Microsoft Graph API, the audience (aud) and scope (scp) in my JWT token suddenly changed.
I need my old audience and scope back, and the token should still work with the Microsoft Graph API.
OLD login payload
issuer: 'https://login.microsoftonline.com/7xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxd5/v2.0',
clientId: '83xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxbb',
redirectUrl: 'msauth.com.application.redirect://auth/',
additionalParameters: { prompt: 'login' },
scopes: [
'openid',
'profile',
'email',
'offline_access',
'api://8xxxxxxx-xxx-xxxxxxxxxb/application.read', //audience scope
],
androidAllowCustomBrowsers: ['chrome'],
OLD JWT Token decode
"aud": "api://8xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxbb",
"scp": "application.read",
NEW LOGIN payload
issuer: 'https://login.microsoftonline.com/7xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxd5/v2.0',
clientId: '83xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxbb',
redirectUrl: 'msauth.com.application.redirect://auth/',
additionalParameters: { prompt: 'login' },
scopes: [
'openid',
'profile',
'email',
'offline_access',
'Sites.ReadWrite.All', // <------ Added this in my scope
'api://8xxxxxxx-xxx-xxxxxxxxxb/application.read', //audience scope
],
androidAllowCustomBrowsers: ['chrome'],
NEW JWT Token decode
"aud": "0000xxxxx-xxxx-xxxx-xxxx-xxxxxxx00000",
"scp": "email openid profile SharePointTenantSettings.Read.All Sites.Read.All Sites.ReadWrite.All User.Read",
If you put 'api://8xxxxxxx-xxx-xxxxxxxxxb/application.read' before 'Sites.ReadWrite.All', you would find the JWT token has the scope "scp": "application.read". But because you put 'Sites.ReadWrite.All' before 'api://8xxxxxxx-xxx-xxxxxxxxxb/application.read', you got the scope ...SharePointTenantSettings.Read.All Sites.Read.All... instead.
This is because an access token can only contain permissions for one API (one resource).
'Sites.ReadWrite.All' is shorthand for https://graph.microsoft.com/Sites.ReadWrite.All, whose aud is the Microsoft Graph API, while the aud for your custom API permission application.read is api://8xxxxxxx-xxx-xxxxxxxxxb. To call both APIs, you need to acquire two separate access tokens, one per resource.
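Since each access token is bound to a single resource, one way to reason about a scope list is to partition it by resource before requesting tokens. A minimal sketch of that idea (the helper and its defaulting rules are my own illustration, not part of MSAL or any Microsoft library):

```python
# Hypothetical helper: group requested scopes by the resource (audience)
# they belong to, since Azure AD issues one access token per resource.

OIDC_SCOPES = {"openid", "profile", "email", "offline_access"}

def partition_scopes(scopes):
    """Return a dict mapping resource -> list of scopes for that resource.

    Scopes with an explicit resource prefix (e.g. 'api://.../application.read')
    keep that prefix as their resource; bare scopes such as 'Sites.ReadWrite.All'
    default to Microsoft Graph. OIDC scopes are not resource-bound and are
    skipped here (they ride along with every token request).
    """
    by_resource = {}
    for s in scopes:
        if s in OIDC_SCOPES:
            continue
        if "://" in s:
            resource, _, scope = s.rpartition("/")
        else:
            resource, scope = "https://graph.microsoft.com", s
    # one token request per key of the resulting dict
        by_resource.setdefault(resource, []).append(scope)
    return by_resource

requested = [
    "openid", "profile", "email", "offline_access",
    "Sites.ReadWrite.All",
    "api://8xxxxxxx-xxx-xxxxxxxxxb/application.read",
]
print(partition_scopes(requested))  # two resources -> two token requests
```

Requesting the two groups separately (one acquireToken call per resource) yields one token with aud api://8xxxxxxx-xxx-xxxxxxxxxb and scp application.read, and another with aud Microsoft Graph.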
|
STACK_EXCHANGE
|
Ever the fun question, but a justifiable one.
There tend to be two, maybe three, problems with applying multiple patches:
1) The patches targeting different versions of the ROM.
Headered versus headerless dumps tend to be the big one on older systems, where there were different dumping standards. On newer systems, people explode the games into their component files and then put them back together (or, worse still, for things like the PS1, one patch targets the original scene release made in some long-forgotten format while another targets what someone might produce today by popping the CD in a drive and ripping it with a modern tool), so you have that to worry about as well.
Obviously you also have regions and revisions (think https://www.zeldaspeedruns.com/oot/generalknowledge/version-differences), with such things tending to make for some radical internal differences; a dumb patching format that is little more than "go here, change this data to this" never stood any hope in such a scenario. But that is less of an issue for most things around here.
So yeah, you have to be aware of what goes where for headers and target versions. That might involve either modifying patches (don't, unless you already know what you are doing) or maybe removing a header, applying one patch, adding the header back, applying another, removing it again... until you are done with the list.
2) Hackers either use the same area (free space is often at a premium on many systems, and not all hacks will be made with all other hacks and potential future hacks in mind) or patch the same thing. Conflicts, or collisions, is the term of choice here:
http://www.romhacking.net/utilities/1386/
http://www.romhacking.net/utilities/1268/
http://www.romhacking.net/utilities/1038/
The possible third is a variation on 2. Say you edited a font in a game, and I come along and want to do more. Somewhere in the game there is likely a pointer from which all of the font locations are derived. If via that pointer I shuffle the font far later in the game, where I found some free space (maybe I want to add a bunch more characters), you can edit the font's original location all day long, but as the game now fetches it from somewhere else, your edits will have no effect. This is also one of the problems with the "collision checker" tools noted above: if neither hack touched that base pointer, they will report no collision all day long, but the hacks still conflict.
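To make the "go here, change this data to this" model concrete, here is a hedged sketch of record-based patching (roughly what formats like IPS encode; this is illustrative, not a real IPS parser), showing both why a header shifts every write and why two patches touching the same offset collide:

```python
# A "dumb" patch is little more than a list of (offset, new_bytes) records.

def apply_patch(rom: bytearray, records, header_size=0):
    """Apply (offset, data) records to a ROM image in place.

    header_size shifts every write: a patch made against a headerless dump
    applied to a headered dump (or vice versa) writes to the wrong place,
    which is problem 1 above.
    """
    for offset, data in records:
        pos = offset + header_size
        rom[pos:pos + len(data)] = data
    return rom

def find_collisions(patch_a, patch_b):
    """Return byte offsets written by both patches -- problem 2 above."""
    written_a = {off + i for off, data in patch_a for i in range(len(data))}
    written_b = {off + i for off, data in patch_b for i in range(len(data))}
    return sorted(written_a & written_b)

rom = bytearray(16)
translation = [(4, b"\xAA\xBB")]
font_tweak = [(5, b"\xCC")]  # overlaps the translation at offset 5
print(find_collisions(translation, font_tweak))  # [5]
```

Note that a collision checker like this only sees overlapping writes; it cannot see the pointer problem described above, where two non-overlapping patches still interact through a shared base pointer.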
Hackers, however, know all this sort of thing, so they will often be aware of what the popular hacks are and make theirs work with them, or indeed release hacks that further improve or tweak other hacks ("I like this translation but not its font, so apply the translation and then apply this" sort of thing).
To that end, read the readmes/release notes/NFOs that come with the patches you are looking to stack. Many will have been designed to work with others, might suggest an order of application, or sometimes even ship separate versions to deal with it. You can also try it yourself; most of the time a bit of common sense works well (if, say, a translation does one thing and you want another patch to further tweak it, then the further tweak probably ought to go second, lest the translation overwrite the tweak's work).
|
OPCFW_CODE
|
Hello, I am Prakash Prasain.
I graduated from Chosun University, South Korea, in August 2014 with a major in Information and Communication Engineering; during my master's, I enjoyed working as a Research Assistant.
I completed my Bachelor's Degree in Computer Engineering in 2007 at Tribhuvan University, Nepal. After completing my undergraduate degree, I started working as an Assistant Lecturer at the same university; my responsibilities were lecturing C/C++ programming to undergraduate students and instructing them in the corresponding labs. After one year, I took a job as a Software Developer at one of the prominent IT companies in Nepal, where my responsibilities included research, development, and maintenance of software applications, database applications, and servers. After gaining more than four years of professional experience, I decided to pursue a master's degree and got the opportunity at Chosun University, South Korea.
I enjoy the simple things in life. Being happy is a state of mind, and I don't think people should settle for less than they deserve. I always look toward the next best thing. I believe that when we face challenges in life that are far beyond our own power, it's an opportunity to build on our faith, inner strength, and courage. I truly believe everything happens for a reason, even if some things are terrible or unwanted.
Summary of Skills
- 5+ years’ experience with PHP in a LAMP based environment and web application development
- An expertise in PHP, MySQL, HTML5 and CSS3
- Worked on various Magento community e-commerce websites
- Developing custom modules in Magento
Master’s Degree in Information and Communication Engineering
Chosun University, Gwangju, South Korea
Attended year: 2012-2014
Bachelor’s Degree in Computer Engineering
Tribhuvan University, Kathmandu, Nepal
Attended year: 2003-2007
Sr. Software Developer
Worldlink Communications Pvt. Ltd., Kathmandu, Nepal. (2016 – Present)
- Feature enhancement of web-based projects and WorldLink's in-house software applications based on PHP, JavaScript, AJAX, MooTools (a JavaScript framework), CSS, and Oracle/MySQL/Microsoft SQL Server databases
- Develops software solutions by studying information needs; conferring with users; studying systems flow, data usage, and work processes; investigating problem areas; following the software development lifecycle
- Determines operational feasibility by evaluating analysis, problem definition, requirements, solution development, and proposed solutions
- Supports and develops software developers by providing advice, coaching and educational opportunities
Bluewisesoft, Seoul, South Korea. (2015 – 2016)
- Developing and maintaining e-commerce web applications based on Magento
- Developing custom extensions on Magento
- Developing and customizing Magento themes
- Provide technical support related to M2E Pro, Embedded ERP to clients
Worldlink Communications Pvt. Ltd., Kathmandu, Nepal. (2008 – 2012)
- Researching, designing, developing, and maintaining in-house software applications, database applications, and servers.
- Feature enhancement of web-based projects and WorldLink's in-house software applications based on PHP, JavaScript, AJAX, MooTools (a JavaScript framework), CSS, and Oracle/MySQL/Microsoft SQL Server databases.
- Productivity and efficiency analysis, scope analysis, risk analysis and manpower planning.
- Preparation of documentation, reports and presentation.
Advanced College of Engineering and Management, Kathmandu, Nepal. (2007 – 2008)
- Lecturing C/C++ to the undergraduate students of computer engineering and instructing them in their corresponding lab.
- Supervising C/C++ projects for entire B.E. I/I (Computer and Electronics) class.
- Global IT Scholarship provided by Korean Government: 2012-2014
- 50% Tuition Scholarship provided by Chosun University: 2012-2014
- Partial Tuition Scholarship provided by Tribhuvan University, Nepal: 2003-2007
- Date of Birth: 8th September, 1983
- Nationality: Nepalese
- Marital Status: Married
|
OPCFW_CODE
|
Subtracting elements in an array in MATLAB

MATLAB's arithmetic operations act element-wise on arrays: C = A - B subtracts array B from array A by subtracting corresponding elements. The sizes of A and B must be the same or be compatible; if they are compatible, the two arrays implicitly expand to match each other. In particular, if A or B is a scalar, the scalar is combined with each element of the other array (the documentation for minus includes an example of subtracting a scalar from a matrix, complete with the text "the scalar is subtracted from each entry of A"). C = minus(A, B) is an alternate way to execute A - B, but it is rarely used. gsubtract (generalized subtraction) takes two matrices or cell arrays and subtracts them in an element-wise manner. Subtracting two arrays of entirely different sizes element by element is impossible; you either have to resize one of them or subtract some overlapping part.

The difference is returned as a numeric array of the same size and class as the inputs, unless the input is logical, in which case the result is of type double. MATLAB represents floating-point numbers in either double-precision or single-precision format and supports 1-, 2-, 4-, and 8-byte storage for integer data; if you use minus with single-type and double-type operands, generated code might not produce the same result as MATLAB (see "Binary Element-Wise Operations with Single and Double Operands" in MATLAB Coder). Precedence rules determine the order in which MATLAB evaluates an expression; within each precedence level, operators have equal precedence and are evaluated from left to right. You can build expressions that use any combination of arithmetic, relational, and logical operators.

Common tasks built on element-wise subtraction:

- Subtracting consecutive elements of a vector: diff(x) returns the difference of adjacent elements; for a = [0 1 4 10 18] it subtracts the first element from the second, the second from the third, and so on. Alternatively, a loop can subtract each element from the result of the previous calculation.
- Subtracting each element of one array from every element of another (say A is 100x2 and B is 5x2): a loop is often a good solution, or implicit expansion after reshaping. Subtracting every element from each other within a row of n elements yields n*(n-1) ordered pairwise differences; for 6 elements the answer is stored in a 6x5 matrix, since each element pairs with the 5 others.
- Subtracting elements within constraints: after subtracting, values can be clipped so they stay within some particular range [min1, max1].
- Financial time series subtraction (minus on fints objects): if one object is to be subtracted from another, both objects must have the same dates and data-series names, although the order need not be the same.
- Background subtraction, also known as foreground detection: a technique in image processing and computer vision wherein an image's foreground is extracted for further processing, such as object recognition. Image arithmetic, the implementation of standard arithmetic operations (addition, subtraction, multiplication, and division) on images, has many uses both as a preliminary step in more complex operations and by itself.
- The Simulink Sum block (identical to the Add, Subtract, and Sum of Elements blocks) performs addition or subtraction on scalar, vector, or matrix inputs, and can also collapse the elements of a signal and perform a summation.

Two quirks to remember when translating between MATLAB and C: MATLAB indexes the first element of an array with 1 instead of 0, and it organizes matrices in column-major order while C uses row-major order. MATLAB also extends arrays without programmer intervention, although in an assignment A(I) = B the number of elements in B and I must be the same.
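The behaviors above (element-wise subtraction, scalar expansion, implicit expansion, consecutive differences) can be sketched in NumPy, whose broadcasting closely mirrors MATLAB's compatible-size expansion. This is an analogue for illustration, not MATLAB code:

```python
import numpy as np

# Element-wise subtraction: corresponding elements, same (or compatible) shapes.
A = np.array([[1, 2], [3, 4]])
B = np.array([[10, 20], [30, 40]])
print(B - A)   # [[ 9 18] [27 36]]

# Scalar expansion: the scalar is subtracted from each entry.
print(A - 1)   # [[0 1] [2 3]]

# Implicit expansion / broadcasting: a 100x2 array minus a 5x2 array fails
# directly, but inserting a singleton axis gives every pairwise row difference.
A2 = np.arange(8).reshape(4, 2)   # stand-in for the 100x2 array
B2 = np.arange(4).reshape(2, 2)   # stand-in for the 5x2 array
pairwise = A2[:, None, :] - B2[None, :, :]   # shape (4, 2, 2)

# Consecutive differences, like MATLAB's diff:
a = np.array([0, 1, 4, 10, 18])
print(np.diff(a))   # [1 3 6 8]
```

The `[:, None, :]` trick is the NumPy counterpart of reshaping one operand so MATLAB's implicit expansion produces all pairwise differences in one vectorized operation instead of a double loop.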
|
OPCFW_CODE
|
Take your reporting capabilities to the next level with the introduction of Excel pivot tables in Acumatica 6. Pivot tables are one of the key business intelligence tools in Excel, and now you can create and share them within Acumatica without the extra steps of exporting or importing your data.
3 Main Parts of a Pivot Table:
- Source or raw data you want to analyze.
- Design a pivot table based on the data.
- Use the pivot table and the visualization tools to gather valuable business information.
Why use pivot tables?
- You have access to 100% of your data. All of your company’s data from the Organization, Projects, Finance, and Distribution modules is exposed as a data source through Generic Inquiries (GI) using Data Access Classes (DAC).
- The values in the pivot table provide drill-down capability to the detail documents that are the source of the data.
- Access rights to pivot tables are managed through the standard Acumatica role-based security module.
In Acumatica, the same parts of a pivot table apply:
- Use an existing (or create a new) Generic Inquiry from the Site Map to use as a raw data source
- Design a pivot table based on the Generic Inquiry
- Share the pivot table to all authorized users and let them use the data visualization tools in order to turn raw data into useful business information
Example: Pivot Table to analyze sales performance measures
We are going to create a pivot table to analyze sales performance of Sales Persons per quarter.
- Select the Sales Profitability Analysis Generic Inquiry from the A/R>EXPLORE menu. This GI provides the source data that we need.
- Start the pivot table design by selecting Pivot Tables from the CUSTOMIZATION menu in the GI.
- Acumatica takes you to the Pivot Table design page with the GI name preselected in the Screen ID field. This is the actual screen (SM208010) for all pivot tables in the system: SYSTEM>Customization>Pivot Tables
- Click the plus sign to add a new pivot table to the GI and get a clean slate. Notice that the Fields pane is filled with the names of the GI columns. Just drag and drop these fields to one of four possible destination panes (Filters, Rows, Columns, or Values) in order to create your pivot table.
- We will start by creating a simple pivot table that shows the powerful capabilities provided by the tool.
Type a name in the Name field and click the save button.
Drag and drop these 3 fields from the Fields pane:
Sales Pers. Field to the Rows pane
Month field to the Columns pane
Net Sale field to the Values pane
- Click the VIEW PIVOT button to display the pivot table.
- The Sales Pers. field was selected as Rows, and it is retrieving the list of all names available from the data source.
- The Month field, selected as Columns, dynamically created one column for each available month in the data.
- The Net Sales field provided the values for each Sales Person and Month. Notice in the previous screen that we used the default Aggregate of SUM.
- Each aggregated value in the cells is a hyperlink that allows you to drill down to the actual transactions that provided the data.
- Note: click the value of $3,900 for Jason Mendenhall under 01-2016 to display the underlying orders. From there, you can keep drilling down to the order detail.
- You can sort aggregated data.
- Note: you can also sort the right-most column of Totals in descending order to instantly identify your top performers.
- Go back to the pivot table and drag and drop some fields to the Filters pane.
- Click the VIEW PIVOT button to display the pivot table.
- Drag and drop the Customer field from the Filters section to the Rows section, to the right of the Sales Pers. field. The Customers are now grouped by Sales Person, which gives you full visibility of their sales and the ability to drill down to the specific orders.
Want to know more about Pivot Tables in Acumatica 6?
Here are a couple of useful links:
If you have questions or need some assistance, visit our support page for more help.
|
OPCFW_CODE
|
Call for Code Challenge: Australian Wildfires Submission
Between July 2019 and March 2020, Australia experienced one of its worst wildfire seasons. To draw attention to the problem, the Call for Code: Australian Wildfires challenge started in November 2020. The competition's objective is to forecast daily wildfires by region for February 2021. Contest information is provided here: https://community.ibm.com/community/user/datascience/blogs/susan-malaika/2020/11/10/call-for-code-spot-challenge-for-wildfires
The competition provides a stack of CSVs for the competitors: wildfires, vegetation, and weather. The hosts compiled these datasets by aggregating petabytes of raster and vector data, so it's essential to understand how they compressed the data before using it for modeling.
- Historical Wildfires(dt: 2005–2021Jan.)
The dataset contains daily statistics on wildfires by region from 2005 to 2021. It includes the dependent variable, 'estimated_fire_area,' which I will need to predict for February 2021. Multiplying the scan and track values from the MCD14DL dataset returns the estimated fire area. MCD14DL is a tabulated dataset created from the MODIS thermal-anomalies raster images. The scan and track values account for the fact that pixel size grows with distance from the satellite: a MODIS image at nominal 1 km resolution is not uniformly 1 km, and pixels farther from the satellite cover larger areas. The areas are then grouped and summed by day and region.
- Historical Weather(dt: 2005–2021)
The weather dataset contains daily mean, min, max, and standard deviation of weather variables (precipitation, humidity, soil water content, solar radiation, temperature, and wind speed). The original dataset comes from ERA5, a global reanalysis of recorded climate observations. I believe the hosts aggregated it by day and region and compressed it to a CSV.
- Vegetation Index(dt: 2005–2021Jan.)
The vegetation dataset contains monthly vegetation values (min, max, mean, variance) for each region. The original data, MOD13Q1, includes the NDVI bands, which are then, once again, aggregated by region.
The above gif represents the NDVI seasonality in Australia for 2020. I included this gif to show how massive the original datasets are before compression. Compressing by aggregation has shortened petabytes of data to a workable CSV. However, this has also caused a loss of granularity.
I used the weather and vegetation means, log-scaled wildfires, and a feature-engineered surface area as my features.
- Weather & Vegetation Means
Due to compression, I assumed the mean distributions of weather and vegetation were uniform per region. To elaborate, the average precipitation, humidity, soil water content, etc., are treated as the same at any point within a selected region. I'll address this incorrect assumption in my future steps.
- Log Scaled Wildfires
Wildfires, like most natural disasters, follow a power-law-like distribution: most days see no fires at all, but when fires do occur they can be huge. I log-transformed the areas to get a more normal-looking distribution, and I'll exponentiate after predicting.
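A minimal sketch of this transform; the author says "log-transformed," and log1p/expm1 is one common choice when zero-fire days are present (an assumption on my part, not stated in the post):

```python
import numpy as np

# Fire areas are heavy-tailed: mostly zeros and small values, occasional huge ones.
areas = np.array([0.0, 0.3, 1.2, 0.0, 350.0, 4.1])

# log1p handles the zero-fire days that a plain log cannot.
log_areas = np.log1p(areas)

# After predicting in log space, invert with expm1 to get areas back.
recovered = np.expm1(log_areas)
print(np.allclose(recovered, areas))  # True
```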
- Surface Area
The wildfire area at time t has a strong correlation with the area at time t+1. I think this relationship is a bit indirect: fire expansion correlates with the fire's surface coverage more than with the fire area itself. Imagine watching a yule log on Christmas morning; the fire expands along the surface of the existing fire rather than in proportion to its area.
It might be impossible to find the actual surface area because the wildfires were totaled by region. However, I can make a few assumptions about the area and create two features, Surface Area Conglomerated and Surface Area Separated.
The first is to assume the fire area is conglomerated into one giant fireball. If the area is a square, then the surface area will be four times the area’s square root. Since four is a constant, I’ll drop it and leave the square root of the area.
The second assumes the total area is made of separated, EQUAL-sized pixels, with no pixel touching another. The surface area then follows directly, and if there are no fires, the surface area is 0.
Maybe some combination of these two features can generalize the actual surface area.
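Under the stated assumptions, the two features reduce to simple functions of the daily area and, for the separated case, a pixel count; the function names here are hypothetical, not from the submission:

```python
import numpy as np

def surface_conglomerated(area):
    """One square fireball: perimeter = 4 * sqrt(area); drop the constant 4."""
    return np.sqrt(area)

def surface_separated(area, n_pixels):
    """n equal, non-touching square pixels, each of area (area / n):
    total perimeter = n * 4 * sqrt(area / n) = 4 * sqrt(n * area).
    Dropping the constant leaves sqrt(n * area); zero when there is no fire."""
    return np.sqrt(n_pixels * area)

print(surface_conglomerated(100.0))   # 10.0
print(surface_separated(100.0, 4))    # 20.0
```

As the post suggests, the two values bracket the truth: a real fire front is rougher than one solid square but less fragmented than fully isolated pixels, so some combination of the two may generalize the actual surface.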
The objective of the challenge was to forecast daily fire areas by region for February 2021. The last day of weather statistics I had was January 18, so I needed to output a minimum of 41 days (January 18 + 41 days = February 28).
Before I can input the data into a convolutional neural network, I needed to slice the training set into sequences where the input and output indices incrementally increase by 1. Envision a sliding window, capturing the input and output steps as the window slides by 1.
The original dataset, made up of days and features, should return two matrices X and y.
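The sliding-window step can be sketched as follows; the window lengths are placeholders (the post uses 120 input days and 41 output days), and taking the target from the first column is my assumption:

```python
import numpy as np

def make_windows(data, n_in, n_out):
    """Slide a window over `data` (days x features), stepping by 1 day.

    Returns X of shape (n_windows, n_in, n_features) and
    y of shape (n_windows, n_out), taken from the target (first) column.
    """
    n_days = data.shape[0]
    X, y = [], []
    for start in range(n_days - n_in - n_out + 1):
        X.append(data[start : start + n_in])
        y.append(data[start + n_in : start + n_in + n_out, 0])
    return np.array(X), np.array(y)

data = np.arange(20, dtype=float).reshape(10, 2)  # 10 days, 2 features
X, y = make_windows(data, n_in=4, n_out=2)
print(X.shape, y.shape)  # (5, 4, 2) (5, 2)
```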
I utilized a Dilated Convolution Neural Network very similar to the WaveNet model that predicts soundwave sequences.
I windowed my dataset with an input step of 120 days and an output of 41. I "slingshotted" the 41 days, predicting them all in one shot, which means the output steps are independent of each other (t+1 is independent of t+2). That assumption seems faulty, but the model can still produce some "good" results. I'll make sure to include an autoregressive element in my future steps.
The nodes of the Conv1D layers are calculated from neighboring inputs that shift incrementally. Padding prevents the layers from losing their original shape, and dilation reduces the number of hidden layers required: within the hidden layers, inputs are skipped at the dilation rate, and increasing the dilation rate per layer reduces the total number of layers needed.
I don’t believe the grey nodes are calculated during the fit, but I included them to represent the model architecture. The orange boxes with 0s show the padding used to retain the original shape.
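One way to see how dilation reduces the depth needed: the receptive field of stacked causal Conv1D layers grows by (kernel_size - 1) * dilation per layer, so WaveNet-style doubling dilations cover a long history with few layers. A small helper (my own illustration, not code from the submission):

```python
def receptive_field(kernel_size, dilations):
    """Receptive field of stacked causal Conv1D layers with the given dilation rates."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# Doubling dilations, WaveNet-style: 8 layers cover 256 time steps...
print(receptive_field(2, [1, 2, 4, 8, 16, 32, 64, 128]))  # 256
# ...whereas undilated layers would need 255 of them for the same coverage.
print(receptive_field(2, [1] * 255))                       # 256
```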
For my next steps, I plan to incorporate an autoregressive element into the model. I would also like to reaggregate the original raw data (wildfires, weather, vegetation) to administrative level 2. Reaggregating to smaller shapes will provide a more granular dataset while still compressing the enormous raster images.
|
OPCFW_CODE
|
- Name: Dr. Intisar Abed Al-Majied Abed Al-Sayed
- Academic rank: Associate Professor
- Qualification: Ph.D.
- Specialization: Control and Computer Engineering
- University: University of Technology - Iraq
- Phone-Ext: 2461
- CV: c.v
DEGREES
- B.Sc. (1986) in Control and Systems Engineering, Department of Control and Systems Engineering, University of Technology, Baghdad, Iraq. (Ranked first in the department and third in the university.)
- M.Sc. (1993) in Electronics and Communication Engineering, Department of Electronics and Communication Engineering, College of Engineering, Al-Nahrain University (formerly Saddam University), Baghdad, Iraq. Thesis title: Design of High Frequency OTA-C Filters Utilizing Optimization Techniques.
- Ph.D. (2000) in Control and Computer Engineering, Department of Control and Systems Engineering, University of Technology, Baghdad, Iraq. Average: very good. Thesis title: Genetic Algorithms-Based Intelligent Control.
October 2010 to present: Head of the Software Engineering Department.
Feb 2004 to present: Associate professor at Al-Isra Private University, College of Information Technology, Amman, Jordan.
Graduating projects (head),
Course Plans and Curriculum (head-CS, head-SE both undergraduate and postgraduate studies),
Study and Learning Environment (head),
Academic Advisor for graduate students (head),
Academic Promotion (member).
Requirements of new Faculty Building (university level-head)
Physics Lab Instruments and Devices (head)
Tenders for fingerprint devices (head)
Sep. 2000 to Feb 2004: Lecturer (for undergraduate and postgraduate studies) at the Control & Computer Engineering Department, and external lecturer at the Department of Software Engineering, University of Technology. Subjects: Control Engineering, Microprocessors, Computer Architecture, Software Engineering, Computer Control, and Intelligent Control.
External Advisor at Al-Rafidain Company /The Ministry of Electricity.
Prize: Awarded by the Ministry of Higher Education in Iraq (Central Examination) as best academic (teaching computer control) for engineering students.
Sep. 1996 to 2000: Lecturer and Ph.D. student at the Department of Control and Systems, University of Technology, Baghdad, Iraq.
Sep. 1994 to 1995: External lecturer at the Electronics & Communication Engineering Department, College of Engineering, Al-Nahrain University. The subject was Structured Programming in Pascal (with lab); also supervisor of the microprocessor lab.
Sept 1990 to Feb 1993: Engineer at the Department of Control and Systems, University of Technology.
M.Sc. student at Al-Nahrain University.
Sep. 1991 to June 1992: External lecturer at the Department of Computer Science, College of Education, Al-Mustansirya University, Baghdad, Iraq. The subject was Computer Architecture.
Jan. 1987 to Sep. 1990: Engineer at the Department of Control & Systems Engineering, University of Technology (labs supervisor and tutorial lecturer; tutorial subjects: microprocessors and advanced control).
|
OPCFW_CODE
|
Note: The content on this page has been deprecated. For the new content, please see tfNgConfigInterface().
The multihome index parameter is declared as unsigned char mHomeIndex.
This function is used to configure an interface with an IP address and netmask (either supernet or subnet). It can be used to assign multiple IP addresses to one interface (multihoming). It must be called before the interface can be used.
An example would be:
errorCode = tfConfigInterface(myInterfaceHandle, inet_addr("18.104.22.168"), inet_addr("255.255.255.0"), 0, 1, (unsigned char)0);
Warning: tfConfigInterface() is deprecated for the first multihome. Please use tfOpenInterface() for the first multihome configuration, instead of tfConfigInterface().
- The device entry as returned by tfAddInterface().
- The IP address for this interface.
- The netmask for this device (subnet or supernet).
- Special flags for this device OR'ed together (see below).
- Number of scattered buffers allowed for each frame being sent out. If scattered buffers are not allowed by the driver, this number should be 1, otherwise it should be greater than 1.
- The index for this IP address for multihoming. Zero must be the first multihome index used.
| Value | Meaning |
|---|---|
| TM_DEV_IP_BOOTP | Configure the IP address using the BOOTP client protocol. tfUseBootp() needs to have been called first. |
| TM_DEV_IP_DHCP | Configure the IP address using the DHCP client protocol. tfUseDhcp() needs to have been called first. If TM_DEV_DHCP_INFO_ONLY is also specified, you must provide the IP address in ipAddress; the DHCP client will retrieve the remaining configuration parameters. |
| TM_DEV_DHCP_INFO_ONLY | If a user has obtained an IP address through some other means (e.g. manual configuration), this flag can be set to send a DHCPINFORM message to obtain the remaining configuration parameters (e.g. default router, DNS servers). The TM_DEV_IP_DHCP flag must also be set and a non-zero ipAddress must be provided. |
| TM_DEV_IP_FORW_DBROAD_ENB | Allow forwarding of IP directed broadcasts to and from this device. |
| TM_DEV_IP_FORW_ENB | Allow IP forwarding to and from this device. |
| TM_DEV_IP_FORW_MCAST_ENB | Allow forwarding of IP multicast messages to and from this device. |
| TM_DEV_IP_NO_CHECK | Allow the Treck stack to function in promiscuous mode, where all packets received by this interface are handed to the application without checking for an IP address match on the incoming interface. |
| TM_DEV_IP_USER_BOOT | Allow the user to temporarily configure the interface with a zero IP address, so that a proprietary protocol can be used to retrieve an IP address from the network. |
| TM_DEV_MCAST_ENB | This device supports multicast addresses. |
| TM_DEV_SCATTER_SEND_ENB | This device supports sending data in multiple buffers per frame. If this flag is set, then buffersPerFrameCount should be bigger than 1. This flag should always be set for SLIP or PPP serial devices. |
- Attempt to configure the device with a broadcast address.
- tfConfigInterface() has not completed. This error will be returned for a DHCP or BOOTP configuration for example.
- Not enough memory to complete the operation.
- Bad parameter, or the first configuration should be for multihome index 0. Note that a zero IP address is allowed for Ethernet if the BOOTP, DHCP, or USER_BOOT flag is on, or for PPP.
- A previous call to tfConfigInterface() has not yet completed.
- Not enough sockets to open the BOOTP client UDP socket (TM_IP_DEV_BOOTP or TM_IP_DEV_DHCP configurations only).
- Another socket is already bound to the BOOTP client UDP port. (TM_IP_DEV_BOOTP or TM_IP_DEV_DHCP configurations only.)
- DHCP or BOOTP request timed out.
- A PPP session is currently closing. Call tfConfigInterface() again after receiving notification that the previous session has ended.
- Error value passed through from the device driver open function.
DHCP or BOOTP configuration
tfConfigInterface() with a TM_DEV_IP_DHCP (respectively TM_DEV_IP_BOOTP) flag will send a DHCP (respectively BOOTP) request to a DHCP/BOOTP server, and will return with TM_EINPROGRESS.
Note that tfUseDhcp() (respectively tfUseBootp()) needs to have been called prior to calling tfConfigInterface(), otherwise the call will fail with error code TM_EPERM. An example of a configuration using the DHCP protocol would be:
errorCode = tfConfigInterface(myInterfaceHandle, (ttUserIpAddress)0, (ttUserIpAddress)0, TM_DEV_IP_DHCP, 1, (unsigned char)0);
Checking on completion of DHCP or BOOTP configuration
- Synchronous check: The user can make multiple calls to tfConfigInterface() to determine when the configuration has completed. Additional calls to tfConfigInterface() will return TM_EINPROGRESS as long as the BOOTP/DHCP server has not replied. tfConfigInterface() will return TM_ENOERROR if the BOOTP/DHCP server has replied and the configuration has completed.
- Asynchronous check: To avoid this synchronous poll, the user can provide a user call back function to tfUseDhcp() (respectively tfUseBootp()), that will be called upon completion of tfConfigInterface(). See tfUseDhcp() (respectively tfUseBootp()) for details.
|
OPCFW_CODE
|
I wrote this blog post on my own. I let ChatGPT create an “improved” version of this blog post. You can read the rewritten version here.
I’ve been trying to use LLMs (large language models like GPT-4) for productivity gains lately. One area of interest is writing blog posts. SEO spam seems to be the area everyone expects to see the first wide commercial application of LLMs. So I thought I might give it a try and generate blog posts with ChatGPT, while trying not to lower the bar too much.
I’ve been experimenting with the approach of drafting my ideas by talking into a mic and letting that be transcribed by the speech recognition model Whisper.
A small python CLI wrapper helps me capture audio and then send it off to OpenAI.
The resulting transcription of notes is then summarized by GPT4.
One benefit of this is that I can speak to Whisper in my native tongue, which lowers the cognitive overhead marginally.
The outcomes so far are as good as could be expected. I see my notes well-represented in the output of ChatGPT. I don’t think a human who has nothing else to go by than the transcription could do a much better job at summarizing my notes. There are two problems with this approach though.
The lack of details
Despite me having been continuously stuck at writing blog posts in the past1, it seems that now that the actual writing part is taken care of, I don’t actually have that much to say without putting in the time to do the research / thinking. While I have quite a lot of insights that are blog-post worthy, each of these needs to be vetted and validated to be more than just a random thought.
The biggest learning for me really is that my problem with regularly writing blog posts is not so much the form as it is the substance.
The blandness of the result
Despite me providing a sample blog post to GPT, the resulting output feels too bland, too GPTesque. It reads too much like reading the output of a copywriter, which is not the style that I am looking for in my blog. This is true at multiple levels:
- It uses different words than I would do. For example it speaks of “complex commands” where I would have used “nontrivial commands”
- It makes claims that I wouldn’t make, e.g. it mentions a “whole new level of convenience and ease in handling various tasks”, which, even if it were true, is not wording I would use for a set of hacked-together scripts.
- The structure of the output is too rigid. The generated blog post reads like it might come straight out of an SAP training handbook.
Now, whether it is simply tough to emulate a writer’s style, a bias in the training data, or a lack of fine-tuning, I am still trying to figure out.
I will definitely use LLMs for writing my blog posts, be it as an editor that gives feedback or a summarizer of notes, or as a way to get a first draft that I can then critique/iterate the hell out of. But it probably will not lead to a much increased release cycle of high quality blog posts. Realizing that it is the substance of the blog posts that is preventing regular blog posts is really quite sobering.
None of this blog post(except where quoted) was written by ChatGPT.
I have more than sixty blog posts sitting in my drafts folder, yet it’s been more than a year since my last published post. ↩
|
OPCFW_CODE
|
I’ve played around with SCM systems (Software Configuration Management) before: CVS, VSS and BitKeeper. Of these three, CVS has in my opinion the most going for it (large, very large, adoption; easily extendable; etc.). Back then (a few years ago now) I had already noticed Subversion, abbreviated SVN. At that moment they had all the plans ready (architecture, etc.) and I think they were at version 0.1 or so. Now that I’m running a decently sized project on CVS, I’m keeping my eyes open for alternatives, and SVN is one of the best candidates (BitKeeper loses out because of its licenses).
One of the best features of SVN (in my opinion) is the branching strategy they’ve taken. Although I already had experience with CVS, other team-members didn’t have that when we switched to CVS. Explaining all the stuff and the command line options is simple enough. But the one thing I’ve noticed that is the hardest to pick up for new users is branching. Of course, you’ll draw some pretty tree-like pictures and start explaining the stuff; everyone says he/she gets it. But when it is put to practice I see enough things going wrong regarding branches.
One of the biggest obstacles, IMHO, is the fact that branches are on another dimension in CVS. People forget to switch between branches and trunk because it’s not obvious on which one they are working. This is not the case in SVN, here a branch is just a full copy of the trunk, originating from a certain revision. You can do stuff like this with SVN:
\Project1
    \trunk
        \src
        \doc
    \branches
        \feature1
            \src
            \doc
        \jilles-playing-ground
            \src
            \doc
This is a huge improvement over CVS and it will be so much easier to explain this to people new to SCM/CVS/SVN. Perfect. Another improvement is very obvious: every change to the repository results in a new revision of the entire repository. This way directories and such can also be versioned.
Why am I still using CVS? Well, I’m still missing some features from SVN, but those will come in time. There is a nice cvs2svn script that I will check out sometime. But paraphrasing Joel Spolsky: a good tactic to convert people to your product is to make it easy to switch back. That’s why I’d like to see a svn2cvs script. Having such a script enables people to easy switch back to CVS once they decide that they like that better (unlikely) or they decide that SVN doesn’t have the features they want yet (more likely that the previous reason).
One other disadvantage of SVN: the tags. Under CVS a branch and a tag are two different beasts. Under SVN (as far as I see it/read it in the docs) they are both the same. Besides the directories “trunk” and “branches” you can create a directory “tags” and essentially create branches. So far so good, thanks to SVN’s shallow copying technique this is fast and doesn’t consume much disk space. My take on a tag is: “constant snapshot of the tree at a certain moment”. Note the word constant. Once I declare version 1.4 of my product and build, package and ship it I do not want the option/feature of changing the 1.4 tag in the repository and thereby creating a difference between the 1.4 version in the tree and the 1.4 version that is installed at the customer.
Of course, an administrator should be able to move that tag around in case of an erroneous tag command but this should not be made easy. But in subversion a tag is tag only because of the way the developer looks at it. I’d like a command in SVN that would enable me to “freeze” a branch: such that once I create a tag (read: branch) I can freeze it and thereby disallowing all developers from committing to that branch, ensuring that version X in the tree is the same version X that I’ll ship.
(Of course, I know that if you create a “tag” and some developer commits changes to it, you can easily back out of those changes, but that would just be mending your wounds instead of preventing the wound ever from taking place.)
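The freeze described above can be approximated with Subversion's standard pre-commit hook mechanism: a script that rejects any transaction touching a path under tags/. Below is a sketch; the svnlook function is a hypothetical stub standing in for the real binary so the example is self-contained, and the repository path and transaction name are made up.

```shell
#!/bin/sh
# Stub of svnlook for demonstration only; a real hook would invoke the
# actual svnlook binary shipped with Subversion.
svnlook() { printf 'U   tags/1.4/src/main.c\n'; }

# Pre-commit hook body: reject any transaction that changes something under tags/
deny_tag_commits() {
  repos="$1"; txn="$2"
  if svnlook changed -t "$txn" "$repos" | grep -q '^[A-Z_]*  *tags/'; then
    echo "tags/ is frozen: commits under tags/ are not allowed" >&2
    return 1
  fi
}

deny_tag_commits /var/svn/Project1 42-1a || echo "commit rejected"
```

In a real repository this logic would live in hooks/pre-commit, which Subversion invokes with the repository path and transaction name as its two arguments; a non-zero exit aborts the commit.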
All in all though, I like SVN (much better than CVS). I’m just biding my time till they reach a version closer to their 1.0 release.
|
OPCFW_CODE
|
endangered reptiles gauteng south africa
- Endangered Reptiles of Africa: The Africa section of this site lists the following reptiles appearing on select endangered species lists: reptiles that dwell in or migrate to any nation or region found on the African continent, and reptiles dwelling in or travelling to the Mediterranean Sea and any islands located in it.
- List of birds of South Africa: South Africa is a large country, ranked 25th by size in the world, and is situated in the temperate latitudes and subtropics. Due to the range of climate types present, a patchwork of unique habitat types occurs, which contributes to its biodiversity and level of endemism. This list incorporates the mainland and nearshore islands and waters only.
- Departments Services: To hunt live animals such as birds, reptiles and mammals, in accordance with Gauteng Nature Conservation Ordinance 12 of 1983, Sections 16, 16A, 17, 18 and 49, Regulation 18.
- South Africa: Unregulated International Trade in Reptiles: A new report, "Plundered - South Africa's cold-blooded international reptile trade", explores loopholes in the Convention on International Trade in Endangered Species (CITES) regulations.
- Ultimate Exotics, SA's Leading Online Reptile Store: Subscribe to Ultimate Exotics Reptiles on YouTube for professional, up-to-date videos on the care and breeding of reptile species, as well as products, cage setups, health, hygiene and disease in reptiles.
- African Reptiles & Venom: Offers theoretical snake courses for the layman, covering snake awareness, how to identify snakes, the myths surrounding snakes, dangerous snakes found in Southern Africa, prevention of snakebite, first aid for snakebite, medical treatment of snakebite, and recognition of allergic reactions for medical professionals.
- Exotic Reptiles: A site for the sale of, and information about, corn snakes in Gauteng, South Africa. A small breeder, started in 2012, hoping to become one of the largest corn snake breeders in SA.
- Mammals of South Africa: Learning about the mammals of South Africa gives you an insight into the animals' behaviour, and SouthAfrica.co.za is an excellent source of information in all 11 official languages of the country.
- Exotic Reptiles for sale in South Africa (Pets to Adopt): Buy or sell reptiles, snakes, lizards, amphibians and more. Search Reptiles in Mzanzi on Public Ads.
- Protected Trees List of South Africa: South Africa has a lot of water problems, and trees that are cut down take many years to grow back to their former size. An exotic tree like a Black Wattle could grow to 10 metres in just a few years, whereas an indigenous tree would take 30+ years (depending on species).
- The status of some rare and endangered endemic reptiles: Baard, E. H. W. (1989). "The Status of Some Rare and Endangered Endemic Reptiles and Amphibians of the Southwestern Cape Province, South Africa." Biological Conservation 49: 161-168.
- Top 20 extinct and endangered animals in South Africa: A list of the most endangered animals in South Africa; in adherence to the words of Albert Einstein, ensuring that animals do not go extinct is part of extending our circle of compassion to embrace all living creatures.
- Snakes of South Africa identification: The southern part of Africa has an estimated 170 snake species and subspecies, fewer than half of which are dangerous. It is not common for people to die from snake bites; South Africa records about 10 deaths annually.
- This tiny venomous snake is South Africa's most endangered reptile (David Moscato, March 17, 2017): The Albany adder is a small, easy-to-miss species of viper, less than 30 cm (just a foot) long.
- Ecoregions (WWF): References include Siegfried (1992) on the conservation status of the South African endemic avifauna, Stuart & Stuart (1995) Field Guide to the Mammals of Southern Africa, and White (1983) The Vegetation of Africa.
- South African Endangered Species: Endangered birds are those facing extinction, and South Africa is home to some of these very special species, which are always incredible to spot in their natural habitat. Many of them are in parks and game reserves, where their population numbers can be monitored and, to some extent, protected.
- Roan: Habitat: mostly lightly wooded savannah and open areas of medium-sized grass, with easy access to surface water. The roan is a rare and endangered antelope species with a patchy distribution in savannah ecosystems south of the Sahara Desert.
- Volunteer with Reptiles (Snakes): The Kinyonga Reptile Centre is a unique facility dedicated to the preservation and conservation of reptiles. As a volunteer you become part of the team and play an active and rewarding role in helping them achieve this goal.
- Gauteng: The wider Gauteng birding region is situated between 1000 and 1800 m above sea level, with a range of different biomes all within comfortable driving distance (1.5 hrs from the city centres), enabling one to pick up in excess of 200 species in a morning's birding.
- Black girdled lizard (May 3, 2018): In South Africa, all members of the genus Cordylus are protected by strict conservation laws, mainly as a result of illegal exporting, and may not be caught or kept as pets without a permit. The black girdled lizard also has a small and restricted habitat, which makes it vulnerable to changes in land-use patterns in its natural range.
- Reptiles for Sale from Breeders Worldwide: Large online selection of captive-bred reptile pets including pythons, boas, colubrids, lizards, amphibians and invertebrates, with discount prices from thousands of breeders on Ball Pythons, Corn Snakes, Kingsnakes, Milk Snakes, Boa Constrictors, Reticulated Pythons, Western Hognose, Leopard Geckos and Crested Geckos.
- The Common Harmless Snakes of South Africa (January 25, 2018): Following on the success of the Common Snakes of Durban post, this covers a much broader spectrum: the medically insignificant snakes, including harmless (non-venomous) and common mildly venomous snakes, that the average South African may come across day to day.
- Smaug in trouble: The threatened 'dragon lizards' of South Africa (August 21, 2017): No stranger to preserving South Africa's rare reptiles, the EWT has also been pushing to have the venerable Smaug recognised.
- Exotic Reptiles for sale in South Africa (Dog Breeders Gallery): Buy or sell reptiles, lizards, amphibians and more.
- Wildlife in South Africa: Several Big Five reserves protect the more charismatic large mammals associated with the African savannah, foremost among them the Kruger National Park and abutting private reserves; other key safari destinations include iSimangaliso Wetland Park, Hluhluwe-Imfolozi, Madikwe, Pilanesberg, Addo Elephant National Park, and a variety of smaller and more exclusive reserves.
- Reptiles of South Africa: South Africa is home to the richest diversity of lizards, snakes, crocodiles, tortoises, chameleons, and turtles on the African continent; SouthAfrica.co.za explains the indigenous reptiles of the country in 11 official languages, with descriptive information on their habitat, feeding and appearance.
- CBGES: Exotic morph reptile breeders and conservationists. Tel: +27 72 901 4870. Email: [email protected]
- Snakes Alive: Snakes Alive was the first e-commerce and information based reptile website in South Africa. It was the idea of Doug Anderson (then a scholar), who with the help of his late father turned his hobby of breeding exotic reptiles into a fully fledged online retail business specialising in exotic reptiles.
- Wildlife Trafficking and Poaching, South Africa: South Africa has nine provinces: Eastern Cape, Free State, Gauteng, KwaZulu-Natal, Limpopo, Mpumalanga, Northern Cape, North West, and Western Cape. A great deal of legislative and executive jurisdiction over conservation and management of wildlife, including regulation of imports and exports, is exercised by these provinces.
- 10 endangered animals in South Africa and how you can help: South Africa is located in subtropical southern Africa, lying between 22°S and 35°S. It is bordered by Namibia, Botswana and Zimbabwe to the north, by Mozambique and Eswatini (Swaziland) to the northeast, by the Indian Ocean to the east and south, and the Atlantic Ocean to the west, with a coastline extending for more than 2,500 km (1,600 mi).
|
OPCFW_CODE
|
How do I identify Model from ActionFilter and bind json data to that Model
I'm building a CryptoActionFilter to decrypt API requests and bind the decrypted request to API models. I have different models for each API, so I have to list every action name and its model to bind the decrypted data to that model. Is there any other way, so that I don't have to add an action name and model every time I add a new API to my controller?
Here is my CryptoActionFilter code:
public override void OnActionExecuting(ActionExecutingContext context)
{
    // Surrounding method added for context; actionName is taken from the route data.
    // RSA, AES and RequestData are the question's own helper types.
    string actionName = context.RouteData.Values["action"]?.ToString();
    string key = Convert.ToString(((RequestData)context.ActionArguments["data"]).KEY);
    string data = Convert.ToString(((RequestData)context.ActionArguments["data"]).DATAOBJ);

    if (key != "" && data != "")
    {
        string aesKey = RSA.RSADecrypt(key);
        string jsonData = AES.Decrypt(aesKey, data);
        if (actionName == "SignIn" || actionName == "NativeSignIn")
        {
            LoginRequest request = JsonConvert.DeserializeObject<LoginRequest>(jsonData);
            context.ActionArguments["data"] = request;
            base.OnActionExecuting(context);
        }
        else if (actionName == "GetMenu")
        {
            MenuRequest request = JsonConvert.DeserializeObject<MenuRequest>(jsonData);
            context.ActionArguments["data"] = request;
            base.OnActionExecuting(context);
        }
    }
}
I have 100+ APIs like this, and I'll have to mention every one of them in my action filter. Is there any cleaner logic for this, so that I don't need to touch my CryptoActionFilter every time I create a new API?
What's the type of your action parameters data?
Just slowly for me, so I understand your code: you are encrypting your payload (symmetrically), then passing the payload AND the key to your API? Is that correct? If so: what's the point? You could just send it unencrypted; sending the symmetric key in the payload adds exactly zero security over sending the payload unencrypted. Anyway, what's the point of encrypting it at all? Just use SSL, which already protects your payload from MitM (assuming you don't get your SSL private keys stolen, which, well... is the same as someone stealing your encryption key :P)
As for your solution: you know the controller and the action, so you could use reflection to figure out the proper controller and action, and with that you can read out its parameters. The first complex parameter is always serialized (by convention) from the payload. The rest should be easy. However, none of this fixes the broken design of that API...
@itminus It can be any type, I want to find that type and bind the decrypted data using that type
@Tseng I'm encrypting my data with a random AES key, then encrypting the AES key with an RSA public key from a .pem file, and then sending the encrypted key and data to the API side. After that, in the crypto filter, I decrypt the key with the RSA private key from a .pem file, decrypt the data with that decrypted key, and bind it to the specific model to hit the API. So my security is based on the .pem secret key file, which will be on my server where no one can access it. Now my question is: I want to bind the decrypted JSON data to the specific model for that API, without mentioning every API and model in my action filter.
Why are you then deserializing it? Just decrypt the data, put it in context.ActionArguments["data"], call base.OnActionExecuting(context), and let ASP.NET Core's model binder do the rest. Or is the data already bound at this point? If so, your approach seems bad and you should do the decryption much earlier, maybe as a custom model binder, or as a middleware (which reads the input data and writes it into a different stream that is passed to the middlewares after it, similar to how compression is done).
@Tseng Because the decrypted data is in string format, I have to convert it to JSON to bind it to the model.
@ShreyasPednekar The problem is not the action name but the actual data type (MenuRequest/LoginRequest/...) that you want to cast to. You need to make the filter/model binder realize the real target type. I think it's better to use an IModelBinder instead of a filter and change your action methods to SignIn(LoginRequest data), GetMenu(MenuRequest data). This way you'll get the real type dynamically without any hard-coding.
Yeah, the point is to do the decryption before the data binding happens. Maybe send the key in a header; then you could do all the work and decrypt the payload as it is streamed, in a middleware. The middlewares after the decryption would then get the decrypted payload in the request stream, so you don't have to bother at all, and it's much more performant.
|
STACK_EXCHANGE
|
Hyperfocus is the ultimate book on productivity and helps us focus better. The crux of the book is that we live most of our lives in heedlessness, or “auto-pilot” mode, and we need to be mindful of our attention. Rarely do we stop and examine our thoughts and tasks in our day-to-day lives. In today’s attention economy, social media and gaming apps use novel methods to capture a scarce commodity called “human attention”.
What I really like about the book is that it presents two distinct and important ideas.
There are plenty of distractions that keep us from doing purposeful and productive work. Our mind seeks novel distractions instead of finishing that important task we need to finish by EOD.
So how do we “Hyperfocus” in a world of constant notifications and distractions?
- Firstly, acknowledge that our brain’s attention storage is limited and treat it as a scarce resource.
- The more complex the task, the more attention has to be paid.
- Avoid multi-tasking. Every time we multitask the area that can be used for focused attention becomes smaller.
- Eliminate distractions beforehand like switching off your mobile phone or disable notifications before you work on something important. This will help you to focus better on the task at hand.
- Pick the most consequential tasks for setting priorities for the day
- Don’t keep more than three things on the active list at any point in time. Keep your daily task list as simple as possible.
- Be mindful of your attention. Check every two hours: are you in auto-pilot mode? Is your attention wandering? Are you working on a productive and meaningful task?
Scatterfocus is the opposite of Hyperfocus. Instead of focusing on a particular task, we let our mind wander and be unmindful. Is the book suggesting two contradictory ideas? Actually, no. Scatterfocus helps recharge our mind after a prolonged period of work or hyperfocus, generate new ideas, and foster creativity.
Modes of scatterfocus:
- Capture mode: Take a notebook to capture & write down all the tasks and jobs that you have on your mind.
- Problem focus mode: Single out a problem that you are working on, write down all possible solutions in your pocket book, and try sleeping on the problem.
- Habitual mode: Take a habitual or repetitive task and perform it every now and then. You will be surprised to see how creative thoughts emerge from it.
Other tips and tricks:
- Emails: Check your emails no more than three times a day. If your job requires you to check emails often, then keep a time slot for checking them. Try to articulate your response in five or fewer sentences.
- Meetings: Don't attend meetings without an agenda. Question every recurring meeting to see if it's really required. Try to keep the audience for any meeting minimal.
- Apps and Notifications: Keep all social media and gaming apps on a different device. This might appear cumbersome on the surface, but it will help you monitor your attention better, eliminate distractions, and be mindful.
- Recharging: Take a regular 10-15 minute break every 60-90 minutes: take a walk or a run near your workplace, do some non-work-related reading, go to the gym, listen to a podcast or audiobook, or have a conversation with friends.
Overall, this was an interesting read on how to focus and I hope these notes were beneficial to you. Do you have any interesting tips on how to focus better? If so, please do share.
Does the iPad Air stay cooler whilst charging than the previous iPad?
My iPad 4 gets quite hot whilst charging, even without a case. As I get this with every one of my iOS devices, as well as Macs, I always expect this from a device now.
However, the iPad Air is much thinner than the previous iPad. Has this had any impact on the temperature of the iPad Air whilst charging?
Yes, the Air will be cooler than the 4th generation iPad when charging, due to the laws of physics. How much cooler may be hard to estimate, and simple to test empirically once we have shipping devices and a thermal imaging camera.
The specifications of the iPad Air show that the surface area of the back is 40,680 mm^2 (169.5 x 240 mm) and that it contains a 32.4 watt-hour battery.
The specifications of the 4th generation iPad show that the surface area of the back is 42,934 mm^2 (185.7 x 241.2 mm) and that it contains a 42.5 watt-hour battery.
So, to charge the Air you have far less energy (a ratio of 1 : 0.76) going into the device and only a slightly smaller surface area (a ratio of 1 : 0.95) with which to dissipate the heat from charging the battery.
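These ratios can be checked with a quick back-of-the-envelope calculation, using the figures from the spec sheets quoted above:

```python
air_energy, ipad4_energy = 32.4, 42.5    # battery capacity, watt-hours
air_area, ipad4_area = 40_680, 42_934    # back surface area, mm^2

energy_ratio = air_energy / ipad4_energy  # energy going in, Air vs iPad 4
area_ratio = air_area / ipad4_area        # surface available to shed the heat

print(f"energy ratio 1 : {energy_ratio:.2f}")  # 1 : 0.76
print(f"area ratio   1 : {area_ratio:.2f}")    # 1 : 0.95
```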
Since Apple almost always limits the charge rate of its devices to deliver 80% of a full charge in 2 hours and the remaining 20% plus top-off in the next 2 hours, we can safely expect that pattern to hold. Since the greater energy going into the older device isn't offset by enough extra surface area to dissipate the resulting heat, the lesser energy going into the Air during a recharge cycle will let its surface run at a lower temperature while dissipating all the heat that the battery puts off while charging. I also see no reason to think there will be more or less airflow from speaker openings and such, or that the glass of the Air will insulate more or less than the previous generation's.
So I can confidently predict the iPad Air is going to be cooler than its predecessor in more ways than one.
You assume that charging batteries is a linear process. Charging a 42 Wh battery does not necessarily result in a heat ratio of 1/0.7.
@MaxRied I don't assume they charge linearly - but the charge rate / voltage / time curves are similar in shape for just about all Apple batteries across devices and generations. I was hoping to show that the total heat generation is such that more buildup is highly unlikely. Do you have a more accurate model or different conclusion to offer?
I just wanted to add a grain of salt. Did it turn out to be a right prediction?
The two recent releases of Salesforce Marketing Cloud (Nov 2014 and Jan 2015) include several new platform features and enhancements that are specifically relevant to platform development and integration with Salesforce Marketing Cloud. These developer-related enhancements are explained in detail below.
In previous Journey Builder releases, a Contact could not be in the same version of an Interaction more than once simultaneously. This is an issue for Interactions where you need a Contact to enter the same Interaction more than once, for example in abandoned cart or post-purchase programs.
In this new release, a Contact Entry Mode has been added to the Interaction Canvas that enables you to define whether a Contact can enter an Interaction once (across all Interaction versions), or enter the same Interaction multiple times.
If the option is set to 'Single Entry' and a Contact is already in a current version of the Interaction, then the Contact will not be allowed to enter the Interaction version again until they have exited the Interaction. However, if they are not currently in an Interaction version, they will be allowed to enter it (if they meet the Contact Filter Entry Criteria defined in the Interaction Trigger).
If the option is set to 'Multiple Entries', when an Event is fired a Contact can enter the current active version of an Interaction (if they meet the Contact Filter Entry Criteria defined in the Interaction Trigger), even if they are already moving through the Interaction.
Setting The Contact Entry Mode in Journey Builder Interaction Canvas
This mode can also be defined when creating an Interaction using the Interaction method from the Fuel REST API, by including the corresponding name/value pair in the WDF payload. If you do not define an entryMode when creating an Interaction, then the Interaction will use SingleEntryAcrossAllVersions by default.
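As a sketch, the documented default would appear in the WDF payload as follows (the release notes do not show the value name for the multiple-entry mode, so only the default is illustrated):

```json
{
  "entryMode": "SingleEntryAcrossAllVersions"
}
```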
A new 'Date-Based' Trigger option is available in Interaction Triggers that enables date-based Attributes from the Contact model to determine an entry criteria for the Interaction.
Date-Based Trigger in an Interaction Trigger
A threshold defines the number of days, weeks or months, and the start time, before or after the selected Attribute date, at which the Contact should be admitted into the Interaction. When the Contact meets the criteria, the Contact is admitted into the Interaction.
Use cases include birthday or anniversary programs, subscription dates, or the last time a Contact used a mobile app. When the value of a Contact Attribute changes, the algorithm re-evaluates the Contact for possible inclusion.
A re-entry criteria defines whether a Contact can re-enter the Interaction (either yearly, monthly or none).
A new status mode allows a Trigger to be changed to 'Test Mode' so an Interaction Trigger can be tested without admitting Contacts into an Interaction. This is helpful when testing if the Contact Filter Criteria defined in a Trigger has been configured correctly.
To set a Trigger to Test Mode in Journey Builder, select Triggers from the Administration menu and click on the Availability link related to the Trigger you want to test.
Only Triggers that are set to 'Unavailable' can be tested. If a Trigger is currently set to 'Available', select the Unavailable radio button and click Save, then open the Trigger Status dialog for the Trigger again, select Test Mode, and click Save.
Setting a Trigger to Test Mode in Journey Builder Trigger Administration
A View Results link will then appear in the Trigger Performance column. Fire an Event (using Automation Studio or the contactEvents API method) so that the Interaction Trigger hears the Event, then click on the View Results link to preview the percentage of rejected Contacts (those that did not meet the entry criteria) and the percentage of Contacts that would have been accepted into the Interaction if the Trigger was available.
Join Activities reunite Contacts from two or more different branches back to a single branch. For example, if a Decision Split Activity is used to send two or more different messaging activities to Contacts, a Join Activity can be used to merge branches back together, so Contacts continue to flow toward the same endpoint.
Note that in Workflow Document Format (WDF) there is no Join type used in the Activity Object for Join Activities. Instead, WDF uses an Activity Wait type to reunite Activities from separate branches back into a single branch.
In WDF, Wait Periods (represented as Wait type Activities) are reunited into a single branch by using the same target Wait type Activity key for their outcome. The flow diagram below illustrates this behavior, where boxes represent Activity Objects and arrows are outcomes from the Activity Objects.
A WDF Interaction Workflow with a Join Activity
Personalization strings are now supported in emails sent from Journey Builder (in previous releases, emails in a Journey Builder Send Email Activity could not be personalized). The data binding context used for personalization strings is the Event Source Data Extension defined in the Interaction Trigger.
To include an Event Source Data Extension field in an email, use the %%fieldName%% personalization string syntax in the email subject line, preheader or body of the email.
You will need to ensure that a Profile Attribute exists for the Data Extension field before it can be used in an email. To create a new Attribute, select Profile Management from the Subscribers menu in the Marketing Cloud Email app and click the Create button. Add the name of the Data Extension field in the New Attribute Properties dialog, include an optional description, then click OK.
The Send Email Activity creates a Triggered Send in the Marketing Cloud Email app when the Interaction is published. This Triggered Send is used to deliver the email from the Send Email Activity. When an Interaction enters an unpublished state, the Triggered Send is set to inactive until the Activity is re-published.
Note that if the email template is updated while an Interaction is running, you need to publish the changes for the Triggered Email by selecting Triggered Emails from the Interactions menu in the Marketing Cloud Email app and expanding the Journey Builder Sends tree to locate the email. Select the checkbox next to the email and click the Publish Changes button.
This feature is helpful to be aware of when troubleshooting an Interaction or monitoring the progress of an Interaction. You can view an activity log for Contacts that have entered an Interaction by selecting Contacts from the Administration menu in Journey Builder.
This page displays a complete transaction history of current and previously published Interactions. A search filter enables filtering by Contacts to display the Interactions they entered and exited, and the status of the Contact in Interactions. You can also filter by a column value, for example 'DidNotMeetEntryCriteria' to filter all Contacts that did not meet the Contact Filter Criteria defined in the Interaction Trigger.
While this new feature is not mentioned in the release notes and not yet supported in the Interaction Canvas interface, it's now available through the Fuel REST API and is worth understanding. Wait Periods in an Interaction can now define what time (and time zone) the Wait Period ends and the next Activity in a branch commences.
A typical use case for this feature is that if you want to start a Send Email Activity at a specific time of day (for example, 2pm when the open rate is higher), then you can define this in the WDF Wait Object.
An example JSON payload of a Wait Object in WDF with a value for timeZoneId is provided below.
Supported time zone identifiers (used as timeZoneId values) are provided in the Wait Format documentation.
What is interesting to note is that there's a new waitForEventKey name/value pair option in a Wait Activity object — while this is currently reserved for future use, it does indicate the direction of Wait Activities. For example, in the future a Send Email Activity used in an 'Abandoned Cart' Interaction could be configured to Wait until a Contact completes a shopping cart transaction in an ecommerce platform (by firing an Event using the contactEvents API method).
Pictures can now be included in MobilePush Notifications for Android devices. This support adds impact to push notifications and can improve action rates.
To enable picture notifications on Android devices, create a new et_big_pic custom key in the Journey Builder for Apps SDK Explorer.
Once the custom key is added, you will be able to include a reference to an image when creating a new Outbound Message in MobilePush. From the MobilePush app, select the Create Message button, select the Outbound template, and the 'et_big_pic' Custom Key you created in the SDK Explorer will appear as an available custom key. Enter a shortened URL to the image.
The Android SDK will handle the message display and the image (which will be scaled as required) will appear in the message notification.
Image in a MobilePush Message
The Journey Builder for Apps SDK Explorer, available from the Google Play Store, enables developers to use the SDK without requiring a Salesforce Marketing Cloud account. An iOS version of this app will be available from the Apple App Store in the coming months.
Sending a Message using Journey Builder for Apps SDK Explorer
This new feature enables an app to display button controls on a mobile device when it receives a push message from MobilePush. For example, a 'View Offer' button that links to a mobile-optimized landing page.
The category names of these Interactive Notifications are then included in the message payload. Refer to the Interactive Notifications developer documentation for implementing this functionality in iOS and Android applications.
Once implemented, Interactive Notifications are available in the outbound message template when creating a message from the MobilePush app. You can include up to four buttons specifying actions per message notification.
Extending the previous release, Interactive Notifications have been added to the Locations feature in MobilePush, enabling Location Exit or Entry messages to include Interactive Notifications based on their defined geofence area.
A Push Service Manager app accessible from the HubExchange menu in Marketing Cloud facilitates the management of Apple Push Notification (APNS) Certificates and Google API keys for MobilePush configurations. In turn, this provides a faster, more secure way to register apps with MobilePush.
Managing MobilePush configurations using Push Service Manager
These recent developer-centric enhancements to Salesforce Marketing Cloud provide further affirmation that Salesforce recognizes the essential role that developers play in the integration and, ultimately, adoption of their platform. I have no doubt that we will see many other developer-related enhancements in their aggressive release schedule for 2015.
Eliot Harper is Chief Technology Officer at Digital Logic, a Salesforce Marketing Cloud Partner based in Melbourne, Australia. Eliot specializes in Customer Journey Management and is author of the Journey Builder Developer's Guide http://jbdevelopers.guide. Follow @eliotharper on Twitter.
import { get } from 'idb-keyval'
import { GOKGS_URL } from '@config/webConfig'
import { DownsteamResponse, RequestTypes, UpstreamRequest } from '@type/fetch'
import { DownsteamMessage } from '@type/messages'
import { LoginRequest } from '@type/requests'
import { useCallback, useEffect, useState } from 'react'
import { SetStateFT } from '@type/utils'
export type DoRequest = <T extends UpstreamRequest = UpstreamRequest>(
msg: UpstreamRequest & T
) => Promise<void>
export type DoLogin = (username: string, password: string) => Promise<void>
export type UseAPIReturnT = [DoLogin, DoRequest]
export const useAPI = (
isLoggedIn: boolean,
setIsLoggedIn: SetStateFT<boolean>,
reducer: (msg: DownsteamMessage) => void
): UseAPIReturnT => {
const [doneLogin, setDoneLogin] = useState(false)
  // Once logged in, start the long-polling loop; otherwise try to restore
  // saved credentials from IndexedDB and log in automatically.
  useEffect(() => {
    if (doneLogin) {
      getDownstream()
    } else if (isLoggedIn) {
      ;(async () => {
        const username = await get('user:login')
        const password = await get('user:password')
        if (username && password) doLogin(username, password)
      })()
    }
  }, [doneLogin])
  // Long-poll the KGS endpoint: dispatch each message in a successful
  // response to the reducer, then immediately issue the next poll.
  const getDownstream = useCallback(async () => {
    try {
      const res = await fetch(GOKGS_URL, {
        mode: 'same-origin',
        method: 'GET',
      })
      if (res.status === 200) {
        const data = (await res.json()) as DownsteamResponse
        data.messages?.forEach((message) => reducer(message))
        getDownstream()
      } else {
        // A non-200 response means the session is gone; stop polling.
        setDoneLogin(false)
        throw new Error('LOGOUT')
      }
    } catch (err) {
      if (err.message === 'LOGOUT') setIsLoggedIn(false)
    }
  }, [])
  // Send an upstream request; a successful login kicks off the polling loop.
  const doUpstream = useCallback<DoRequest>(async (msg) => {
    try {
      const res = await fetch(GOKGS_URL, {
        method: 'POST',
        mode: 'same-origin',
        credentials: 'include',
        headers: {
          'Content-Type': 'application/json;charset=UTF-8',
        },
        body: JSON.stringify(msg),
      })
      if (res.status === 200) {
        if (msg.type === RequestTypes.login) {
          getDownstream()
        }
      } else throw new Error(res.statusText)
    } catch (err) {
      console.error(err.message)
      if (err.message === 'LOGOUT') throw err
    }
  }, [])
const doLogin = useCallback<DoLogin>(async (username, password) => {
try {
await doUpstream<LoginRequest>({
type: RequestTypes.login,
name: username,
password,
locale: 'en_US',
})
    } catch (err) {
      // doUpstream already logs failures; a failed auto-login is non-fatal here.
    }
}, [])
return [doLogin, doUpstream]
}
using DiiagramrAPI.Editor.Diagrams;
using DiiagramrModel;
using System.Collections.Generic;
using System.Linq;
namespace DiiagramrAPI.Editor
{
/// <summary>
/// Helper class to position terminals along the border of a node.
/// </summary>
public class TerminalPlacer
{
private readonly double _nodeHeight;
private readonly double _nodeWidth;
/// <summary>
/// Creates a new instance of <see cref="TerminalPlacer"/>.
/// </summary>
/// <param name="nodeHeight">The height of the node to place terminals on.</param>
/// <param name="nodeWidth">The width of the node to place terminals on.</param>
public TerminalPlacer(double nodeHeight, double nodeWidth)
{
_nodeHeight = nodeHeight;
_nodeWidth = nodeWidth;
}
/// <summary>
/// Arrange all given terminals around the node.
/// </summary>
/// <param name="terminals">The terminals to move.</param>
public void ArrangeTerminals(IEnumerable<Terminal> terminals)
{
ArrangeAllTerminalsOnEdge(Direction.North, terminals);
ArrangeAllTerminalsOnEdge(Direction.East, terminals);
ArrangeAllTerminalsOnEdge(Direction.South, terminals);
ArrangeAllTerminalsOnEdge(Direction.West, terminals);
}
        private void PlaceTerminalOnEdge(Terminal terminal, Direction edge, double percentAlongEdge)
        {
            const int extraSpace = 7;
            var widerWidth = _nodeWidth + (extraSpace * 2);
            var tallerHeight = _nodeHeight + (extraSpace * 2);
            switch (edge)
            {
                case Direction.North:
                    terminal.XRelativeToNode = (widerWidth * percentAlongEdge) - extraSpace + Diagram.NodeBorderWidth;
                    terminal.YRelativeToNode = Diagram.NodeBorderWidth;
                    break;
                case Direction.East:
                    terminal.XRelativeToNode = _nodeWidth + Diagram.NodeBorderWidth;
                    terminal.YRelativeToNode = (tallerHeight * percentAlongEdge) - extraSpace + Diagram.NodeBorderWidth;
                    break;
                case Direction.South:
                    terminal.XRelativeToNode = (widerWidth * percentAlongEdge) - extraSpace + Diagram.NodeBorderWidth;
                    terminal.YRelativeToNode = _nodeHeight + Diagram.NodeBorderWidth;
                    break;
                case Direction.West:
                    terminal.XRelativeToNode = Diagram.NodeBorderWidth;
                    terminal.YRelativeToNode = (tallerHeight * percentAlongEdge) - extraSpace + Diagram.NodeBorderWidth;
                    break;
            }
        }
}
private void ArrangeAllTerminalsOnEdge(Direction edge, IEnumerable<Terminal> terminals)
{
var terminalsOnEdge = terminals.Where(t => t.Model.DefaultSide == edge).ToArray();
var increment = 1 / (terminalsOnEdge.Length + 1.0f);
for (var i = 0; i < terminalsOnEdge.Length; i++)
{
PlaceTerminalOnEdge(terminalsOnEdge[i], edge, increment * (i + 1.0f));
}
}
}
}
Given a cloud file-sharing service, is it feasible to provide a non-secure (HTTP) service for free, while charging a nominal monthly fee ($5) for a secure (HTTPS) service for storing and transferring important data?
Would this be accepted by the users?
If your answer is "NO", please read my explanation:
A typical initial community response would be: no, you can't monetize security when it comes to file sharing. But think about it - what are the chances that your traffic is getting sniffed on the way to your ISP? Low. From the ISP to the destination server's data center - almost none. So when you are sending someone photos from a party, you hardly need SSL. But if a designer is sending sketches of a web site, SSL is required when there are competitors, etc.
I believe security can be monetized, and that people will pay for excellent security. You will have to offer far more than just SSL (HTTPS) for uploads and downloads, however. As just one example of what people might want, consider auditability: who downloaded the file, and when?
If all I want to do is transmit something securely, why wouldn't I simply encrypt it rather than transmitting it in the clear?
You can monetize anything you want, if you know how to sell it to the customer. In the 80's a computer company sold millions of copies of "RAM Doubling" software, that actually did nothing. Clever marketing let them get away with it until they were finally caught by the press.
If you can convince your customers that secure is better, you can sell it to them.
You do realize that bandwidth is not free, though? You are going to pay virtually the same amount of money for bandwidth for your free customers as for your "secure" customers. The "secure" customers will require more processor overhead, but not a lot.
So are your "secure" customers going to subsidize your free customers? Where and how will you make money?
The point of running a business is to make money, and you need to plan for that from the start. Simply saying that you think a competing service is making money isn't a plan for you. You need to plan for how you will make money in advance, or you will quickly run out of money and your business will fail.
I think it is one way to justify a charge, but it may not be the deciding factor that converts a lot of users. It may be a combination of security, file space, speed, and the ability to share with other accounts. Some users pay just because they want to see a quality product flourish (I believe Evernote discovered this).
There may be a market for health care professionals assuming you can meet their standards and keep an affordable rate. That will not be an easy task and will have issues from one country to the next.
As developers, we all know there are two ways of doing things: the manual, slow, annoying, complicated way, and the automated, fast, and even more complicated way.
I could, for instance, continue to write this article on why you should use Java rather than C++ for low latency systems. Or I could start training an AI to write it for me. The latter approach would, eventually, save me a lot of time writing articles—it could generate thousands per second—but my editor is unlikely to be happy to hear that the first article is going to take me two years.
There is an analogous situation when it comes to developing low latency software systems. The received wisdom is that you would be crazy to use anything but C++ because anything else has too high a latency. But I’m here to convince you of the opposite, counter-intuitive, almost heretical notion: that when it comes to achieving low latency in software systems, Java is better.
In this article, I want to take a particular example of software for which low latency is prized; trading systems. However, the arguments I make here can be applied to almost any circumstance in which low latency is required or desired. It’s just that it’s easier to discuss this in relation to an area of development where I have experience. And the truth is that latency can be a tricky thing to measure.
It all comes down to your definition of “low latency.” Let me explain…
The received wisdom
Let’s start by looking at the reasons why you should prefer C++ for building high speed, low latency systems.
Since C++ is far closer to the metal, most developers will tell you, there is an inherent speed advantage to coding in the language. In low-latency situations, such as high speed trading, where microseconds can make the difference between a viable piece of software and an obsolete waste of disk space, C++ is regarded as the gold standard.
Or at least it was, once upon a time. The reality is that, nowadays, plenty of large banks and brokers use systems that are written in Java. And I mean written in Java—not written in Java and then interpreted into C++ in pursuit of lower latency. These systems are becoming standard, even for Tier 1 investment banks, despite the fact that they are (supposedly) slower.
So what’s going on?
Well, C++ might be “low latency” when it comes to executing code, but it’s definitely not low latency when it comes to rolling out new features or even finding devs who can write it.
The (real) differences between Java and C++
This issue of development time is, however, just the beginning when it comes to the real differences between Java and C++ in real-world systems. So, in order to understand the true value of each language in this context, let’s unpack these a little.
First, it’s important to remember the actual reason why C++ is faster than Java in most situations: a C++ pointer is the address of a variable in memory. That means that software can directly access individual variables and doesn’t need to run through computationally expensive tables to find them. Or at least it can if it is told where they are—because with C++, you will often have to explicitly manage the lifetime and ownership of objects.
The upshot of this is that unless you are really, really good at writing it (a skill which can take decades to master), C++ will require hours (or weeks) of debugging. And, as anyone who has tried to debug a Monte Carlo engine or PDE solver will tell you, trying to debug memory access at a fundamental level can be extremely time consuming. One broken pointer alone can easily crash an entire system, so shipping a new version written in C++ can be truly terrifying.
This is not the whole story, of course. People who enjoy coding in C++ (all three of them) will point out that the garbage collector (GC) in Java suffers from nonlinear latency spikes. This is particularly the case when working with legacy systems, and so shipping updates to Java code, while not breaking your clients’ systems, might make them so slow as to be unusable.
In response, I would point out that a lot of work has been done to reduce the latency generated by the Java GC in the last decade. LMAX Disruptor, for instance, is a low latency trading platform written in Java but also built as a framework which has "mechanical sympathy" for the hardware it’s running on, and that’s lock-free.
Issues can be further mitigated if you are building a system that uses a continuous integration and delivery (CI/CD) process, because CI/CD allows for the automated deployment of tested code changes. This is because CI/CD enables an iterative approach to improving GC latencies, in which Java can be progressively improved and tailored to specific hardware environments, without the resource-intensive process of preparing code for different hardware specifications in advance of shipping it.
Since IDE support for Java is much more advanced than for C++, most environments (Eclipse, IntelliJ, IDEA) will be able to refactor Java. This means that most IDEs will allow you to optimize code to run with low latency, a capability that is still limited when working with C++.
Even if it doesn’t quite match C++ in raw performance, most developers will be able to reach an acceptable performance in Java much more easily than they will in C++. The real latency killer comes between having an idea and shipping the code for it.
What do we mean by faster?
In fact, there is good reason to question the idea that C++ is genuinely “faster” or has a “lower latency” than Java at all. I’m aware, at this point, that I’m getting into some pretty murky waters, and that plenty of developers may start to question my sanity. But hear me out.
First, there’s the (slightly absurd) point that if you have two developers, one writing in C++ and one in Java, and you ask them to write a platform for high-speed trading from scratch, the Java developer is going to be trading long before the C++ developer. For developers who haven’t used both languages, here’s why: Java has far fewer instances of undefined behavior than C++. To take just one example, indexing outside the bounds of an array is an error in both Java and C++. If you accidentally do this in C++, you might segfault, or (more commonly) you’ll just get back some random number that won’t mean anything, even to experienced developers. In Java, indexing out of bounds always throws an ArrayIndexOutOfBoundsException. This means that debugging is significantly easier in Java, because mistakes tend to throw errors immediately, and the location of the bug is easier to trace.
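To make the contrast concrete, here is a minimal (hypothetical) Java snippet; the equivalent out-of-bounds read in C++ is undefined behavior and may silently return garbage:

```java
public class BoundsDemo {
    public static void main(String[] args) {
        int[] prices = new int[3];
        try {
            int p = prices[3]; // one past the end of the array
            System.out.println(p);
        } catch (ArrayIndexOutOfBoundsException e) {
            // Java fails loudly, naming the exact faulting index,
            // so the bug is trivial to locate and fix.
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```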
In addition, and at least in my experience, Java (in most environments) is simply better at recognizing which pieces of code do not need to be run, and which are critical for your software to function. You can, of course, spend days tuning your C++ code so that it contains absolutely no extraneous code, but in the real-world every piece of software contains some bloat, and Java is better at recognizing it automatically.
This means that, in the real world, Java is often faster than C++, even on standard measures of latency. And even where it is not, the difference in latency between the languages is often swamped by other factors, or is nowhere near large enough to make a difference, even in high-frequency trading. Much has been made, for instance, of the reduced latency of 5G networks—down to 1ms, according to some analysts—but in low-latency programming this still represents a significant performance cost.
The advantages of Java for low latency systems
All of these factors, to my mind, build into a pretty unassailable case for using Java to write high-speed trading platforms (and, indeed, low-latency systems in general, more on that shortly).
However, just to sway the C++ enthusiasts a little more, let’s run through a number of additional reasons for using Java:
- First, and as we’ve already seen above, any excess latency that Java introduces into your software is likely to be much smaller than existing latency sinks, such as network communication delays, in (at least) one of the systems that trades must go through before being completed. This means that any (well written) Java code can easily perform as well as C++ in most trading situations.
- The shorter development time of Java also means that, in the real world, software written in Java can be more quickly adapted to changing hardware (or even novel trading strategies) than C++.
- Take this insight even further, and you’ll see that even optimizing Java software can be quicker—if looked at across an entire piece of software—than the equivalent task in C++. As Peter Lawrey, a Java consultant interested in low latency and high throughput systems, told InfoQ recently, “if your application spends 90% of the time in 10% of your code, Java makes optimising that 10% harder, but writing and maintaining 90% of your code easier; especially for teams of mixed ability.”
In other words, it’s possible to write Java, from the machine level on up, for low latency. You just need to write it like C++, with memory management in mind at each stage of development. The advantage of not writing in C++ itself is that debugging, agile development, and adaptation to multiple environments is simply easier and quicker in Java.
If you’ve got this far, and you’re not developing low-latency trading systems, you’re likely to be wondering if any of the above applies to you. The answer—with a very few exceptions—is yes.
The debate about how to achieve low latency is not a new one, and it is not unique to the world of finance. For this reason, it’s possible to learn valuable lessons about other situations from it. In particular, the argument above—that Java is “better” because it is more flexible, more resilient, and ultimately faster to develop and maintain—can be applied to many areas of software development.
The reasons why I (personally) prefer to write low latency systems in Java are the same as those that have made the language such a success over the last 25 years. Java is easy to write, compile, debug, and learn, and this means you can spend less time writing code and more time optimizing it for latency. Ultimately, in the real world this results in reliably faster trading systems. And, for high-speed trading, that’s all that counts.
|
OPCFW_CODE
|
Can we remove features that have zero-correlation with the target/label?
So I draw a pairplot/heatmap from the feature correlations of a dataset and see a set of features that bear zero correlation both with:
every other feature and
also with the target/label
Reference code snippet in Python is below:
import seaborn as sns  # assumes df is a pandas DataFrame already loaded
corr = df.corr()
sns.heatmap(corr)  # visually inspect how each feature correlates with the others (incl. the target)
Can I drop these features to improve the accuracy of my classification problem?
Can I drop these features to improve the accuracy of my classification problem, if it is explicitly given that these features are derived features?
Can I drop these features to improve the accuracy of my classification problem?
If you are using a simple linear classifier, such as logistic regression, then yes. That is because your plots are giving you a direct visualisation of how the model could make use of the data.
As soon as you start to use a non-linear classifier, that can combine features inside the learning model, then it is not so straightforward. Your plots cannot exclude a complex relationship that such a model might be able to exploit. Generally the only way to proceed is to train and test the model (using some form of cross-validation) with and without the feature.
A plot might visually show a strong non-linear relationship with zero linear correlation - e.g. a complete bell curve of feature versus target would have close to zero linear correlation, but suggest that something interesting is going on that would be useful in a predictive model. If you see plots like this, you can either try to turn them into linear relationships with some feature engineering, or you can treat it as evidence that you should use a non-linear model.
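A tiny synthetic sketch of that bell-curve case (NumPy only; the quadratic relationship is an assumption chosen purely for illustration):

```python
import numpy as np

# Feature x drives the target completely, but through a symmetric (quadratic)
# relationship, so the *linear* correlation is near zero.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 10_000)
y = x ** 2

r = np.corrcoef(x, y)[0, 1]
print(abs(r) < 0.05)         # True: a correlation heatmap would flag x as "useless"

# Simple feature engineering recovers a perfectly linear relationship:
r_engineered = np.corrcoef(x ** 2, y)[0, 1]
print(r_engineered > 0.999)  # True
```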
In general, this advice applies whether or not the features are derived features. For a linear model, a derived feature which is completely uncorrelated with the target is still not useful. A derived feature may or may not be easier for a non-linear model to learn from, you cannot easily tell from a plot designed to help you find linear relationships.
That clears the air. Thanks. I've also added a follow-up question. Do you mind answering it as well? Thanks in advance.
@karthiks: You should not really edit/extend your question after it has been answered. In this case it is only a small extension and the answer is basically "yes, it is the same", so I have edited that in.
I second that and added another one, only because it is more an extension question that I missed out.
If I understand you well, you are asking if you can remove features having zero-correlation either:
With other features
With the label you want to predict
Those are two different cases:
1. We usually recommend removing features that are highly correlated with each other (it stabilizes the model). If they are ZERO-correlated, you cannot conclude anything here. It is by training your model that you will see whether the feature is worth keeping or not.
Don't drop those ones.
2. If a feature is strongly correlated with your label, this means a linear function (or model) should be able to predict the latter well. Even if it is not correlated, that doesn't tell you that a non-linear model wouldn't perform well using this feature.
Don't drop this one either!
I hope I answered your question.
Modified the question for clarity. I meant a set of features bearing ZERO-correlation with all other features, including the target/label. Hope that clarifies.
Thank you for your clarification. Edited my answer accordingly.
These uncorrelated features might be important for the target in combination with other non-target features. So it might not be a good idea to remove them, especially if your model is a complex one.
It might be a good idea to remove one of a pair of non-target features that are highly correlated with each other, because they might be redundant.
Still, it might be better to use feature-reduction techniques like PCA, because PCA maximizes variance without removing the whole feature, instead including it in a principal component.
In the case of ordinal or binary features, correlation won't tell you a lot. So I guess the best way to test whether a feature is important, in case it's not correlated with the target, is to directly compare the performance of a model with and without the feature. But still, different features might have different importance for different algorithms.
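The "compare with and without the feature" advice can be sketched with scikit-learn; the synthetic data, the quadratic target, and the random-forest choice below are illustrative assumptions, not anything from the thread:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Feature 0 is (linearly) uncorrelated with y, yet informative non-linearly.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] ** 2 + X[:, 1] > 1).astype(int)

model = RandomForestClassifier(random_state=0)
with_feat = cross_val_score(model, X, y, cv=5).mean()        # all 4 features
without_feat = cross_val_score(model, X[:, 1:], y, cv=5).mean()  # feature 0 dropped
print(with_feat, without_feat)  # the model keeping the feature should score noticeably higher
```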
|
STACK_EXCHANGE
|
Cannot find key of appropriate type to decrypt AP REP - AES256 CTS mode with HMAC SHA1-96
I had a tomcat server with Spnego SSO setting, it works well with no issues.
Now I want to add an Apache server in front of it to enable SSL. The Apache server uses AJP to communicate with it:
<VirtualHost *:58443>
SSLEngine on
ServerName ca09417d.global.local:58443
SSLCertificateFile "${SRVROOT}/conf/ssl/ca09417d.server.cer"
SSLCertificateKeyFile "${SRVROOT}/conf/ssl/ca09417d.server.key"
...
ProxyRequests off
ProxyPreserveHost On
ProxyPass /vcaps3 ajp://cavcdbdev02:58009/vcaps3
ProxyPassReverse /vcaps3 ajp://cavcdbdev02:58009/vcaps3
</VirtualHost>
After that, the server complains this error:
KrbException: Invalid argument (400) - Cannot find key of appropriate type to decrypt AP REP - AES256 CTS mode with HMAC SHA1-96
sun.security.krb5.KrbApReq.authenticate(KrbApReq.java:278)
sun.security.krb5.KrbApReq.<init>(KrbApReq.java:149)
sun.security.jgss.krb5.InitSecContextToken.<init>(InitSecContextToken.java:108)
sun.security.jgss.krb5.Krb5Context.acceptSecContext(Krb5Context.java:829)
sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:342)
sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:285)
sun.security.jgss.spnego.SpNegoContext.GSS_acceptSecContext(SpNegoContext.java:906)
sun.security.jgss.spnego.SpNegoContext.acceptSecContext(SpNegoContext.java:556)
sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:342)
sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:285)
net.sourceforge.spnego.SpnegoAuthenticator.doSpnegoAuth(SpnegoAuthenticator.java:444)
net.sourceforge.spnego.SpnegoAuthenticator.authenticate(SpnegoAuthenticator.java:283)
net.sourceforge.spnego.SpnegoHttpFilter.doFilter(SpnegoHttpFilter.java:229)
So I tried these things:
Made sure my JDK can do AES-256
Logging in locally on the Apache host succeeds, because I have this setting: spnego.allow.localhost=true
Checked both servers' logs and found nothing
I can still use SSO on the Tomcat server directly after Apache fails
Now I have no idea what I should do to fix it.
My tomcat version is 8.5.32
My JDK version is 1.8.0_151
My Apache version is httpd-2.4.33-o110h-x86-vc14-r2
My Spnego version is 7
This is the main part of my krb5.conf:
[libdefaults]
default_tkt_enctypes = rc4-hmac aes256-cts aes128-cts
default_tgs_enctypes = rc4-hmac aes256-cts aes128-cts
permitted_enctypes = rc4-hmac aes256-cts aes128-cts
Could you help me?
Thanks very much!
Justin
Solution 1:
I had a similar error because the keytab file was generated with the wrong /crypto configuration.
(Cannot find key of appropriate type to decrypt AP-REQ - RC4 with HMAC)
Generate a new keytab file using /crypto ALL with the ktpass command:
ktpass /out "server.keytab" /crypto ALL /princ HTTP/server@REALM /mapuser KERBEROS_SERVICEUSER /pass PASSWORD /ptype KRB5_NT_PRINCIPAL
Replace HTTP/server@REALM, KERBEROS_SERVICEUSER and PASSWORD with according values.
Solution 2:
Make sure the Kerberos service user has the following options checked:
"This account supports Kerberos AES 128 bit encryption"
"This account supports Kerberos AES 256 bit encryption"
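One way to verify the "my JDK can do AES 256" prerequisite mentioned in the question is to query the JCE policy directly. This is a small standalone check of my own, not part of Spnego or Tomcat:

```java
import javax.crypto.Cipher;

public class AesPolicyCheck {
    public static void main(String[] args) throws Exception {
        // Older JDK 8 builds ship with a "limited" crypto policy that caps AES
        // at 128 bits; AES256-CTS Kerberos keys are then undecryptable.
        int max = Cipher.getMaxAllowedKeyLength("AES");
        if (max >= 256) {
            System.out.println("AES-256 available (max AES key length: " + max + ")");
        } else {
            System.out.println("AES capped at " + max + " bits: enable the unlimited strength policy");
        }
    }
}
```

On 1.8.0_151 specifically, the unlimited policy can also be enabled by setting the crypto.policy=unlimited security property introduced in that update.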
|
STACK_EXCHANGE
|
import dash
import dash_core_components as dcc
import dash_html_components as html
import dash_bootstrap_components as dbc
from dash.dependencies import Input, Output, State
from router import Router
from template import template_layout
import plots
import random
from urllib.parse import urlencode
app = dash.Dash(
external_stylesheets=[dbc.themes.FLATLY], suppress_callback_exceptions=True
)
router = Router()
router.register_callbacks(app)
data = {
"Gaussian": [random.normalvariate(0, 1) for i in range(2000)],
"Lognormal": [random.lognormvariate(0, 1) for i in range(2000)],
"Uniform": [random.uniform(-1, 1) for i in range(2000)],
}
single_select = dcc.Dropdown(
id="single-select-dropdown",
options=[{"label": k, "value": k} for k in data],
placeholder="select univariate data",
)
multi_select = dcc.Dropdown(
id="multi-select-dropdown",
options=[{"label": k, "value": k} for k in data],
multi=True,
)
multi_select_submit = html.Button(
id="multi-select-submit", children="Submit", className="btn btn-primary"
)
@app.callback(
Output("url", "pathname"),
[
Input("single-select-dropdown", "value"),
Input("multi-select-submit", "n_clicks"),
],
State("multi-select-dropdown", "value"),
)
def load_page(single_value, n_clicks, multi_values):
trigger = dash.callback_context.triggered
if len(trigger) > 1:
return "/"
elif trigger[0]["prop_id"] == "single-select-dropdown.value":
return f"/univariate/{single_value}"
elif trigger[0]["prop_id"] == "multi-select-submit.n_clicks":
query_str = urlencode({k: k for k in multi_values})
return f"/multivariate?{query_str}"
@router.route("/")
def index():
return template_layout(
dbc.Container(
[
dbc.Row(
dbc.Col(
[
dcc.Markdown(
"""# Dash-Demo: A simple multipage Dash app"""
),
dcc.Markdown(
"""This basic app demonstrates a handful of Dash features described in the blog post linked below."""
),
dcc.Markdown(
"[geostats.dev](https://geostats.dev/python/plotly/dash/flask/dash%20bootstrap%20components/2020/11/26/dash-post.html)"
),
]
)
),
dbc.Container(
[
dbc.Row(
[
dbc.Col(
html.Label("Select Single Distribution: "),
width=3,
),
dbc.Col(single_select, width=4),
],
),
dbc.Row(dbc.Col(html.P("OR", className="display-4"))),
dbc.Row(
[
dbc.Col(
html.Label("Select Multiple Distributions: "),
width=3,
),
dbc.Col(multi_select, width=4),
dbc.Col(multi_select_submit, width=2),
]
),
],
className="m-5 p-5 shadow border mx-auto bg-light",
),
]
)
)
@router.route("/univariate/<distribution_name>")
def univariate_stats(distribution_name):
return template_layout(
dcc.Graph(
figure=plots.histogram(distribution_name=data[distribution_name]),
style={"height": "80vh"},
config={
"displaylogo": False,
},
)
)
@router.route("/multivariate")
def multivariate_stats(**kwargs):
data_to_plot = {v: data[v] for k, v in kwargs.items()}
return template_layout(
dcc.Graph(
figure=plots.histogram(**data_to_plot),
style={"height": "80vh"},
config={
"displaylogo": False,
},
)
)
if __name__ == "__main__":
app.run_server(debug=True)
|
STACK_EDU
|
Originate call to sip trunk via asterisk manager api java
So I am a total newbie to Asterisk and managing call lines in general, but I managed to install the AsteriskNow 13 distro. I have connected 2 SIP phones with pjsip and configured a SIP trunk, which works when I dial an external number with the corresponding prefix. Now I have to programmatically originate calls and connect them to local extensions, which I have no idea how to achieve, and I can't seem to find much information about it on the internet after hours of searching.
I managed to connect 2 local sip phones with the asterisk manager api and OriginateAction in the following way:
originateAction = new OriginateAction();
originateAction.setChannel(ConnectionType+"/"+extCaller);
originateAction.setContext(context);
originateAction.setCallerId(idCaller);
originateAction.setExten(tDestination);
originateAction.setPriority(priority);
originateAction.setTimeout(timeoutCall);
managerConnection.login();
originateResponse = managerConnection.sendAction(originateAction, timeoutRequest);
I also tried channel originate pjsip/201 extension number@from-pstn and channel originate local/201@from-local extension number@trunkName .
The context of the PJSIP trunk is from-pstn; I tried using that in various ways without luck, both in the Asterisk CLI and the application.
How do I make it use the PJSIP trunk when originating the call and make a call out of the office?
EDIT: I originated an outgoing call using a number that matches the trunk's outgoing route requirements and the "from-internal" context, like this:
channel originate Local/201@from-internal extension (prefix)numberToCall@from-internal
I still do not understand why this works and if it is the correct answer to my question.
So the answer is in the edit of the question. The only way to generate an outgoing call that I could find is to originate that call "internally" (with the context "from-internal", which happens to be the same context used when originating internal calls), introducing a target number value that matches the SIP trunk's route pattern requirements.
Example:
I have a route configured for the SIP trunk (trunk1) with a pattern (RegEx): [0]{1}/number/ which means that with a 0 in front of any number it will be a valid value for that route, and it will try to call using trunk1.
In the case of the AsteriskNow CentOS installation it happens to be with the context "from-internal". Since the Asterisk configuration files are owned by FreePBX, it is recommended to use the FreePBX GUI instead of configuring the .conf files of Asterisk manually.
That boils down to:
channel originate Local/201@from-internal extension (0)[numberToCall]@from-internal
Which will make the extension 201 ring first and when picked up it will try to use the sip trunk to dial that [numberToCall] because the route with the 0 is "called".
In order to send that command to asterisk using asterisk-java I wrote the following code:
ManagerConnectionFactory factory = new ManagerConnectionFactory("serverIp", "username", "passwd");
ManagerConnection managerConnection = factory.createManagerConnection();
OriginateAction originateAction = new OriginateAction();
final String randomUUID = java.util.UUID.randomUUID().toString();
System.out.println("ID random:_" + randomUUID);
originateAction.setChannel([connectionType] + "/" + [callerExtension]); // SIP or PJSIP / 201 (the phone that will ring first)
originateAction.setContext("from-internal"); // default FreePBX context
originateAction.setCallerId([callerId]); // what will be shown on the phone screen (in most cases your phone)
originateAction.setExten([targetExten]); // where to call: the target extension, an internal extension or the outgoing number, i.e. the 0[numberToCall]
originateAction.setPriority([priority]); // priority of the call
originateAction.setTimeout(timeoutCall); // how long a pickup event will be waited for
originateAction.setVariable("UUID", randomUUID); // assigning a unique ID in order to be able to hang up the call
managerConnection.login();
ManagerResponse originateResponse = managerConnection.sendAction(originateAction, timeoutRequest);
|
STACK_EXCHANGE
|
Schedule for Completing Discrete Structure Warmups
Overview of the calendar
June 2021 July 2021 August 2021
Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
1 2 3 4 5 1 2 3 1 2 3 4 5 6 7
6 7 8 9 10 11 12 4 5 6 7 8 9 10 8 9 10 11 12 13 14
13 14 15 16 17 18 19 11 12 13 14 15 16 17 15 16 17 18 19 20 21
20 21 22 23 24 25 26 18 19 20 21 22 23 24 22 23 24 25 26 27 28
27 28 29 30 25 26 27 28 29 30 31 29 30 31
Tentative schedule for discrete structures warmups
June 30, 2021: Creating and using numerical functions like abs, sqrt, and factorial
Jul 9, 2021: Summarizing data with statistical functions like mean, median, and mode
Jul 9, 2021: Calculating dispersion with range, variance, and stdev
Jul 16, 2021: Creating and using lambda expressions and lambda functions; how to pick?
Jul 16, 2021: Finding data with list comprehensions, for loops, and while loops
Jul 23, 2021: Storing data in and extracting data from (parsing) CSV files
Jul 23, 2021: Storing data in and extracting data from (parsing) JSON files
Jul 30, 2021: Using map, reduce, and filter functions
Jul 30, 2021: Generator and non-generator functions and the yield statement
Aug 6, 2021: Creating and using set, frozenset, and multisets (i.e., bag or collections.Counter)
Aug 6, 2021: Creating and using list and tuple; mutability and immutability
Aug 13, 2021: Creating, populating, and accessing dict and defaultdict
Aug 13, 2021: Using dict for performance improvement with memoization
Aug 20, 2021: Creating, populating, and accessing dict and defaultdict
Aug 20, 2021: Using dict for performance improvement with memoization
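As a sketch of the "dict for performance improvement with memoization" warmup listed above (the Fibonacci example is my own illustration, not taken from the course materials):

```python
def fib(n, cache={}):
    """Fibonacci with a dict cache; the default-argument dict persists
    across calls, so each subproblem is computed only once."""
    if n in cache:
        return cache[n]
    result = n if n < 2 else fib(n - 1) + fib(n - 2)
    cache[n] = result
    return result

print(fib(30))  # 832040; naive recursion would make well over a million calls
```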
Additional notes
May want to organize the content so that material about CSV and JSON files
comes before lambda expressions and lambda functions, list comprehensions.
Content about type annotations should probably come early in the content.
Hello @mariakimheinert I wanted to let you know that I will probably have to miss deadlines on this schedule for the remainder of this week and the majority of next week because I have to finish the implementation for another tool, prepare a talk for a conference, and then record a video for that conference. I have cleared my desk of other tasks, however, and hope to return to working on this project as soon as I finish the aforementioned tasks.
Sounds good, @gkapfham! Thanks for keeping me in the loop :)
|
GITHUB_ARCHIVE
|
How do you divide in VBScript?
Division Operator (/) (VBScript) Divides two numbers and returns a floating-point result.
How do you divide in VBA?
Division. The symbol to use when you want to divide numbers is the forward slash (/).
What is mod in VBS?
The modulus, or remainder, operator divides number1 by number2 (rounding floating-point numbers to integers) and returns only the remainder as result.
How do you get a remainder in Visual Basic?
Mod in VB.NET is the Modulo operation. It returns the remainder when one number is divided by another. For example, if you divided 4 by 2, your mod result would be 0 (no remainder). If you divided 5 by 2, your mod result would be 1.
How do you do an addition in VBScript?
The underlying type of the expressions specifies the behavior of the + operator in the following way: – if both expressions are numeric, then addition. – if both expressions are strings, then Concatenate. – if one expression is numeric and the other is a string, then Add.
How do I do math in VBA?
In the table below you see some of the mathematical operations such as addition, subtraction, multiplication, division, exponentiation….Mathematical Operations in VBA.
|Operator||Description||Example A=2, B=10|
|*||Multiply both operands||A * B will give 20|
|/||Divide numerator by denumerator||B / A will give 5|
|^||Exponentiation||B ^ A will give 100|
How do you do calculations in VBA?
Press ALT + F8 shortcut key for opening Macro window & then select the macro. Alternatively, you can press F5 to run the code in VBA screen. In this way, we can make calculations according to the requirement of the user.
What is mod in VBA?
The Mod operator in Excel VBA gives the remainder of a division.
Is there a mod function in VBA?
VBA Mod is not a function, in fact, it is an operation which is used for calculating the remainder digit by dividing a number with divisor. In simple words, it gives us the remainder value, which is the remained left part of Number which could not get divided completely.
How do I apply a mod formula in VBA?
The Mod operator in Excel VBA gives the remainder of a division. Explanation: 7 divided by 2 equals 3 with a remainder of 1. Explanation: 8 divided by 2 equals 4 with a remainder of 0. For a practical example of the mod operator, see our example program Prime Number Checker.
Which is the integer division operator in VBScript?
Integer division (\) - operator of the VBScript language. Description: Used to divide two numbers (return an integer result). Syntax: Note: Before division is performed, numeric expressions are rounded to Byte, Integer, or Long data type expressions. It is valid that: 3\2 equals 1, -3\2 equals -1.
How does the \ operator work in Visual Basic?
The \ Operator (Visual Basic) returns the integer quotient, which drops the remainder. The data type of the result depends on the types of the operands. The following table shows how the data type of the result is determined. Before division is performed, any integral numeric expressions are widened to Double.
What is the quotient of integer division in Visual Basic?
Arithmetic Operations. Integer division returns the quotient, that is, the integer that represents the number of times the divisor can divide into the dividend without consideration of any remainder. Both the divisor and the dividend must be integral types ( SByte, Byte, Short, UShort, Integer, UInteger, Long, and ULong) for this operator.
|
OPCFW_CODE
|
fx-items errors when additional textnodes are present for label
Example:
/demo/fx-control.html
<fx-control ref="selected" update-event="input">
<fx-items ref="instance('list')?*" class="widget">
<template>
<span class="fx-checkbox">
<input id="check" name="option" type="checkbox" value="{value}"/>
--> <label>foo {name}</label>
</span>
</template>
</fx-items>
</fx-control>
The additional 'foo' text node will cause an error when evaluating the expression. It also lets the code run into a syntax error and throws the browser out of state; you need to completely reload in a private window to make Chrome behave again.
Hi,
Could this also be the cause of an issue I’m having with radio button controls when placed in groups on the same page?
I’m trying to implement a multiple choice questionnaire form with single selection choice questions. I’m not able to put a variable in the name attribute of the input tag in order to provide a unique name for a group of controls.
Hi Jake, could you provide an example page that shows your problem? Thanks
Hi Joern,
Absolutely - here is the page which show the problem with radio button groups.
I have replaced the 'data' instance with some example inline data.
Thanks,
Jake.
<fx-fore>
<fx-model>
<fx-instance>
<data>
<answers>
<answer qno="1"></answer>
<answer qno="2"></answer>
<answer qno="3"></answer>
<answer qno="4"></answer>
<answer qno="5"></answer>
</answers>
</data>
</fx-instance>
<fx-instance id="quiz-data">
<quiz id="quiz-1">
<questions>
<question qno="1">
<text>Question 1.)</text>
<options>
<item id="A">Option A</item>
<item id="B">Option B</item>
<item id="C">Option C</item>
<item id="D">Option D</item>
</options>
<answer>B</answer>
</question>
<question qno="2">
<text>Question 2.)</text>
<options>
<item id="A">Option A</item>
<item id="B">Option B</item>
<item id="C">Option C</item>
<item id="D">Option D</item>
</options>
<answer>B</answer>
</question>
</questions>
</quiz>
</fx-instance>
<fx-instance id="response">
<data></data>
</fx-instance>
<fx-submission id="quiz_submission" url="#echo" method="post" replace="instance" instance="response">
</fx-submission>
</fx-model>
<h1>Quiz</h1>
<fx-repeat ref="//answer">
<template>
<fx-var name="qno" value="@qno"></fx-var>
<fx-var name="qtext" value="instance('quiz-data')//question[@qno=$qno]//text"></fx-var>
<h2>Question {$qno}</h2>
<fx-control ref="." update-event="change">
<label>{$qtext}</label>
<div>
<fx-items ref="instance('quiz-data')//question[@qno=$qno]//item" class="widget">
<template>
<span class="fx-radio">
<input id="radio" name="option" type="radio" value="{@id}">
<label>{.}</label>
</span>
</template>
</fx-items>
</div>
</fx-control>
</template>
</fx-repeat>
<fx-trigger>
<button>Send</button>
<fx-send submission="quiz_submission"></fx-send>
</fx-trigger>
</fx-fore>
@coloneltravis investigated this a bit - surprisingly, leaving the @name away completely gives better results. At least now the value is being set correctly for the respective answer. However this reveals another issue - you need to click at least twice to select a radio. This has to do with repeat index setting, I guess - the index should be set immediately.
However the way you tried should actually work but isn't. We'll dive a bit deeper tomorrow to fix this.
@JoernT yes removing @name allows each radio button to be toggled as though they were checkboxes in a multiple-select list. However, typical radio list behaviour should be single-select.
Thank you for your time looking at this issue and hope it helps the project.
Hey,
We looked into this, there are multiple issues going on:
A click in a repeat requires updates: the index function might be used somewhere, so template expressions (and all XPath for that matter) are invalidated and should be rerun.
This causes a rerender of the fx-items, even of the inputs. None of the existing items seem to be reused. This can be improved.
The inputs rerender, but the template expressions in it are not correctly updated. This can be fixed in Fore: just do the same as in a repeat: after the refresh is done, scan for new template expressions
These inputs do not have a name set yet: they are still in a template; the name attribute is still the literal {$qno} when 'checked' is set, which causes all other radio buttons with the same name to be cleared. Since they all have the same name, everything is cleared.
the name cannot directly be updated because the input is not in the DOM yet.
Even with the template expressions updated after a refresh and setting checked in a timeout (after the name attribute is up to date) the main issue is the full refresh when a repeat is clicked.
I'm going to try to minimize that update: maybe make a property on all Fore elements of which XPaths they use. For a click in a repeat we know we can ignore all refreshing of all Fore elements that do nothing with the index function. This is a great performance improvement and prevents these kinds of bugs.
Thanks so much for bringing this up!
|
GITHUB_ARCHIVE
|
Alternative Cisco VPN clients for Windows XP
I'm considering investing in a Cisco ASA5505. As Cisco's own VPN client requires a service subscription, which I am trying to do without, are there any free or low-cost ipsec VPN clients that will work with the ASA and run on Windows XP? Is XP's built-in ipsec client compatible? Any links to guides and walkthroughs would be much appreciated.
Install a virtual server inside the remote network and use RRAS for VPN on it. Then expose the relevant ports through the ASA.
That way all XP/Vista clients can connect reliably.
Mike
Have you taken a look at shrew?
http://www.shrew.net/software. I've used it on windows machines as a replacement for a specific version of the cisco client which was not playing nicely with windows 7.
vpnc combined with network-manager-vpnc is a great option if you're on a linux platform though.
Cisco VPN is proprietary and will not work with anything but Cisco VPN :-) It is IPSec but it is not compatible with other IPSec clients.
Thanks for the clarification. I thought IPSec being an "industry standard" meant that it's cross-vendor compatible.
Well, it's standardized via RFCs, but Cisco just decided not to have cross-vendor compatibility.
Do any vendors do strict IPSec? It appears that Checkpoint is not compatible with standard clients either
Came across this article on how to get Windows Vista to connect to a Cisco PIX using a native client. Since ASA is mostly just a glorified PIX, these options might also work for you.
I also have to agree with the previous posts that the 'vpnc' client that you can install on Linux is just great, as it does not force any network routes or blockage like the official Cisco client does. You decide what your computer routes through the tunnel, as it should be.
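For reference, vpnc reads a small configuration file (e.g. /etc/vpnc/default.conf); a minimal sketch looks like the fragment below, where the gateway, group name, and username are placeholders you would replace with your ASA's concentrator settings:

```
IPSec gateway vpn.example.com
IPSec ID mygroup
IPSec secret mygroupsecret
Xauth username myuser
```

You then bring the tunnel up with vpnc (it prompts for the Xauth password if it is not in the file) and tear it down with vpnc-disconnect.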
A kludge of a solution:
- Virtualize your WinXP installation
- Install Ubuntu 9.04
- Install the Ubuntu package vpnc (VPN Cisco)
- Install VMWare
- Run WinXP inside VMWare with a NAT interface (not bridging)
- Establish your VPN with vpnc in Ubuntu, and the virtual machine will use that VPN connection.
I would consider this a bad solution - too convoluted. You will want a service contract in order to get security updates for your router and to get access to the Cisco Support Site.
If the Cisco device is just too expensive, consider a lower cost alternative, either from a different vendor or "Roll-Your-Own". Just remember RFC 1925 truth #3
With sufficient thrust, pigs fly just fine. However, this is
not necessarily a good idea. It is hard to be sure where they
are going to land, and it could be dangerous sitting under them
as they fly overhead.
Looks like you can run vpnc under Windows via Cygwin.
Tested and working on Ubuntu 9.04 64-bit:
sudo apt-get install vpnc
sudo apt-get install kvpnc
;)
import the Cisco VPN profile and enjoy
Thanks
kartook
|
STACK_EXCHANGE
|
Rating 4.7 stars, based on 366 comments
He summarised that visible cleanliness criteria were more rigid. The gel material, Where Can I Buy Colchicine Without A Prescription, along with the cells where Can I Buy Colchicine Without A Prescription, can conference like EclipseCon and could be inspiring for all. com mehmetsah.com their sites, and in most cases, those sites are created to help them sell a product. And simple common sense validates the premise that either category identifier, serial number and check digit shall be scale patient data, effects that are supported by mechanistic. The Solution And the device can use saliva as a b, all contracts are fulfilled and the code. Based on peer reviewed algorithms the Evotec transcriptomics platform in a row, you might wish to disable this of parsers and writer for parsing. com team spent the last year re writing the web app from the ground up to make it to food, disease susceptibility, and population health. The radio buttons named type control this choice. However, since some of the simpler rules those on many others show that CNCF is a great home. They are often implemented by HR departments in order experiences and the ways in which they differ from or Version Control Service. This works by executing the director application in a sovereign rule over us. We evaluated the performance of 209 SNP calling pipelines, some support for displaying the error messages your component. Errors object is a special class designed to contain all errors provided in validate method. where Can I Buy Colchicine Without A Prescription now i want to run multiple jobs which troubleshooting issues. More than one disbursement was submitted to COD with 29 percent of the workforce, and this number is lost on many people. However, neither the TIDE nor the IDAA assay predicted both indel size and indel frequency for all edited.
Where To Get Colchicine. Drugstore Usa
Competing Purchase Nimodipine Generic as the Staging Environment a an exact counterpart on OUD. The user entry where Can I Buy Colchicine Without A Prescription is reconciled is looked up simply having separate fields for year, month, etc. In the US alone, more than 15 airports have or IBindingList, the view returned is a CollectionView object. The Check the evidence is genuine or valid section multiple times in the same message within a so. Each score enables you to determine whether it is spam along with details such as received time, sender, hidden, due to an internal problem in the application. If the information is medical, credibility is generally enhanced single set of formatting properties, applying the same information are now working in cents will work for a. I found the author s insights into the process propagation, Olsen covers the natural history and taxonomy of beliefs in the equality of the Native Americans, were next obsession because somehow that s more compelling. Maintaining traceability links between unit tests and classes under of a person or a company. First I will explain what a component is and errormsg is not empty. While technological advances have enabled the leadership process to built around a series of short chapters with Sherlock to take the time away from all of the other things I need and want to do, I need to become a lot where Can I Buy Colchicine Without A Prescription disciplined and focussed. With simple types we have to pass constraints for. Maybe it needs to be less than a maximum looks good. One, Ora White Hitchcock, was a friend of the reboot your computer again if you run into any this approach compared with related strategies. In this validation, the performance characteristics of the assay enclosing type and the enclosing method.
|
OPCFW_CODE
|
|
OPCFW_CODE
|
Much of Monty's past wasn't revealed until "Minotaur Mix-Up!", where Monty revealed that he was once a meek, scrawny human who had a rather miserable time at sea. As a young buccaneer, he was trained in the ways of a pirate so that he would one day become a feared buccaneer. However, according to Monty, he was a somewhat pitiful pirate during his training and seemed hopeless for many years. It wasn't until he came across Argos Island in his travels and tried to take the treasure that he was turned into a mighty Minotaur. Monty didn't miss his meek human form, feeling his new appearance made him more of a pirate.
Years later, Monty served under Captain Buzzard Bones as his first mate for many years, until he managed to get a ship of his own. Sometime after this, Monty returned to Argos Island to protect the treasure from thieving pirates.
Powers and Abilities
Monty, as a Minotaur, displays immense physical strength, which he uses as his primary weapon and means of intimidation. Monty is not just brute strength, however; he can be quite clever, having built the various booby-traps on Argos Island. As revealed by Captain Buzzard, Monty's trap craftsmanship was quite good during his travels as well.
Monty takes his job as the guardian of the treasure of Argos Island very seriously, showing no sympathy for anyone foolhardy enough to try to steal his treasure, whether young, old, male or female. He appears to be very stubborn and quick to act violently. However, when he reunited with Captain Buzzard, who almost didn't recognize him without his beard, the two laughed, reminiscing about the good old days at sea.
Role in the series
Monty the Minotaur first appears in the episode "Captain Buzzard to the Rescue!". Captain Buzzard Bones stops on Pirate Island before heading off to Never Land when the Jolly Roger arrives. Captain Hook needs the help of Jake and his crew to rescue Mr. Smee, Sharky and Bones from the Minotaur on Argos Island. Reluctantly, Captain Buzzard agrees to aid Hook in rescuing his crew. It is later revealed that Hook sacrificed his crew trying to steal the Minotaur's treasure, and that the Minotaur is none other than Captain Buzzard's old first mate, Monty. After foiling Hook's scheme and coming to the rescue of Hook's crew, Captain Buzzard decides to continue his adventures with Monty the Minotaur.
Monty reappears in the episode "Minotaur Mix-Up!" Jake and his crew were enjoying a game of pirate checkers with Monty. Later, Captain Hook confronts Monty, challenging him for his treasure. Monty scoffed at Hook's remark and asked what the challenge was. Hook used a pirate paddle ball rigged to fail on Monty's turn, causing Monty to lose. Hook soon gloats over his victory as he is quickly transformed into the Minotaur of Argos Island, while Monty is reverted into a meek, scrawny human sailor, due to Argos Island allowing only one guardian. Monty later challenges Hook to a Minotaur obstacle course around the island for the chance to change back into a Minotaur.
- For more pictures and screenshots of Monty the Minotaur, click here.
|
OPCFW_CODE
|
February 4, 2012
OS wars, just like Browser wars, will be perpetual. The reality, AFAIC, is that whatever you're comfortable with is the "right" OS for you, and no amount of argument is going to change that. Consequently, I'm not about to say one OS is "better" than another. It's like saying white wine is better than red wine. That's an opinion based on taste . . . by definition an opinion can neither be "right" nor "wrong". Supported? One would hope, but sometimes it's not that simple.
'Nix proponents have their "studies", and Windows proponents have their "studies." And all that proves is that each is good at using Google. It doesn't prove (to me, anyway) that one is better than the other. At the end of the day it comes down to an individual's comfort level. (Some 'nix people seem to be obnoxiously arrogant about the security aspects of that OS . . . saying that it is more secure than Windows. B.S.!!!!! It has vulnerabilities, it's just that it doesn't have the market share of Windows, consequently malware writers get a bigger bang for their buck writing malicious code for Windows. But cross-platform malware is getting to be more common. 'Nix is NOT invulnerable.)
A few years ago, I was a refugee from Windows (a whole 'nother story and I'm not about to bash MS . . . suffice it to say I was very angry with a tactic of MS and THAT was what motivated me to try a switch) and tried 'nix (Ubuntu). I didn't have the installation problems that Jim and others have spoken of here or else I likely would have dropped the effort too. Had I had installation problems, I likely would have moved on to try the next OS . . . whether that would have been another 'nix flavor or something else, I don't know. But I was definitely determined it was NOT going to be MS (guess I am sort of bashing them here, but my own experience with MS was less than satisfactory . . . had my experience with 'nix been the same, I would have . . . and STILL will if that circumstance prevails . . . left them in a heartbeat.)
While the transition was rocky in the first few weeks, especially the command line, which can be intimidating for someone coming from a primarily point-and-click GUI (the Gnome desktop helps), I persevered and resisted the temptation to go back to the easy GUI of Windows, and now I'm glad I restrained myself.
But what I'm really getting at is this "comfort" level and choice of OS. I can't really say that Windows is "bad" . . . it just wasn't right for ME. You like white wine (Windows), I like red wine (Ubuntu) . . . big deal, that doesn't say that universally one is "better" than the other. It just happens that Ubuntu was better . . . FOR ME. That doesn't necessarily mean it would be better for you.
I like Firefox. Others swear by IE, Chrome, Opera, Safari . . . whatever. As long as they're comfortable with their choice and it meets their needs, fine by me. I'm not going to argue that FF is "better". I am certainly comfortable with it and it meets my needs, just like Ubuntu, but if either one of them pisses me off like MS did, I'll switch in a heartbeat. I'm "loyal" to neither FF nor Ubuntu. I'm "loyal" to my comfort level (which contains elements of security, usability, speed, and some other things.)
Have I had problems with 'nix? Of course. But so far none of them have been deal breakers and I've solved them. Does it have some disadvantages? Yes, but again no deal breakers . . . yet, anyway.
Right now, I'm a 'nix kinda guy. Tomorrow I may not be.
September 17, 2008
Excellent post BJ, one I fully understand. A great reason to switch to Linux is the cost: it's FREE, and one can still use Firefox and OpenOffice. How is that for a selling point? Kind of hard to resist, Mindblower!
"Light travels faster than sound;
That is why some people seem bright until you hear them speak"
February 4, 2012
November 12, 2008
Jim Hillier said
Hey Bob - Thanks for the encouragement and advice, appreciated!
I was very disappointed that Zorin would not behave but you have inspired me to give Kubuntu another go. I tried Kubuntu a couple of years ago and did like it. I much prefer KDE over Gnome.
Jim, I looked at Zorin and was disappointed, too. If you haven't tried Mepis, PCLOS, or Mint, then you might try one of them. I have found them to be well done and stable. Sometimes they might take a little tweaking but I find their forums very helpful and friendly. Even for a newbie like myself. I still consider myself a newbie on any forum even though I have been using Linux for 5 years or so. At 69, learning is not quite as easy as it once was.
September 5, 2017
Linux was originally developed for personal computers based on the Intel x86 architecture, but has since been ported to more platforms than any other operating system.
|
OPCFW_CODE
|
Unrivaled Medicine God, by Feng Yise – Chapter 2462 – Dao Ancestor Eating Dust!
Chapter 2462 – Dao Ancestor Eating Dust!
How could Ye Yuan's strength be formidable to such an extent?
This fellow was actually going to kill a Deva Fifth Blight?
Coming and going, it was just a single sword!
It turned out that Lin Xiu had already been decapitated!
Lin Chaotian’s speed was very fast. However, he was not as fast as Ye Yuan’s teleportation!
While he was talking, there was a ripple in the void. A powerful aura appeared in everyone's line of sight.
The two figures intersected and passed by each other in just an instant.
There was yet another Deva Fifth Blight!
Vanishing along with him was also Lin Huan.
Another Deva Fifth Blight died?
Suddenly, his expression changed drastically, and he said in alarm and fury, "Enough! Ye Yuan, this ancestor admits that you have the qualifications to negotiate with me!"
One had to know, Deva Fifth Blights had already reached the summit; they were merely a step away from Dao Ancestor!
Chapter 2462: Dao Ancestor Eating Dust!
When everyone saw Ye Yuan torture Lin Huan until he begged for death while talking in a jovial manner, they could not help drawing in cold breaths.
Then his head slid off gently, his corpse falling down softly.
Ye Yuan arrived before Lin Huan and said with a gleeful smile, "You already want to seek death? It's just seven knives. Whatever you gave Wan Zhen, I won't even return it to you twofold; you just need to accept it all according to the bill."
The Dao artifact in Lin Xiu's hands directly snapped into two.
As he spoke, another knife stabbed into Lin Huan.
"Who goes there? To dare to be unbridled in Origin Enlighten Bodhidharma, are you courting death?" Lin Xiu's brows furrowed, already striking out with a sword.
… you think that I'd care about this sort of empty status?"
Several figures directly burst into clouds of blood mist, dying until not even dregs remained!
He took a look at Lin Xiu and his expression changed drastically!
However, he took no notice of it. The aura that appeared was also the same as someone who had newly entered Deva Fourth Blight, far weaker than him.
But very soon, he felt that something was not right.
One of them was Lin Lang!
… you are underestimating these guys' shamelessness too much!" Sacred Ancestor High Priest said with a cold snort.
|
OPCFW_CODE
|
Name for point in a satellite's orbit around a planet when the satellite is furthest from the sun
When a satellite is orbiting a planet (which itself is orbiting the sun) there are periodic points when the satellite is closest to and farthest from the sun, once where it is interposed between the sun and the planet and next when the planet is interposed between it and the sun.
What are the names for these two points?
Have you tried looking this up yourself? These are very common terms and a little Googling should yield the correct answer very quickly.
@Phiteros I keep running into perihelion and aphelion, which are not correct. Every search I put into google returns this.
Ah, I see. The satellite is orbiting a planet which is orbiting the sun. I misunderstood what you were asking. I'm not sure if there is a term for those situations. The closest I can think of is a conjunction.
I think you could call it opposition? Usually meant for planets tho.
I've deleted my answer, I have to agree it doesn't really work.
Thank you for doing research @uhoh! Search engines seem to fully fail on this topic.
Aphelion and perihelion in fact refer to the farthest and closest distances between the star and the planet system (i.e. the center of mass of the planetary system), I think. Given that, probably that terminology can be applied not only to the planet but to the satellite too. Just a thought...
@KaushikGhose do you have any thoughts on the question Do astronomers have an established, systematic way for saying what does or doesn't orbit what? (e.g. “Mars orbits Earth”) I'm looking for something "established, systematic" if possible.
@uhoh neat question! I made an attempt there, but it has no appeal to authority, just "common sense".
@KaushikGhose thank you! I had a hunch you could bring some helpful perspective.
Something like syzygy ?
As far as I know no such terms exist. Nor would they really be useful from a distance to the sun perspective (the difference in distance is normally dwarfed by the distance to the sun). From a geometry, rather than distance, perspective - As @Phiteros said - when the sun is behind the satellite, as seen from the planet we call it a conjunction (bad time to try to talk to the satellite as noise from the sun will impact communications). When the satellite is blocked from the sun by the planet we call it eclipse.
@CarlosN eclipse or opposition?
@KaushikGhose - usually eclipse for a satellite orbiting a body.
Based on comments by @phiteros and @carlos-n under the question, the closest terms seem to be Conjunction and Opposition. As described in the wikipedia article these are meant for apparent positions as observed from a given location - like the Earth, and meant for planets and natural satellites, but the diagram suggests they can be shoe-horned for this purpose.
do you mean inferior conjunction? Also, I've left another answer that addresses this term.
There are some serious problems with the OP's own answer, and so I think conjunction won't do for a satellite in Earth orbit, at least in LEO (where many/most of them are). While Archimedes could move the Earth with the proper fulcrum and a large lever, I'm not sure this can apply to the aforementioned "shoe-horn" as well.
There are two problems actually:
In the context of artificial satellites, the term "conjunction" is frequently used for a three dimensional event; a very close approach of the variety that might result in collision. Satellite conjunction detection and conjunction reports have to do with scenarios where two spacecraft may collide resulting in "end of mission" and a whole lot of brand new space junk.
To read more about this, see what the letter "C" stands for in Celestrak's SOCRATES; Satellite Orbital Conjunction Reports Assessing Threatening Encounters in Space as well as the questions
Conjunction analysis for deep space missions
Which two satellites had a 44% probability of collision at 2017-01-07 21:53 UTC?.
Parallax! For satellites that are not at absurdly large distances from Earth, their apparent solar conjunction wanders all over the place depending on the location of the observer. Even in 2D, for satellites in LEO (the ISS for example) there's a ~140° difference between apparent solar conjunction as seen from one side of the Earth compared to as seen from the other. With a significantly inclined orbit, wording a definition based on apparent solar conjunction becomes even more difficult, as does even trying to draw it correctly in 3D.
Here's a sketch of the 2D problem for a 400 km altitude circular orbit lying in the plane of the ecliptic, showing that the effects of parallax are huge!
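The ~140° figure can be checked with a little 2-D geometry: treating the Sun as infinitely far away along +x, the apparent offset between Sun and satellite is largest when the observer-to-satellite line is tangent to the Earth, giving asin(R/(R+h)) per side. A minimal sketch (assuming a 400 km circular orbit, roughly ISS-like):

```python
import math

R = 6371.0  # Earth's mean radius, km
h = 400.0   # assumed circular-orbit altitude, km

# Closed form: maximum apparent offset per side, at the tangent geometry.
max_offset = math.degrees(math.asin(R / (R + h)))
spread = 2.0 * max_offset  # one side of the Earth vs. the other

# Numerical check: scan observer positions on the Earth's limb; the Sun
# and the satellite both lie along +x, Sun rays effectively parallel.
best = 0.0
for i in range(100001):
    theta = math.pi * i / 100000              # observer's angular position
    px, py = R * math.cos(theta), R * math.sin(theta)
    to_sat_x = (R + h) - px                   # observer -> satellite, x part
    best = max(best, math.degrees(math.atan2(py, to_sat_x)))

print(f"closed form: {spread:.1f} deg total; numerical scan: {2 * best:.1f} deg")
```

Both approaches give a total spread of about 140°, matching the figure quoted above.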
Hi @uhoh this is an interesting comment, and makes sense if you define this visually rather than geometrically. Two points 1. It's not clear if conjunction and opposition are defined visually (like syzygy) 2. While it is a critique of the answer, it does not propose an answer itself.
@KaushikGhose as far as 2. occasionally answer posts are used as space for extended, thoughtful, technical comments where the conventional comment function is insufficient. I've seen this allowed in several SE sites, on a case-by-case basis. Search SE sites for things like "extended comment" or "long comment" or similar(keep the quotes). I found ~40 in Physics SE, ~400 in Math SE, and a few in Space SE and Astronomy SE. As far as 1. The question asks for "the name" rather than a potential name or a name candidate. Your answer is unsourced, it looks like you've just made it up, then accepted.
@KaushikGhose So in place of Archimedes, let's go with Hippocrates and say "desperate times call for desperate measures. It's not a perfect fit, but I think I can shoehorn it in. If you can produce a source that shows that opposition is correct for an artificial spacecraft no matter the shape of the orbit, that's great! But take a Sun-synchronous orbit with say an ascending local time of 17:00 and a) try to draw that and the Sun on a 2D piece of paper and then b) make a "geometrical argument" and I think you'll...
@KaushikGhose ...see that terms like "conjunction" and "opposition" are completely unworkable, and that's why your answer is not correct, and why there isn't a word. It's a 2D answer in a 3D world.
|
STACK_EXCHANGE
|
java.lang.IllegalStateException: Detected both log4j-over-slf4j.jar AND slf4j-log4j12.jar on the class path
Can anyone tell me the difference between slf4j-log4j and log4j-over-slf4j? Which is more standard to use in a Java web application? I currently have both on the classpath and that is causing a runtime exception as the web server is trying to prevent a StackOverFlowException from happening.
Exception:
java.lang.IllegalStateException:
Detected both log4j-over-slf4j.jar AND slf4j-log4j12.jar on the class path
Removing both would be a nice idea.
I'd suggest that you examine the contents of both and figure out how they are different.
It depends, on web framework, and logging implementation. Also it depends on server, but it is a different question.
possible duplicate of difference between slf4j and log4j
seems like using slf4j and logback is very popular
slf4j-log4j is using log4j as an implementation of slf4j.
log4j-over-slf4j causes calls to the log4j API to be 'routed' to slf4j.
You cannot use both of these JAR's at the same time.
Both are valid libraries to use and are equally 'standard', it depends on the project.
In general, if your project is using log4j already and you don't have the ability to update all of your log4j Loggers to slf4j Loggers; log4j-over-slf4j is a quick fix to be able to start using slf4j immediately.
However, if your project is new or does not have an existing logging mechanism and you choose to use slf4j, slf4j-log4j would be the way to go as it is just specifying slf4j should be bound to log4j.
That being said, I agree with c12's comment. Stop using log4j and instead use slf4j and logback.
Perfectly valid and complete answer
Though your answer perfectly saved me, thanks! But having exclusions in many of the dependencies looks a bit hacky. Is there any generic way to direct Maven to apply the exclusion to all the underlying dependencies?
@Sankalp Are you asking Is there a way to exclude a Maven dependency globally?
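For reference, a per-dependency exclusion in Maven looks like the sketch below (the com.example coordinates are purely illustrative). As far as I know, Maven has no true global exclude; the usual workarounds are repeating the exclusion per dependency, or failing the build on banned artifacts via the maven-enforcer-plugin.

```xml
<dependency>
  <groupId>com.example</groupId>
  <artifactId>some-library</artifactId>
  <version>1.0</version>
  <exclusions>
    <exclusion>
      <!-- keep the slf4j -> log4j binding out of the transitive tree -->
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```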
In my project, org.slf4j.impl.Log4jLoggerFactory is in activemq-all-5.7.0.jar, not in slf4j-log4j12.jar. The exception message misled me.
|
STACK_EXCHANGE
|
Discover more from Data Analysis Journal
Ditch Tableau For God’s Sake. It’s 2021 - Issue 50
A recap of data analysis publications, guides, and news over the past month.
Today’s newsletter topic is a little sensitive, controversial, and a long-time pain of mine - Tableau. I was hesitant to write it at first, but if I’m up for a mission to develop analytics here, I can’t stay quiet. And hey, it’s my journal, so I write what I want!
For those who aren’t aware, Tableau has been on the market for over 18 years and is currently considered the leading BI and Analytics platform today. It offers 3 main products - Tableau Desktop, Tableau Server, and Tableau Online. As an analyst, I’m sure you had a chance to work with it at some point in your career, or I’m 99% confident that you eventually will (or PowerBI if you are more a Microsoft type).
Before I go on venting how ineffective, destructive, and sad the Tableau world is, I’ll admit that there are situations when using Tableau is a smart decision for your team:
Your company is a large enterprise. Your team consists of more than 20-30 BI and BA analysts, and the vast majority of them do not possess basic SQL knowledge. You would ideally like to equip them with their own data exploration and give them tools to develop reports on their own. This would mean having a dedicated team of developers who can build Tableau data sources and maintain them.
Your company is a middle size enterprise, and you have to accommodate multiple cross-functional teams (read Product Managers, Marketers, Legal, Sales) with data access, and also protect or lock financial or other user-sensitive data.
Due to your product's nature, you provide data reports to your clients externally (about their order status, usage performance, or account overview). You expect your clients to periodically poke around data sets on their own, but you would rather they have access to a closed, finished data format that is not expected to change.
To summarise, most of Tableau’s advantages come via the data communication aspect. When you evaluate it against data exploration or analysis factors, Tableau is the wrong decision for your team. And here is why.
Cost. There is no flexible pricing that would fit your data volume or usage. Regardless of your team and revenue size, you are most likely to pay the same amount as large enterprises. But it’s not just the license price itself, but also the cost of ownership. Think of Tableau like an ancient, gigantic cargo aircraft that moves on rails instead of flying. It comes with deployment, maintenance, implementation, a set of training and tutorials, and the headache medicine you have to pay for after listening to it grind along for hours. That leads me to my next point.
Usability. Tableau is not an application you can pick up in a few hours while watching Netflix. It has a learning curve. Even worse, it is EXTREMELY unintuitive. As a creator, you will spend many nights tearing your hair out while trying to figure out the right formatting for your reports. There is a reason people put Tableau on their resume, mistaking it for a skill: if you haven't used it before, you're not likely to miraculously learn it a week away from your next bi-weekly data meeting.
Staticity. This is the biggest deal-breaker if you move fast, grow fast, and your data processing and management change a lot. With Tableau, it kicks you in the butt twice:
on a data source layer - once you’ve developed a data source, it takes an unfair amount of time to change it, add an extra column, value, or correct the filter.
on a reporting layer - once you’ve published a report, it’s often easier to re-write the whole thing rather than alter or reformat the existing one. Good luck adding that one new KPI to your current layout. Staticity often leads to too many unorganized and unnecessary dashboards that simply duplicate the same data in different formats.
Distribution. Did you try to embed that chart into a deck you want to send to leadership? Did it work? Welcome to the barren dystopian world of Tableau.
Now, you may counter that usability, staticity, and distribution limitations are solvable with proper staff training and guidance. I'd say that was true for 2003 (when Tableau was created), but not 2021. The modern-day expectation for data lifecycle management is very different.
For my team of analysts, I envision 4 stages of data handling:
Data processing, cleaning, and transformation
While we know the second step is generally the most time-consuming, we still expect the other stages of data management to be roughly proportional in terms of time and effort. 10 years ago, spending weeks or months on these steps was common and acceptable. Today, the expectation is down to days or even hours. And here is how Tableau does its damage to your team:
Tableau's staticity is a real factor that creates an imbalance. I want my team to be more productive by focusing on analysis, forecasting, experimentation, and leveraging and discovering new data sources, not spending most of their time going through tutorials on how to generate a stacked bar chart with 4 different segments. It's not the best use of their talent (especially given there are many other tools where you can do it in minutes).
Tableau usability is way behind and slows teams down. I once spent over 30 hours figuring out how to add an MoM column in a table format for a partial segment in my view (a true and tragic story). Mind you, that was after 4 years of using Tableau in a creator role. I don't consider myself a beginner, but I spent an unforgivable amount of time researching every bit of conditional formatting to come up with a hack to solve something. This is what I mean by extremely unintuitive. When 80% of your actions are hacks developed on top of other hacks to make a basic visualization or calculation work, something is way off with its usability.
Taken from Medium
Tableau usage stunts professional growth. If I were a junior analyst or someone at the beginning of a career as an analyst, I wouldn't spend days on Tableau training. I'd rather learn Python. With Python, you can be free and independent with any visualization, leverage every tool, perform analysis on any machine, all at no cost, and you can extend this knowledge even to machine learning if you choose to pursue data science one day. With Tableau knowledge you can do... well, only Tableau. Congratulations.
I’ll stop here, but there is so much more to add about Tableau’s limitations: lack of notifications, no report scheduling, import limitations, performance issues with high-volume data, poor software support, screen resolution issues, inability to do data cleaning, and more.
There are so many applications out there that let you start fast, access data simultaneously via SQL or Python, format, visualize, share, and iterate. Listen to and respect your analysts: ditch Tableau.
🔥 What’s new this month
Amplitude released a new book Product Analytics for Dummies, “an easy-to-understand resource on using product analytics to build better product experiences for your customers”. I’d read it to give you a recap, but got offended by its title.
If you haven’t yet, check out this free Python newsletter PyCoder's Weekly focused on Python development and various topics around Python and the community.
📈 Your Next Data Science Project
Free and Open Public Data Repositories.
How many people have received a vaccine? Which vaccine was administered the most? The CDC has begun publishing daily historical data on vaccination progress in the US, going back to mid-December 2020. The dataset indicates the number of Pfizer-BioNTech, Moderna, and J&J/Janssen doses delivered, total doses administered by age group, percentages of populations fully vaccinated, and more. You can create a portfolio to showcase your own vaccine tracker! This data is also used for the official BuzzFeed News’ vaccination report.
📊 Weekly Chart Drop
Twice as many people this year quit their jobs compared to last year. I suspect Tableau might have something to do with it.
Thanks for reading, everyone. Until next Wednesday!
|
|
The Bloodline System
Chapter 371 – Walking The Line
Now, Gustav’s arms and feet ached from swimming and kicking through the water, but he wanted to make sure he got there before the three-hour time limit.
She walked the line as if she were walking on solid ground and didn’t even bother looking down or anywhere else.
Many of them hugged the rope with their thighs and dragged themselves forward using their arms.
Other cadets also arrived and jumped in, while some had shown up earlier.
They saw it as a contest, while Gustav only saw it as a new way of training.
Gustav looked around for potential traps, but to his surprise, no one had fallen into any trap even though they had been swimming for several minutes.
Gustav dodged the obstacles along the route as he ran down.
Many cadets were moving at a snail’s pace, and some of them were struggling to pull themselves forward due to tiredness.
Gustav had continued walking now.
Hundreds of feet ahead of Gustav on the rope was Elevora. She held out a book and read as she walked effortlessly along it.
It was only about twenty feet from the bottom, but what awaited them was a five-thousand-foot-wide river that stretched everywhere.
The top of the rocky mountain was rough-looking and littered with large, bumpy rock fragments scattered all over.
There were ropes tied to small metal bars that protruded from the ground on the mountaintop.
Gustav kept running and jumped straight into the river.
Gustav was moving quite a bit faster than the others while walking on the rope, but Elevora was the fastest.
Those who were also walking on one of the ropes looked focused as they moved one step at a time.
Gustav had continued jogging at this point.
Some of them kicked the lower part of their bodies off the edge of the mountaintop and used their palms to support their bodies as they began advancing along the rope.
After another half hour went by, Gustav arrived at the top of the hill, his body drenched in sweat and his muscles sore.
Gustav reached the other end of the rocky mountaintop within two minutes and looked down, only to see a foggy, abysmal view.
He finally arrived a few minutes later and began running up the sloped hill.
|
|
Are texts not displaying properly on Windows 10 after some specific updates, especially one of the latest Microsoft Office updates, when opening MS Excel files saved on the OneDrive cloud?
This issue is probably due to the latest updates to both Microsoft Office and Microsoft OneDrive, which might have messed up the system.
Below is the easiest way to solve this issue quickly, though it will most likely require at least one computer restart.
Start solving the Windows 10 missing-text issue by searching for the Command Prompt program: use the Windows search function and type either simply CMD, or the first letters of "command prompt".
Once the Command Prompt app is displayed, right-click on it and run it as administrator.
However, if you are experiencing the error where Windows programs display no text at all, you might not be able to use the Command Prompt, as no text is displayed and it is impossible to type anything into the window.
In that case, the only solution is to restart your computer, by using the Windows menu restart option.
Windows 10 text not displaying and apps not working - Microsoft
Once the computer has restarted, try again to open the command prompt app as an administrator.
If text is displayed, you are now able to use the program and start a scan.
Enter the command below and validate with the Enter key.
sfc /scannow
This will begin a system scan that might fix system issues introduced by some Windows updates.
Once the process is over, which will take around 5 minutes depending on your system, the output should indicate that some system issues have been resolved.
A log is available at given path on your computer in case you want to check it, but it is very technical and might not bring you much information.
FIX: All text is missing from Windows 10
After the SFC scan, in the same command prompt window, run another program that might solve additional issues on your computer.
DISM /Online /Cleanup-Image /CheckHealth
This command only checks whether the component store is corrupted; if it reports corruption, run DISM /Online /Cleanup-Image /RestoreHealth to actually repair it.
Before restarting your computer and having the changes applied, one last step might be to check your system for possible updates that have been created in the meanwhile.
In the Windows search bar, search for the check for updates program and open it.
There, if any system update is available, they will be displayed.
Download and install them as soon as possible, as they might also solve additional issues with your computer, and possibly with your Microsoft Office suite and the related OneDrive cloud issues.
After the updates have been installed, restart the computer to have them applied.
Almost all text is missing from windows 10 Solved - Windows 10 Forums
After having restarted your computer and run these different solutions, text should display again in your applications, such as Microsoft Office, the on-screen keyboard, and Windows Explorer.
It should also no longer be a problem to open MS Excel files that are stored on the Microsoft OneDrive cloud and that were previously causing Windows Explorer to crash.
|
|
How to protect and discover secrets with Gitleaks?
Detecting and discovering secrets or (hardcoded) passwords in a code repository should be an ongoing process for everyone involved in code development. But this process should not take all the time so that we have more time to contribute to good code quality. Fortunately, nowadays there are various tools that help us to automatically check that no sensitive data is present. Secrets, such as API keys and passwords are a well-known example of this. Continue reading on how you can easily do this in your local dev environment with Gitleaks.
- Install gitleaks on your local system. I recommend using a package manager as the installation of packages will be easier and faster.
# Mac OS
brew install gitleaks

# Windows
choco install gitleaks

# Linux
curl -s https://api.github.com/repos/gitleaks/gitleaks/releases/latest | grep browser_download_url | cut -d '"' -f 4 | grep 'linux_x64' | wget -i -
tar xf gitleaks_<replace with version>_linux_x64.tar.gz
mv gitleaks /usr/local/bin
rm gitleaks_<replace with version>_linux_x64.tar.gz
- Open the Git repo you want to detect secrets from, e.g.
- Perform an initial check to see if there are any leaks in your Git repository with
gitleaks detect .
- Check the details with
gitleaks detect -v
Instead of manually executing gitleaks every time you want to commit something to the Git repo, we can automate this with a pre-commit hook. As the name suggests, this hook executes whenever a git commit command is detected. Follow the steps below to activate it.
- Install pre-commit from https://pre-commit.com/#install
- Create a .pre-commit-config.yaml file at the root of your repository with the following content:
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: check-yaml
      - id: check-json
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.17.0
    hooks:
      - id: gitleaks
- Update the version of gitleaks in the file by executing pre-commit autoupdate
- Install the pre-commit hook with pre-commit install
That's it! Every time you commit changes to your repository, the pre-commit hook will scan the code with gitleaks and also validate YAML and JSON files. Read the next section to see it in action.
For testing purposes I will create a
config.py file in the root of my Git repository. Now that gitleaks is active in my local dev environment, I expect the pre-commit hook to detect the hardcoded secrets below.
id = '12345'

base64_secret = 'lc3Z2Ugc2d1ZXN15IHBhc3N3bsIGZ2Ugc2d1ZXNz3Jk'
hex_secret = '313ed420e51118b376cd5133b8b'
basic_auth = 'http://username:[email protected]'

aws_access_key = 'EXAMPLEPASSWORDAWS'
aws_secret_access_key = 'MI/K7MDENG/bPxRfiCYEXAMPLEKEYwJalrXUtnFE'
git add .                            # Stage all the changed files
git commit -m "created config file"  # Commit the staged files
When I try to commit the changes the pre-commit hook with gitleaks will prevent the commit because some leaks were found.
Note: Gitleaks ships with a set of default rules, but you can customize these to have more fine-tuned rules for your project. To temporarily disable the gitleaks pre-commit hook, prepend
SKIP=gitleaks to the commit command, e.g. SKIP=gitleaks git commit -m "skip gitleaks check", and it will skip running gitleaks.
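To illustrate what a customization could look like, here is a sketch of a .gitleaks.toml placed at the repository root. The rule id, regex, and allowlist path below are all made up for this example; check the gitleaks documentation for the exact schema of your version:

```toml
# .gitleaks.toml -- hypothetical example of extending the default rules
[extend]
# keep the built-in gitleaks rules and add our own on top
useDefault = true

[[rules]]
id = "internal-api-token"                   # made-up rule id
description = "Company-internal API token"  # made-up token format
regex = '''internal_[a-zA-Z0-9]{32}'''

[allowlist]
description = "Ignore test fixtures"
paths = ['''tests/fixtures/.*''']
```

With a config like this, gitleaks keeps its default detections while also flagging your organization-specific token format.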
|
|
I wrote an alternative implementation of jar handling from scratch, in order to simplify
and reduce the amount of code, and specifically to address the problems I kept running
into when trying to fix JBVFS-4.
I currently have what appears to be a fully working jar handling implementation, that
exists in parallel to vfsjar - I named it vfszip. It works with jboss trunk, it does away
with issues I was working on in vfsjar, and has no problems with file locks on windows -
undeployment and redeployment work as they should. I don't think it's production
ready but it may be getting close. Some things still need to be finalized - temp files for
inner jars, serialization, privileged blocks, and maybe other things I don't see yet.
Performance-wise it is comparable to vfsjar, I did not do any memory profiling yet. Also
archive file closing / reopening is a little bit too aggressive at the moment, so
performance can still be improved by a few percent.
Some details about my implementation ...
I made no change to VFS API - I did make some additions and changes in abstract context
and handler classes that don't affect other vfs context implementations. I changed
FileSystemContext to use vfszip impl instead of JarHandler - it could be made configurable
through system properties.
I centralized all the logic in one place - in ZipEntryVFSContext, so it's easy to
control access to resources (files). This way handlers are little more than proxies to a
context. I tightly coupled ZipEntryHandler to ZipEntryVFSContext. I think it's not
such a good idea to pass FileSystemContext to a JarHandler, for example - it significantly
limits how the two can interact. My ZipEntryHandler presupposes it is created through
ZipEntryContext, so it works with ZipEntryContext directly and can interface with it in
implementation specific ways. I remodeled the code in such a way that the same code can be
used to handle inner jars and outer jars by mounting ZipEntryVFSContext into another
ZipEntryVFSContext - I introduced a mounting mechanism using ReferenceHandler and a little
extra code in abstract classes. This way a jar file inside FileSystemContext is handled by
mounting a ZipEntryVFSContext. The code is more modular and more contained.
JBVFS-4 is a natural non-issue in this alternative implementation. Handling of inner jars
of arbitrary depth is automatic. When file locking issue came to my attention I was able
to fix it in a few hours with this implementation. Due to centralized access to jar files
the issue was trivial to fix.
My implementation makes no use of jar:file: URLs. My handler does not extend
AbstractURLHandler - I don't use URL for getLastModified(), getSize(), openStream().
My implementation controls access to JarFile centrally inside context and opens and closes
it as necessary. For URLs, vfsURLs I always generate vfszip: url schema so any URL access
goes through VirtualFileURLConnection and through my context again.
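As a language-neutral illustration of the mounting idea (the class and method names below are hypothetical and heavily simplified, not the actual VFS API), the core of the design is that handlers stay thin proxies while one context can be mounted inside another:

```python
# Illustrative sketch of "handlers are proxies to a centralized context"
# and of mounting one context inside another. All names are hypothetical.

class ZipContext:
    """Owns all access to an archive's entries (in the spirit of ZipEntryContext)."""

    def __init__(self):
        self._entries = {}   # entry path -> contents
        self._mounts = {}    # path of an inner archive -> mounted ZipContext

    def add_entry(self, path, contents):
        self._entries[path] = contents

    def mount(self, path, inner_context):
        # An inner jar is handled by mounting another ZipContext at its path.
        self._mounts[path] = inner_context

    def read(self, path):
        # Delegate into a mounted inner context if the path crosses into one;
        # this makes inner jars of arbitrary depth automatic.
        for mount_path, inner in self._mounts.items():
            prefix = mount_path + "/"
            if path.startswith(prefix):
                return inner.read(path[len(prefix):])
        return self._entries[path]


class Handler:
    """Little more than a proxy: all real work happens in the context."""

    def __init__(self, context, path):
        self._context = context
        self._path = path

    def open_stream(self):
        return self._context.read(self._path)


outer = ZipContext()
inner = ZipContext()  # represents an inner jar, mounted below
inner.add_entry("META-INF/MANIFEST.MF", "Manifest-Version: 1.0")
outer.mount("lib/inner.jar", inner)

handler = Handler(outer, "lib/inner.jar/META-INF/MANIFEST.MF")
print(handler.open_stream())  # Manifest-Version: 1.0
```

Because every read funnels through the context, concerns like closing the underlying archive file (the Windows file-lock issue) can be handled in one place.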
The code is in svn:
If anyone is interested, give it a try, take a look at the code, tell me what you think,
what's missing, what's not working properly ... I'll be glad to answer any questions.
|
|
IntelliJ/PyCharm Project Structure for Existing Python Application
I recently purchased the IntelliJ-IDEA 13 for use of downloading the Python plugin (which I am told has the same code-base as PyCharm) and working on a python web project (specifically Flask). I thought it would be a near seamless transfer from Sublime text but I am having some trouble.
So here is my problem: I am trying to import an existing Python application (which, as said earlier, uses Flask), but I cannot set the correct file structure when importing my project.
My existing/current project is set up as so (it is modeled after most Flask applications):
/Base Directory
/app
/static
*CSS/JS/IMGs go here*
/templates
*Templates go here*
__init__.py
forms/models/routes.py
run.py
If you want to see the actual project, it is available on my GitHub Project Page.
IntelliJ apparently has the feature to import existing code into a project, however whenever I go to Import Project, and selecting my Base Directory folder, it always turns out like this:
When I create a New Project -> Flask Project (or any other Python project type) it ends up looking like this:
The latter is more of what I am looking for (seeing the other files inside of the project, instead of just the root folder).
So my question is: Is there a way to adapt my current file structure to work with IntelliJ/PyCharm or vice-versa (and be able to work with the other files)? Or is there a quick fix or mistake that I am making?
Thank you in advance, and remember my project page is here.
-EpicDavi
Old Solution
I have found a temporary solution for this for the time being.
I ended up just creating a new project and copying the files from the old one into the new one. This lets me get the project panel like the other projects but is still a hassle moving files around.
This is of course not perfect, seeing as my original project is on GitHub and will have to synchronize the files any time I want to commit.
Edit. Newer Solution
Instead of doing the above, I have found that if you create a new Python project inside of the directory of choice (in my case being the Base Directory) with no other settings, it will set up the project in that area with the necessary files. Everything is running smoothly for me.
Not fully sure, but wouldn't Refactor > Move be an option for this case?
If you use Flask you need: Settings -> Project -> Project Structure, and mark your project folder as Sources.
|
|
There is no denying some of the articles on ROK are mean spirited, or at least insensitive. I’m talking about Fat Shaming Week, 6 Reasons Why You Should Not Rape a Girl, and 5 Reasons To Date A Girl With An Eating Disorder. But if you read them and laugh, some people say you are automatically a bad person.
Because you’re laughing at something that is actually very serious.
However, the type of people who attack ROK for mean humor is also the type who seem to love Monty Python. Of course, it might not be exactly the same people, but it doesn't matter. I don't see many people criticizing anyone for liking Monty Python.
This Monty Python:
“Sorry, but the death of my pets still haunts me and I find this horrible and anti-animal. “
“Fuck you, death and amputation are very serious. How dare you laugh, my grandfather lost his arm protecting his country from intruders, too.”
Or how about the movie series that almost everyone loves – Evil Dead? If you haven’t forgotten, a woman gets raped by a tree in that movie (well, the first one anyway).
People get killed and mutilated. Since when is loss of life funny?
Well, since always. Anything can be funny if it's presented in a funny way, but not to everyone and not at all times. In different cultures and times, some things are acceptable to laugh at, while other stuff is off-limits. And then, some people decide it's ok to laugh at some bad stuff but not other bad stuff, consider themselves saintly for following that rule, and consider you evil for not doing the same.
Unless someone swears to never laugh at anything bad, ever, they are not living up to their own morality. And taking everything so seriously makes us humorless and depressed, and not fun to be around. Therefore, holding onto such a moral is a very weak, self-handicapping principle in the first place.
Moral crusaders might say :“ROK authors are mean-spirited assholes with crooked morals. Monty Python are good-natured guys, and it makes all the difference”. Yes, ROK has a lot more negativity to it than Monty Python. However, it doesn’t change anything. You don’t have to agree with ROK’s negativity or their opinions to laugh at their jokes. I can, for example, laugh at jokes of a hardcore leftist just as much, if they are funny to me.
However, it’s unlikely that feminist or leftist jokes would be funny to me. This is not a hypocrisy. It is a preference. Some jokes would not be funny to me because they feel repellent to me, and some are not funny because they are based on assumptions I don’t accept. I won’t laugh at them, but I won’t try to use LOGIC to prove why “it’s not funny” or call for the heads of those who made the jokes.
And therefore I conclude that you can laugh at rape, murder, eating disorders, dead pets and anything else you find funny without turning evil.
Of course, there is such a thing as tact, appropriate place/time to make a joke, and inappropriate place/time. But that’s a whole another topic.
|
|
Journal Entry Week 2 (due Sep 21) - fai-wasimon/r4r_lab_notebook Wiki
FOSS program -- Sept 15, 2022
Lecture notes and some cool resources:
Open Science Pillars: Open Access Publications, Open Data, Open Methodology, Open Educational Resources, Open Sources Software, Open Peer Review
- Open Access Publications: when you publish research, it's accessible to anyone at no cost
- Do we have funding to pay journals to have your publication being open access?
- Oh look: https://new.library.arizona.edu/about/awards/oa-fund
- Open Peer Review is interesting -- per Erika, some public health journals are cool in that you can request open peer review. In poli sci, do we have this? I only know of double-blind process but I can see how double-blind during the review process but open identities+comments after can keep both reviewers and authors more accountable and act as an incentive for reviewers to give more useful comments as well.
- More info on Open Peer Review https://open-science-training-handbook.github.io/Open-Science-Training-Handbook_EN/02OpenScienceBasics/08OpenPeerReviewMetricsAndEvaluation.html
- Open Methodology -- (per Sep 20 session discussion) can post questions and methodology before so you don't just tinker with data until we get significant results, but where can we post/publish this? This seems like one of the more challenging pillars to implement, but maybe in terms of changing norms, not in terms of technical difficulties.
- Communications with public -- inspired by Gift's comment: if our work is more technical, include in funding fees for people who are experts in science communication to help communicate with the public as well
- Is there a way (or are there existing ways) we can get people credits for publishing methodology or some data along the way? This might give people more incentives for Open Methodology + Open Data and not having to wait to increase your h-index for example.
Potential project for FOSS could be: creating website so I can just direct people to one place when I present to the dept
- principles/conceptual (e.g. FAIR + CARE) -- not a lot is needed here I don't think. The field is leaning toward Open Science already. But having a place that people can go to for reference/resource that is more organized and more customized toward the field might be good.
- technical -- people are interested in containers -- want to present this -- seems relevant to the field with collaborators who use different softwares or version of softwares
Some reflections on Open Science in general: Open Science right now focuses on Open Science after a research question is asked. Maybe a new version of Open Science can also reflect on who gets the access to education to get to ask questions and make science in the first place -- which is not that far from this current version -- like the open educational resource/open data part can lead to more capable people producing knowledge, reducing barrier to knowledge creation and dissemination.
|
|
Participants are allowed to investigate different issues and inspect variables, or set breakpoints. In a few moments you'll be connected and you can start collaborating. A host can also share a read-only collaboration session. Note that, given the level of access Live Share sessions can provide to guests, you should only share with people you trust and think through the implications of what you are sharing. To provide guests with full access to your solution without actually requiring you to upload any of your code, Live Share communicates only the file system structure of your project to others. You might be on a call sharing your screen when it occurs to you that you could be using Live Share instead.
You can either edit together or independently meaning you can seamlessly switch between investigation, making small tweaks and full collaborative editing. Effective Teams Examine Root Causes One of the main reasons he left Microsoft, Haack said, was because of the endless meetings he had to endure, and he recounted the familiar problems with meetings that everyone in the crowd was familiar with. You can highlight a chunk of code to discuss and it goes directly into the chat so there is context for your comments. See information below for what this looks like. All of these things get in the way of smooth, effective collaboration. Follow her on Twitter: and see her Pluralsight courses at.
The extension is also available for download for and users. Unlike clicking the pin icon, this list appears even if there is only one other person in the session with you so you can always quickly see where someone else is located. Visual Studio Live Share Public Preview May 7, 2018 Amanda Silver, We are excited to announce the public preview of Visual Studio Live Share! A terminal window will appear telling you where the browser launcher will be installed. The views will show all the participants in your session. Security Tip: Want to understand the security implications of some of Live Share's features? And now, with tools like Enterprise Agile, you can manage work across projects and teams in a simple, productive environment. Debug actions, such as step or skip-over, are also relayed to collaborators.
Note even if you are unable to get browser integration working you can still. It also introduces a new deployment and extensibility model for global tools. If you want to disable this feature, update liveshare. The server on the port you specified will now be mapped to each guest's localhost on the same port unless that port was already occupied! For more information on how to share your projects securely, refer to the. Simply enter your password when prompted and press enter once the installation completes to close the terminal window. How dare you go behind our backs and try to sabotage the work that we're doing. Install Linux prerequisites Some distributions of Linux are missing libraries Live Share needs to function.
New sessions automatically create a corresponding chat channel that you can persist with the code, or dispose of when you are done. Any temp files are automatically cleaned up so no further action is needed. If you are in a shared debugging session and you step into a file that is in the. Live Share wants to further enhance that experience and offer new ways to work with your teammates. ~ The Visual Studio Live! New in Visual Studio Live Share 0.
Linux users: You may be prompted to enter a user code if you are using an older version of Live Share v0. With our innovative software and the support you need to make the most of it, you can design and create apps for any platform, manage application lifecycles, create modern reports with actionable insights, and more. These would be things like themes, icons, keyboard bindings, and so on. Haack organized his presentation under several main topics, and following are some of the highlights. This seems like the write get it? The shared terminal can be read-only or fully collaborative so both you and the guests can run commands and see the results. Guest: Joining Session Joining an existing collaboration session.
When you join a Live Share session, you get the full multi-file context of that project in your own familiar, personalized environment, with themes, keybindings, and customizations intact. Note: If more than one other person is in the collaboration session, you'll be asked to select the participant you want to follow. However, in some cases, you may find this behavior disruptive. Right-clicking the solution in Solution Explorer reveals a menu with options to add a file or folder, as well as modify, debug and launch settings. When guests join a session, the host is notified and can accept or reject that guest, as well as kick a guest off any time. Maybe it's time to try out feature flags: what they let you do, how it'll help your Scrum teams, and also show you how to actually implement them in your applications.
Note: If you have not yet installed the Live Share extension, you'll be presented with links to the extension marketplace. We even offer the Microsoft HoloLens Development Edition — a self-contained holographic computer that lets you interact with holograms in mixed reality. However, please note that this feature comes with a few language-related limitations. They can however, still add or remove breakpoints, and inspect variables. When opened in a browser, this link allows others to join a new collaboration session that shares contents of these folders with them. Real-time code reviews Another big area of collaboration among teams comes when committing your code and conducting reviews.
|
|
Minimum value for density, rod radius in AppPrism?
I was trying to make the whole model stiffer such that if I toss it, it won't collapse. I decided to reduce the rod radius, since it was 0.31 m (way too large for my model), to around 0.031 m (pretty reasonable). Turns out, upon compilation, the model was not visible at all. Does it have something to do with double precision? Reducing the density by a factor of 10 also had the same effect.
Here are my input parameters for a working model:
0.2, // density (mass / length^3)
0.31, // radius (length)
1000.0, // stiffness (mass / sec^2)
10.0, // damping (mass / sec)
5000.0, // pretension (mass * length / sec^2)
10.0, // triangle_length (length)
10.0, // triangle_height (length)
10.0, // prism_height (length)
Moreover, is the stiffness entered above that of the tensioned string or the rod? (It appeared to be that of the string.) Then where do I set the stiffness of the rods? Or are they not compressible at all?
See this article on scaling:
http://www.bulletphysics.org/mediawiki-1.5.8/index.php?title=Scaling_The_World
Note that the default 3_prism is run at cm scale (Gravity 981 cm/sec^2).
This is changed at the app level. This is why there aren't specific units
in the comments.
The rods are assumed to be rigid (not compressible), therefore the
stiffness only applies to the string.
Brian
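The dimensional analysis behind that scaling can be sketched in Python. The helper and the exponent table below are illustrative (not part of NTRTsim), and the numeric values are examples only: each quantity is multiplied by the length scale factor raised to its net length dimension, which is why gravity becomes 981 when working in cm.

```python
# Dimensional-analysis sketch for changing the length unit (m -> cm, L = 100).
# This helper is illustrative, not part of NTRTsim; values are examples only.

L = 100.0  # 1 m = 100 cm

def rescale(params):
    # Net length exponent of each quantity, per the units in the comments above.
    exponents = {
        "gravity": 1,      # length / sec^2
        "density": -3,     # mass / length^3
        "radius": 1,       # length
        "stiffness": 0,    # mass / sec^2 -- no length dimension
        "damping": 0,      # mass / sec
        "pretension": 1,   # mass * length / sec^2
    }
    return {k: v * L ** exponents[k] for k, v in params.items()}

metres = {"gravity": 9.81, "density": 200.0, "radius": 0.0031,
          "stiffness": 1000.0, "damping": 10.0, "pretension": 50.0}
scaled = rescale(metres)
print(scaled["gravity"])  # gravity in cm / sec^2
```

Note that stiffness and damping carry no length dimension, which is why they survive a change of length unit unchanged.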
Leaving this open in case we want to adopt these default values.
I tried to decrease the value of density to 0.1 and there the model disappears from the screen. It would then seem that it is in our interests to use grams for mass instead of kilograms.
See the discussion about Scaling:
http://ntrt.perryb.ca/doxygen/scaling.html
Vytas SunSpiral
Dynamic Tensegrity Robotics Lab
N269 Rm. 100
Stinger Ghaffarian Technologies
Intelligent Robotics Group
NASA Ames Research Center
|
|
How to use rel=canonical/rel=alternate when number of items per page on each site are different?
I have a website that shows photos accessible from a gallery section (that shows relevant photo thumbnails) divided up into pages.
On the desktop version of the site, the gallery section is divided into groups of 100 (as in 100 thumbnails per page).
On the mobile version, the same gallery section is divided into groups of 50 thumbnails in order to lower bandwidth costs for all.
Only other difference between the two versions of the pages are minor cosmetics, but the text is mostly the same (roughly 80+% duplicate content).
I'm just trying to figure out the best way to use canonical and alternate here.
I believe that for the first 50 thumbnails, I can use rel=alternate on the desktop page to point to the same page number on the mobile site, and rel=canonical on the mobile page to point back to the desktop version of the page (the one containing the rel=alternate).
Now how do I handle thumbnails 51 to 100? Do I specify, via the appropriate rel=canonical tags on the mobile site, that thumbnails 1 through 100 are associated with page 1 on the desktop site? Or does rel=canonical only work if the thumbnail count divides evenly on both sites (for example, both the desktop site and the mobile site showing 100 thumbnails per page)?
You should not be using rel=canonical or rel=alternate on page 2+. Instead you should be using rel=prev and rel=next. That will allow search engines to associate the text on all of the pages in the pagination with page 1 and only rank the first page.
I think you've misunderstood Mike's question. I think he is referring to the example.com and m.example.com setup.
I understand. He should use rel=prev and rel=next on page 2 for each, and only use a canonical tag on page 1, as he already knows how to do.
I am not sure if this is absolutely correct, but I think the following should be fine.
So for every one page in the desktop version (D1), you have two pages in the mobile version (M1, M2). This is what I would do:
D1 will have alternate tag pointing to M1.
M1 will have canonical tag to D1.
M2 will have canonical tag to D1.
Now, why is this fine?
What you are essentially doing here is specifying a View All page (D1) for all component pages (M1 and M2). https://support.google.com/webmasters/answer/1663744?hl=en
Apart from this, you can have usual rel=prev and rel=next within D1,D2,D3...and M1,M2,M3,M4,M5,M6...
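To make this concrete, the head tags could look something like the following (all URLs and paths here are placeholders, not the poster's actual site):

```html
<!-- D1: desktop page 1, the "View All" target -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="https://m.example.com/gallery?page=1">
<link rel="next" href="https://example.com/gallery?page=2">

<!-- M1: mobile page 1 -->
<link rel="canonical" href="https://example.com/gallery?page=1">
<link rel="next" href="https://m.example.com/gallery?page=2">

<!-- M2: mobile page 2 (still canonicalizes to D1) -->
<link rel="canonical" href="https://example.com/gallery?page=1">
<link rel="prev" href="https://m.example.com/gallery?page=1">
<link rel="next" href="https://m.example.com/gallery?page=3">
```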
This is probably what I will have to do. Now I'm going to have to update my pagination options from letting users choose 100, 200, or 500 thumbnails per page to 100, 200, or 400, because 200 divides evenly into 400. This is the moment where I want to wring Google's neck, because now I have to regenerate the thumbnail sprite sheets. Luckily I have software that does it, but some guests might be inconvenienced. I'll see if I can find a better answer (probably not) before having to go this route.
|
STACK_EXCHANGE
|
"""
Non-UI specific utility functions
@author anshulrao
"""
import pandas as pd
import os
EMPLOYEE_PICKLE_FILE = "../data/employee.pkl"
PROJECT_PICKLE_FILE = "../data/project.pkl"
def remove_all_data():
"""
Removes all the employee and project data.
"""
if os.path.exists(EMPLOYEE_PICKLE_FILE):
os.remove(EMPLOYEE_PICKLE_FILE)
if os.path.exists(PROJECT_PICKLE_FILE):
os.remove(PROJECT_PICKLE_FILE)
def dump_details(category, data):
"""
Dump the data entered by the user (from the application)
as pickle files.
:param category: Tells if the data is for the project or employee
:param data: The main data entered by the user from the application.
"""
old_data = None
filename = EMPLOYEE_PICKLE_FILE if category == "employee" else PROJECT_PICKLE_FILE
if os.path.exists(filename):
old_data = pd.read_pickle(filename)
new_entry = pd.DataFrame(data, index=[0])
if old_data is not None:
pd.concat([old_data, new_entry]).to_pickle(filename)
else:
new_entry.to_pickle(filename)
def allot_projects():
"""
The primary function that allots the projects to the employees.
It generates a maximum match for a bipartite graph of employees and projects.
:return: A tuple having the allotments, count of employees allotted and
total project headcount (a project where two people need to work
will have a headcount of two).
"""
allotments = []
try:
emp_data = pd.read_pickle(EMPLOYEE_PICKLE_FILE)
project_data = pd.read_pickle(PROJECT_PICKLE_FILE)
except IOError:
print("Either employee or project data is not present. No allocation done.")
return [], 0, 0
employees = []
for _, emp_row in emp_data.iterrows():
# Keep only the skill columns flagged with 1 for this employee.
flagged = emp_row[emp_row == 1]
skills = set(flagged.index)
employees.append(
{
'name': emp_row['name'],
'value': skills
}
)
projects = []
for _, project_row in project_data.iterrows():
n = int(project_row['emp_count'])
for i in range(n):
projects.append(
{
'absolute_name': project_row['name'],
'name': project_row['name'] + str(i),
'value': set(project_row[['domain', 'language', 'type']].values)
}
)
matrix = []
for e in employees:
row = []
for p in projects:
if len(e['value'].intersection(p['value'])) >= 2:
row.append(1)
else:
row.append(0)
matrix.append(row)
employee_count = len(employees)
project_count = len(projects)
# An array to keep track of the employees assigned to projects.
# The value of emp_project_match[i] is the employee number
# assigned to project i.
# A value of -1 indicates nobody is allocated to that project.
emp_project_match = [-1] * project_count
def bipartite_matching(employee, match, seen):
"""
A recursive solution that returns true if a project mapping
for employee is possible.
:param employee: The employee for whom we are searching a project.
:param match: Stores the assigned employees to projects.
:param seen: An array to tell the projects available to employee.
:return: `True` if match for employee is possible else `False`.
"""
# Try every project one by one.
for project in range(project_count):
# If employee is fit for the project and the project has not yet been
# checked by the employee.
if matrix[employee][project] and not seen[project]:
# Mark the project as checked by employee.
seen[project] = True
# If project is not assigned to anyone or previously assigned to someone else
# (match[project]) but that employee could find an alternate project.
# Note that since the project has been seen by the employee above, it will
# not be available to match[project].
if match[project] == -1 or bipartite_matching(match[project], match, seen):
match[project] = employee
return True
return False
emp_allotted = 0
for emp in range(employee_count):
# Mark all projects as not seen for next applicant.
projects_seen = [False] * project_count
# Find if the employee can be assigned a project
if bipartite_matching(emp, emp_project_match, projects_seen):
emp_allotted += 1
for p, e in enumerate(emp_project_match):
if e != -1:
allotments.append((employees[e]['name'], projects[p]['absolute_name']))
return allotments, emp_allotted, project_count
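The recursive core above is Kuhn's algorithm for maximum bipartite matching. Stripped of the pickle and pandas plumbing, the same idea fits in a few self-contained lines:

```python
def max_bipartite_matching(matrix):
    """matrix[e][p] is truthy if employee e fits project p.
    Returns (match, count): match[p] is the employee assigned to
    project p (-1 if unassigned); count is how many employees got one."""
    n_proj = len(matrix[0]) if matrix else 0
    match = [-1] * n_proj

    def try_assign(emp, seen):
        for proj in range(n_proj):
            if matrix[emp][proj] and not seen[proj]:
                seen[proj] = True
                # Free project, or its current holder can be re-seated.
                if match[proj] == -1 or try_assign(match[proj], seen):
                    match[proj] = emp
                    return True
        return False

    count = sum(try_assign(emp, [False] * n_proj)
                for emp in range(len(matrix)))
    return match, count

# Employee 0 fits both projects; employee 1 fits only project 0.
print(max_bipartite_matching([[1, 1], [1, 0]]))  # → ([1, 0], 2)
```

The re-seating step is what distinguishes this from a greedy assignment: employee 1 displaces employee 0 from project 0, and employee 0 finds an alternative in project 1.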
|
STACK_EDU
|
Add, delete, and customize team settings.
Teams are used to categorize groups of users. You can group users by any criteria you choose.
To create a team, click the Cog in the header. The Settings page will appear.
On the left side of the Settings page, click Teams below the USERS & SECURITY menu heading. This is where you manage teams in the TestMonitor environment.
The Teams overview page shows all teams in your TestMonitor environment, including their name, description, and the number of members.
Click the Cog near the list of teams to customize column display on the page.
Adding a team
To create a new team, click the Add Team button in the top right of the Teams overview page.
A pop-up box displays.
Fill in the following fields:
A short, clear team name (e.g. company name, business unit, department, or sprint team).
Click Save, or Cancel to go back to the Teams overview page.
Viewing team details
To view information about a team, open the Team Details page.
On the Teams overview page, click a name or the arrow button. The Team Details page displays.
You can access the following information:
The name of the team.
Shows users linked to the team.
All activities for each object in TestMonitor are logged.
Updating a team
You can change information on the Team Details page. Move the cursor over a field. A Pencil icon displays. Click a field and then edit the text.
Click Save, or Cancel to discard the changes.
You can add members to a team by clicking the Add Members button in the top right of the Teams detail page.
A pop-up box displays. You can assign one or more users to a team. Click Add to save the new members, or Cancel to go back to the previous page.
To delete a member, click the red cross next to the name of the member. A pop-up box displays. Click Delete, or Cancel to go back to the previous page.
Updating multiple teams
TestMonitor allows you to batch-edit teams.
On the Teams overview page, move the cursor over a team and check the box on the left. Select additional teams, if required. Select all by checking the box in the table header. Click on the green multi-select button in the toolbar and then select a batch action.
The batch-edit function offers the following action:
Click Delete to remove all the selected items.
Deleting a team
To remove a team, open its Details page and then click the three dots in the top right corner of the page.
Click Delete in the pop-up box and then Delete in the confirmation box.
Restoring trashed teams
Go to the Teams overview page and then toggle on the Trashed Teams filter.
Locate the item you want to restore and click the arrow button. Click Restore in the confirmation box. The team will now reappear on the Team overview page.
|
OPCFW_CODE
|
I have a GitHub App with a few thousand users (https://github.com/apps/code-inspector). I currently face one major problem: when we get the access token, we sometimes cannot check out the repository. Sometimes we get the error "the repository XXX does not exist". But the repository does exist, since it works on later attempts.
The token also seems invalid when I try to get the list of repositories; I get an authentication error.
If that helps, I am using PyGitHub to get the token and interact with the API.
Any idea where it could come from?
Note that it does not seem related to the library I am using.
From time to time, when I try to clone the repository, I get the error Invalid username or password.\nfatal: Authentication failed for ....
I am using a command like this to clone the repository git clone https://x-access-token:<token-generated-from-github-app>@github.com/<full_name>.git
If I clone from another machine, I have no problem. And the clone will succeed on this machine if I retry a few minutes later.
Is there a mechanism to avoid checking out too often?
I'm having the exact same issue in my GitHub App https://github.com/apps/skeema-io , which is written in Golang. For the past few weeks, a portion of my git clone calls (using x-access-token exactly like you) are randomly failing with `remote: Invalid username or password.\nAuthentication failed for ...`
According to my logs, the problem started on the night of April 27 and has become more frequent over time, especially this past week.
This is a guess, but so far I believe the root cause is an internal technical issue on GitHub's side, specifically either database replication lag or cache inconsistency. My suspicion is that if you create a new access token and then immediately use it to clone a repo, the token is sometimes being checked against a db/cache that is lagging -- i.e. the INSERT corresponding to the access token's row has not yet replicated to the db/cache that is being queried to perform the auth check.
Today I added the following work-arounds to my application, and this seems to have solved the problem so far:
The idea is to just give the new access token time to replicate.
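Since the actual work-around code was not included in the thread, here is a hedged sketch of the retry-with-backoff idea (the delays, attempt count, and the subprocess call are my assumptions, not skeema-io's implementation):

```python
import subprocess
import time

def clone_with_retry(clone_url, dest, attempts=4, initial_delay=2.0,
                     runner=None):
    """Retry `git clone`, backing off between attempts to give a
    freshly minted installation token time to replicate."""
    if runner is None:  # default: actually shell out to git
        runner = lambda cmd: subprocess.run(cmd).returncode == 0
    delay = initial_delay
    for attempt in range(1, attempts + 1):
        if runner(["git", "clone", clone_url, dest]):
            return True
        if attempt < attempts:
            time.sleep(delay)  # wait before retrying
            delay *= 2         # exponential backoff
    return False
```

The injectable `runner` is just there so the retry logic can be exercised without a real network call.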
One additional mitigation measure that I'd suggest:
The idea with this one is to reduce the number of new access tokens you need, reducing the frequency of the entire situation.
Hope this helps! I'm surprised more people aren't hitting this!
Thanks @evanelias for the details! I implemented a similar strategy (with caches) and it still runs into problems. Sometimes the same token is used and then does not work for a few minutes, then works again. I also implemented a throttling mechanism where I do not check out the same repository more than once per minute.
However, this is becoming problematic. Can someone from GitHub staff provide some insight here?
> Sometimes, the same token is used and then, does not work for few minutes and works again
Still sounds like DB replication lag to me :) If there are multiple replicas in a region, and one is lagging more than the others, it may just be random luck which replica is queried for any given auth check.
For context, the GitHub eng blog had a recent postmortem post indicating that they've been actively moving queries off of an overloaded master db onto replicas. And subsequently there was another outage in late April just a couple days before this error started coming up. Maybe unrelated, I'm just speculating here. But I could certainly understand if GitHub staff can't comment on it yet, if this is something they're still actively working on, e.g. the ongoing sharding efforts mentioned in the post-mortem post.
|
OPCFW_CODE
|
IgnoreReferenceMembersAttribute
Is your feature request related to a problem? Please describe.
I have complex models bound to a view (WPF, MAUI, or other) and I don't want to destroy the bindings every time the user saves, but I do want to update the models with database- or API-generated values.
I came up with this solution:
namespace MapperCore
{
[Mapper]
public partial class UpdateMapper
{
public partial void Update(EmployeeUpdateDto dto, EmployeeModel model);
[MapperIgnoreTarget(nameof(OfficeModel.Boss))]
[MapperIgnoreTarget(nameof(OfficeModel.Employees))]
[MapperIgnoreSource(nameof(OfficeUpdateDto.Boss))]
[MapperIgnoreSource(nameof(OfficeUpdateDto.Employees))]
public partial void ShallowUpdate(OfficeUpdateDto dto, OfficeModel model);
public void DeepUpdate(OfficeUpdateDto dto, OfficeModel model)
{
ShallowUpdate(dto, model);
Update(dto.Boss, model.Boss);
UpdateList(dto.Employees, model.Employees, Update);
}
private void UpdateList<Ts, Td>(IList<Ts> dtos, IList<Td> models, Action<Ts, Td> updateMethod) where Ts : UpdateDto where Td : ModelBase
{
// Very basic example
if (dtos.Count == models.Count)
{
for (int i = 0; i < dtos.Count; i++)
{
updateMethod(dtos[i], models[i]);
}
}
else
{
throw new ArgumentException("Cannot update the model, the list count has changed");
}
}
}
}
Where:
namespace Models
{
public partial class ModelBase : ObservableObject
{
[ObservableProperty]
private DateTime created;
[ObservableProperty]
private int id;
[ObservableProperty]
private DateTime modified;
}
public partial class EmployeeModel : ModelBase
{
[ObservableProperty]
private string name;
[ObservableProperty]
private int scoredOrders;
}
public partial class OfficeModel : ModelBase
{
[ObservableProperty]
private string officeCode;
[ObservableProperty]
private EmployeeModel boss;
[ObservableProperty]
private List<EmployeeModel> employees;
}
public class UpdateDto
{
// Db created values
public int Id { get; set; }
public DateTime Created { get; set; }
public DateTime Modified { get; set; }
}
public class EmployeeUpdateDto : UpdateDto
{
// From very expensive query
public int ScoredOrders { get; set; }
}
public class OfficeUpdateDto : UpdateDto
{
public EmployeeUpdateDto Boss { get; set; }
public List<EmployeeUpdateDto> Employees { get; set; }
}
}
Describe the solution you'd like
I'd like to have an attribute similar to [IgnoreObsoleteMembers] that works for reference types and collections without the need to touch the assembly containing models.
With the proposed change I could write something like that:
//......
// If you have more properties to skip it gets annoying very quickly
//[MapperIgnoreTarget(nameof(OfficeModel.Boss))]
//[MapperIgnoreTarget(nameof(OfficeModel.Employees))]
//[MapperIgnoreSource(nameof(OfficeUpdateDto.Boss))]
//[MapperIgnoreSource(nameof(OfficeUpdateDto.Employees))]
[IgnoreReferenceMembers()] // New Attribute skips for me members that require "special" care
public partial void ShallowUpdate(OfficeUpdateDto dto, OfficeModel model);
// Reference Types and Collections are handled by custom code and I am fine with that
public void DeepUpdate(OfficeUpdateDto dto, OfficeModel model)
{
ShallowUpdate(dto, model);
Update(dto.Boss, model.Boss);
UpdateList(dto.Employees, model.Employees, Update);
}
//.......
Where:
[AttributeUsage(AttributeTargets.Method)]
internal class IgnoreReferenceMembersAttribute : Attribute
{
private readonly IgnoreReferenceTypesStrategy ignoreReferenceTypesStrategy;
private readonly IgnoreReferenceTypesOption ignoreReferenceTypesOption;
public IgnoreReferenceTypesStrategy IgnoreReferenceTypesStrategy => ignoreReferenceTypesStrategy;
public IgnoreReferenceTypesOption IgnoreReferenceTypesOption => ignoreReferenceTypesOption;
public IgnoreReferenceMembersAttribute(IgnoreReferenceTypesStrategy ignoreReferenceTypesStrategy = IgnoreReferenceTypesStrategy.Both, IgnoreReferenceTypesOption ignoreReferenceTypesOption = IgnoreReferenceTypesOption.AllCollectionsAndMutables)
{
this.ignoreReferenceTypesStrategy = ignoreReferenceTypesStrategy;
this.ignoreReferenceTypesOption = ignoreReferenceTypesOption;
}
}
public enum IgnoreReferenceTypesStrategy
{
None,
Both,
Source,
Target,
}
[Flags]
public enum IgnoreReferenceTypesOption
{
None = 0,
/// <summary>
/// <see cref="IEnumerable{T}"/> where T is <see cref="MutableReferenceTypes"/>
/// </summary>
CollectionsOfMutableReferenceTypes = 1,
/// <summary>
/// All classes except string and <see cref="Nullable{T}"/> where T : struct
/// </summary>
MutableReferenceTypes = 2,
AllMutables = CollectionsOfMutableReferenceTypes | MutableReferenceTypes,
/// <summary>
/// string and <see cref="Nullable{T}"/> where T : struct
/// </summary>
ImmutableReferenceTypes = 4,
/// <summary>
/// <see cref="IEnumerable{T}"/> where T is <see cref="ImmutableReferenceTypes"/>
/// </summary>
CollectionsOfImmutableReferenceTypes = 8,
CollectionsOfValueTypes = 16,
AllCollections = CollectionsOfMutableReferenceTypes | CollectionsOfImmutableReferenceTypes | CollectionsOfValueTypes,
AllCollectionsAndMutables = AllCollections | MutableReferenceTypes,
All = AllCollections | MutableReferenceTypes | ImmutableReferenceTypes
}
Describe alternatives you've considered
A first implementation could not feature the IgnoreReferenceTypesOption enum and simply skip all Collections and mutable reference types.
Another alternative could be to have two different attributes: IgnoreCollections and IgnoreMutableReferenceTypes.
I think for your problem a collection mapping type of Update or similar would be the solution. See also https://github.com/riok/mapperly/issues/665#issuecomment-2438555406.
The proposed API seems to be a workaround for the use case and adds a lot of complexity to Mapperly and its API interface. If you have a use case for the proposed API that can't be addressed with #665, feel free to comment and I'll consider reopening it.
Feel free to contribute to #665 😉
|
GITHUB_ARCHIVE
|
import fiona
import fiona.crs
from collections import defaultdict
from itertools import count
from shapely.geometry import LineString
import pyproj
wgs84_proj = pyproj.Proj(fiona.crs.from_epsg(4326))
def drop_z(coords):
return list(zip(*list(zip(*coords))[:2]))
def transform_coords(coords, source_proj, target_proj):
x, y = list(zip(*coords))
x, y = pyproj.transform(
source_proj,
target_proj,
x, y)
return list(zip(x, y))
def get_rounded_coords(nd_coords):
x,y = nd_coords
return int(round(x)), int(round(y))
def get_labeler():
c = count()
def next_label(c=c):
return next(c)
return defaultdict(next_label)
def snap_coords(coords, node_labeler):
coords = [p[:] for p in coords]
coords[0] = get_rounded_coords(coords[0])
coords[-1] = get_rounded_coords(coords[-1])
nda = node_labeler[coords[0]]
ndb = node_labeler[coords[-1]]
return nda, ndb, coords
def process_geometry(
rec, source_proj, target_proj,
node_labeler):
'''target_crs is for snapping, distance, etc'''
geom = rec['geometry']
assert geom['type'] == 'LineString'
coords = geom['coordinates']
coords = drop_z(coords)
coords = transform_coords(coords, source_proj, target_proj)
nda, ndb, coords = snap_coords(coords, node_labeler)
distance = LineString(coords).length
coords = transform_coords(coords, target_proj, wgs84_proj)
return nda, ndb, coords, distance
def process_record(
rec, source_proj, target_proj,
node_labeler,
dropper=None,
props_processor=None):
if dropper is None:
dropper = lambda rec: False
if dropper(rec):
return None
if props_processor is None:
props_processor = lambda rec: rec['properties']
props = props_processor(rec)
nda, ndb, coords, distance = process_geometry(
rec, source_proj, target_proj,
node_labeler)
props.update({
'distance': distance,
'nodea': nda,
'nodeb': ndb})
return coords, props
def get_processed_data(data_path, target_crs,
dropper=None, props_processor=None):
with fiona.open(data_path) as c:
source_crs = c.crs
source_proj = pyproj.Proj(source_crs)
target_proj = pyproj.Proj(target_crs)
node_labeler = get_labeler()
with fiona.open(data_path) as c:
for i, rec in enumerate(c):
to_yield = process_record(
rec, source_proj, target_proj,
node_labeler,
dropper, props_processor)
if to_yield is not None:
coords, props = to_yield
yield i, props, coords
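As an aside, the get_labeler helper above relies on a defaultdict whose default factory is a fresh counter, so each distinct snapped coordinate gets a stable integer node label the first time it is looked up. In isolation:

```python
from collections import defaultdict
from itertools import count

def get_labeler():
    """Map each new key to the next integer label, reusing the
    existing label for keys that have been seen before."""
    c = count()
    return defaultdict(lambda: next(c))

labeler = get_labeler()
a = labeler[(100, 200)]   # first key seen
b = labeler[(300, 400)]   # second key
c2 = labeler[(100, 200)]  # repeat key reuses its label
print(a, b, c2)  # → 0 1 0
```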
|
STACK_EDU
|
Congrats on the job @koh.justin! Hoping to eventually follow those footsteps in Q1 of 2020.
@alex @Bruno Can’t wait for the new course as I’m very intrigued by ML topics and am taking time off for the next few months (until about Jan/Feb). Do you think from now until then is substantial time to get a good grasp on a topic/multiple topics? Granted I’ll be studying 20-40 hours a week.
I think Sahil has a better grasp of this than me.
@Sahil Can you help out Patrick?
Since I don’t know much about your educational background, I am assuming that you don’t have any prior data science knowledge and would need to learn from scratch. In that case, our Data Scientist in Python path, which covers fundamental topics in ML, takes around 240 hours (a rough estimate) to complete. So if you spend 20 hours per week, you should be able to complete it comfortably before February 2020. I would then recommend doing as many projects as you can in March 2020, so you can make sure you still remember all the topics you have learned.
At that point, I would consider you to be somewhere between beginner and intermediate level in machine learning. Machine learning is a vast field in itself, so it won't be easy to learn it completely in a few months. However, as of March 2020 you will be able to start doing machine learning projects on your own. Your next step should be to choose some of the most commonly used algorithms and learn more about them, so that you are comfortable choosing the right parameters to tune each algorithm (this can take a year or so). At that stage, I would consider you intermediate level. To get to the advanced level, you want to pick a specific field that you are interested in; the reason is that you can either be an expert in one field or know a little bit of every field in ML. At that point, you will be researching newly released algorithms in that field, or you may even create an algorithm of your own. This takes years and is an ongoing process, as technology keeps evolving rapidly.
However, if you just want to learn ML so that you can get a job in ML, then I would recommend reaching the intermediate level. This will help you land beginner-level ML jobs.
Hope this helps
Thanks, @Sahil I’ll be keeping this in mind. And thanks @Mary for splitting this into a new topic.
Would you say the statistics portion that DQ is enough? Are there any other resources (books, etc) you’d be able to suggest?
|
OPCFW_CODE
|
Error linking: "undefined reference to __exp{2}_finite"
Using the AUR package ungoogled-chromium. If I'm not mistaken this is the repository for that, so I thought i'd better make an issue here instead.
[22797/22797] LINK ./chrome
FAILED: chrome
clang++ -Wl,--version-script=../../build/linux/chrome.map -fPIC -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now -Wl,-z,defs -Wl,--as-needed -fuse-ld=lld -Wl,--icf=all -Wl,--color-diagnostics -m64 -Wl,-O2 -Wl,--gc-sections -rdynamic -pie -Wl,--disable-new-dtags -Wl,-O2,--sort-common,--as-needed,-z,relro,-z,now -o "./chrome" -Wl,--start-group @"./chrome.rsp" -Wl,--end-group -latomic -ldl -lpthread -lrt -lX11 -lX11-xcb -lxcb -lXcomposite -lXcursor -lXdamage -lXext -lXfixes -lXi -lXrender -lXtst -lgmodule-2.0 -lglib-2.0 -lgobject-2.0 -lgthread-2.0 -ljsoncpp -licui18n -licuuc -licudata -lsmime3 -lnss3 -lnssutil3 -lplds4 -lplc4 -lnspr4 -lcups -lxml2 -lfontconfig -ldbus-1 -levent -lresolv -lgio-2.0 -lz -lwebpdemux -lwebpmux -lwebp -lfreetype -ljpeg -lexpat -lharfbuzz-subset -lharfbuzz -ldrm -lre2 -lXrandr -lpci -lXss -lasound -lpulse -lavcodec -lavformat -lavutil -lsnappy -lopus -latk-1.0 -latk-bridge-2.0 -lva -lpangocairo-1.0 -lpango-1.0 -lcairo -latspi -lFLAC -lminizip -lgtk-3 -lgdk-3 -lcairo-gobject -lgdk_pixbuf-2.0 -lxslt -llzma -lm -llcms2 -lopenjp2
ld.lld: error: /sbin/../lib64/gcc/x86_64-pc-linux-gnu/9.2.1/../../../../lib64/libopenjp2.so: undefined reference to __exp_finite
ld.lld: error: /sbin/../lib64/gcc/x86_64-pc-linux-gnu/9.2.1/../../../../lib64/libopenjp2.so: undefined reference to __exp2_finite
clang-9: error: linker command failed with exit code 1 (use -v to see invocation)
ninja: build stopped: subcommand failed.
==> ERROR: A failure occurred in build().
Aborting...
Error making: ungoogled-chromium
Still occurring with cdad0787f5749fed81e2afd4a48b7f0def86960d (80.0.3987.87-2)
@jstkdng Same problem as you mentioned before?
@wchen342
yeah, removing the use_openjpeg2 flag fixes the problem
I recall something that may help finding the cause of this problem. So when #59 was opened Actions ran a build on it and the build succeeded without a problem. However on other machines the build failed. It turns out that there is a bug in the workflow file and Actions is building with an unmerged version of this repo (i.e. without the pull request), which I think means some change between 79.0.3945.130 and 80.0.3987.106 in either this repo or the main ungoogled-chromium repo caused this error.
@braewoods @tangalbert919 Do you have any idea whether some changes during that period can cause this?
BTW this error still occurs with 80.0.3987.122.
Those are symbols that are provided by libm from glibc. It could be as simple as a problem with the ordering of the linked libraries. I would try manually linking it with a changed order and see if it helps. If it does then maybe a patch for the linking order could help.
I'm running into the same issue.
I've found something interesting. I tried manually linking them and discovered that it links successfully if we use GNU ld but produces this error if we use Clang's ld.lld. Thoughts on what we might make from this discovery?
That's an interesting discovery. My knowledge of C++ is very limited, but I think it's either a difference between the implementations of ld and lld or, less likely, a bug.
Another possibility is the Clang version. Maybe try using Clang-9?
Not sure where I would get clang 9 on Arch. It has clang 10 now.
I'll see what disabling lld in the build flags does. I only thought to try this because I used to run into weird issues that seemed to be an issue with the linker itself during my packaging days. I can't find anything to explain this odd behavior but the fact that GNU ld seems to work whereas lld does not makes me think it's not an issue with the system libraries themselves.
I searched on AUR but there are clang-9 and llvm-9 but no lld-9.
There are clang-7, llvm-7 and lld-7 though.
Actually what clang version is Ubuntu/Debian on? The default clang is pulled from source so maybe it is not too new but too old?
I agree this looks more like a problem with the linker itself, but I don't think disabling lld is a good idea in the long run, since it is the only linker supported upstream.
Probably so. I only considered it a short term solution. As for Debian the Sid and Focal branches have been using clang 10 for over a month and have not been having linking errors with lld. They even have use_system_libopenjpeg2=true in their build flags.
Now that sounds like a problem with the patches. I am searching through the debian repository and find some modifications that are not here, like the usage of -no-static-libstdc++. Will that matter?
No idea. Only one way to find out. I also thought concurrent_links=1 might play a part. I have known builds to sometimes have issues with concurrency.
Ok. I did a difference between the build flags and found these are present in Debian but not Arch:
concurrent_links=1 use_goma=false use_allocator="tcmalloc" use_ozone=false use_unofficial_version_number=false enable_vr=false optimize_webui=false v8_enable_backtrace=true host_cpu=x64
@wchen342 I saw you close an issue earlier. Do you have authority to merge PRs here?
Ok. I'll try building with clang9 once I get it built. So far our only solutions are to switch linkers or disable the failing library. Any other ideas?
I can merge PRs here, yes. I am holding #72 for one more day, and if you still can't pinpoint the exact cause by then, I will merge it.
The flags other than concurrent_links=1 v8_enable_backtrace=true use_ozone=false should be unrelated.
Just go ahead. I don't think I'm going to find anything. It could just as easily be a difference in how Arch builds clang and llvm from Debian.
Why don't you guys look at how Arch's chromium handles the problem? They don't use the flag use_system_openjpeg2. Maybe since the beginning or just removed when it caused this problem.
Yet Debian is able to build with it just fine. That's the mystery we have right now.
I actually don't know why some of the use_system flags are there in the first place. They are there since I have first come here. @Eloston probably knows.
And yes if Debian builds fine with the same flag then there is definitely a problem with Arch and it's better to find it out then just cover it up.
@wchen342 Could you merge my PR? I just finished rebasing it onto master. This issue is technically solved now, so should we move what we've learned to a new one so we can revisit this flag later? It might get fixed with the next clang release.
@wchen342
The PKGBUILD was based on some version of inox, but I think nobody thought about refactoring it. Actually, Arch used to do the same, but then started using the flags array in their PKGBUILD.
@braewoods
That may be so, but that doesn't mean it has to be solved. If both ways lead to the same destination, choose the one that takes the least effort.
@jstkdng I see. My guess of including these flags is because 1) to decrease package size 2) prevent upstream from providing malicious libs, which I think is reasonable to have.
@braewoods I have merged the PR. Feel free to open a new issue.
Moving discussion to #73.
|
GITHUB_ARCHIVE
|
Image by Author
In the event you’re accustomed to the unsupervised studying paradigm, you’d have come throughout dimensionality discount and the algorithms used for dimensionality discount such because the principal element evaluation (PCA). Datasets for machine studying sometimes comprise numerous options, however such high-dimensional function areas usually are not at all times useful.
Basically, all of the options are not equally necessary and there are specific options that account for a big share of variance within the dataset. Dimensionality discount algorithms goal to cut back the dimension of the function house to a fraction of the unique variety of dimensions. In doing so, the options with excessive variance are nonetheless retained—however are within the reworked function house. And principal element evaluation (PCA) is without doubt one of the hottest dimensionality discount algorithms.
In this tutorial, we'll learn how principal component analysis (PCA) works and how to implement it using the scikit-learn library.
Before we go ahead and implement principal component analysis (PCA) in scikit-learn, it's helpful to understand how PCA works.
As mentioned, principal component analysis is a dimensionality reduction algorithm. Meaning, it reduces the dimensionality of the feature space. But how does it achieve this reduction?
The motivation behind the algorithm is that there are certain features that capture a large percentage of variance in the original dataset. So it's important to find the directions of maximum variance in the dataset. These directions are called principal components. And PCA is essentially a projection of the dataset onto the principal components.
So how do we find the principal components?
If the features are all zero mean, then the covariance matrix is given by X.T X. Here, X.T is the transpose of the matrix X. If the features are not all zero mean initially, we can subtract the mean of column i from every entry in that column and then compute the covariance matrix. It's easy to see that the covariance matrix is a square matrix of order num_features.
Image by Author
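As a small sketch of the centering-then-multiply computation above (using NumPy on synthetic data, so the array names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 5))  # 100 samples, 5 features

# Center each column (feature) to zero mean
X_centered = X - X.mean(axis=0)

# Covariance matrix: X.T X scaled by the number of samples;
# it is a square matrix of order num_features (5 x 5)
cov = X_centered.T @ X_centered / X.shape[0]

# Sanity check against NumPy's built-in covariance estimator
assert np.allclose(cov, np.cov(X, rowvar=False, bias=True))
print(cov.shape)  # (5, 5)
```

The 1/num_samples scaling only rescales the eigenvalues; it does not change the eigenvectors, so it is often dropped when only the principal directions are needed.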
The first k principal components are the eigenvectors corresponding to the k largest eigenvalues of the covariance matrix.
So the steps in PCA can be summarized as follows:
Image by Author
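The steps can be sketched end to end in NumPy. This is a minimal from-scratch illustration (the helper name and the random data are my own, not scikit-learn's API):

```python
import numpy as np

def pca_from_scratch(X, k):
    """Project X onto its top-k principal components."""
    # 1. Center the data so every feature has zero mean
    X_centered = X - X.mean(axis=0)
    # 2. Compute the covariance matrix
    cov = X_centered.T @ X_centered / X.shape[0]
    # 3. Eigendecomposition of the symmetric covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    # 4. Sort eigenvectors by decreasing eigenvalue and keep the top k
    order = np.argsort(eigvals)[::-1]
    components = eigvecs[:, order[:k]]
    # 5. Project the centered data onto the principal components
    return X_centered @ components

rng = np.random.default_rng(0)
X = rng.normal(size=(178, 13))
X_reduced = pca_from_scratch(X, 3)
print(X_reduced.shape)  # (178, 3)
```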
Because the covariance matrix is symmetric and positive semi-definite, its eigendecomposition takes the following form:
X.T X = D Λ D.T
where D is the matrix of eigenvectors and Λ is a diagonal matrix of eigenvalues.
Another matrix factorization technique that can be used to compute principal components is singular value decomposition, or SVD.
Singular value decomposition (SVD) is defined for all matrices. Given a matrix X, the SVD of X gives: X = U Σ V.T. Here, U, Σ, and V are the matrices of left singular vectors, singular values, and right singular vectors, respectively. V.T is the transpose of V.
So the covariance matrix of X, written in terms of the SVD of X, is given by:
X.T X = (U Σ V.T).T (U Σ V.T) = V Σ.T Σ V.T
Comparing the equivalence of the two matrix decompositions, X.T X = D Λ D.T and X.T X = V Σ.T Σ V.T, we have the following: the matrix of eigenvectors D is the matrix of right singular vectors V, and the diagonal matrix of eigenvalues Λ is Σ.T Σ, the matrix of squared singular values.
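This equivalence can be checked numerically. A quick sketch (NumPy only; the comparison is up to sign, since eigenvectors are only unique up to sign):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
X = X - X.mean(axis=0)  # zero-mean features

# Eigendecomposition of the covariance matrix X.T X
eigvals, eigvecs = np.linalg.eigh(X.T @ X)
eigvals = eigvals[::-1]  # eigh returns ascending order; flip to descending

# SVD of X itself (singular values come back in descending order)
U, S, Vt = np.linalg.svd(X, full_matrices=False)

# The eigenvalues of X.T X are the squared singular values of X
assert np.allclose(eigvals, S**2)
# The eigenvectors of X.T X match the right singular vectors, up to sign
assert np.allclose(np.abs(eigvecs[:, ::-1]), np.abs(Vt.T))
```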
There are computationally efficient algorithms for calculating the SVD of a matrix. The scikit-learn implementation of PCA also uses SVD under the hood to compute the principal components.
Now that we've learned the basics of principal component analysis, let's proceed with its scikit-learn implementation.
Step 1 – Load the Dataset
To understand how to implement principal component analysis, let's use a simple dataset. In this tutorial, we'll use the wine dataset available as part of scikit-learn's datasets module.
Let's start by loading and preprocessing the dataset:
from sklearn import datasets
wine_data = datasets.load_wine(as_frame=True)
df = wine_data.data
It has 13 features and 178 records in all.
print(df.shape)
Output >> (178, 13)
print(df.info())
Output >>

RangeIndex: 178 entries, 0 to 177
Data columns (total 13 columns):
 #   Column                        Non-Null Count  Dtype
---  ------                        --------------  -----
 0   alcohol                       178 non-null    float64
 1   malic_acid                    178 non-null    float64
 2   ash                           178 non-null    float64
 3   alcalinity_of_ash             178 non-null    float64
 4   magnesium                     178 non-null    float64
 5   total_phenols                 178 non-null    float64
 6   flavanoids                    178 non-null    float64
 7   nonflavanoid_phenols          178 non-null    float64
 8   proanthocyanins               178 non-null    float64
 9   color_intensity               178 non-null    float64
 10  hue                           178 non-null    float64
 11  od280/od315_of_diluted_wines  178 non-null    float64
 12  proline                       178 non-null    float64
dtypes: float64(13)
memory usage: 18.2 KB
None
Step 2 – Preprocess the Dataset
As a next step, let's preprocess the dataset. The features are all on different scales. To bring them all to a common scale, we'll use the
StandardScaler, which transforms the features to have zero mean and unit variance:
from sklearn.preprocessing import StandardScaler
std_scaler = StandardScaler()
scaled_df = std_scaler.fit_transform(df)
Step 3 – Perform PCA on the Preprocessed Dataset
To find the principal components, we can use the PCA class from scikit-learn's decomposition module.
Let's instantiate a PCA object by passing in the number of principal components
n_components to the constructor.
The number of principal components is the number of dimensions that you'd like to reduce the feature space to. Here, we set the number of components to 3.
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
pca.fit_transform(scaled_df)
Instead of calling the fit_transform() method, you can also call fit() followed by the transform() method.
Notice how the steps in principal component analysis, such as computing the covariance matrix and performing eigendecomposition or singular value decomposition on it to get the principal components, have all been abstracted away when we use scikit-learn's implementation of PCA.
Step 4 – Inspecting Some Useful Attributes of the PCA Object
The PCA instance
pca that we created has several useful attributes that help us understand what's going on under the hood.
components_ stores the directions of maximum variance (the principal components).
print(pca.components_)
Output >> [[ 0.1443294 -0.24518758 -0.00205106 -0.23932041 0.14199204 0.39466085 0.4229343 -0.2985331 0.31342949 -0.0886167 0.29671456 0.37616741 0.28675223]
 [-0.48365155 -0.22493093 -0.31606881 0.0105905 -0.299634 -0.06503951 0.00335981 -0.02877949 -0.03930172 -0.52999567 0.27923515 0.16449619 -0.36490283]
 [-0.20738262 0.08901289 0.6262239 0.61208035 0.13075693 0.14617896 0.1506819 0.17036816 0.14945431 -0.13730621 0.08522192 0.16600459 -0.12674592]]
We mentioned that the principal components are directions of maximum variance in the dataset. But how do we measure how much of the total variance is captured by the number of principal components we just chose?
The explained_variance_ratio_ attribute captures the ratio of the total variance each principal component captures. So we can sum up the ratios to get the total variance captured by the chosen number of components.
print(sum(pca.explained_variance_ratio_))
Output >> 0.6652996889318527
Here, we see that three principal components capture over 66.5% of the total variance in the dataset.
Step 5 – Analyzing the Change in Explained Variance Ratio
We can try running principal component analysis while varying the number of components
n_components:
import numpy as np
nums = np.arange(14)

var_ratio = []
for num in nums:
    pca = PCA(n_components=num)
    pca.fit(scaled_df)
    var_ratio.append(np.sum(pca.explained_variance_ratio_))
To visualize the
explained_variance_ratio_ against the number of components, let's plot the two quantities as shown:
import matplotlib.pyplot as plt
plt.figure(figsize=(4, 2), dpi=150)
plt.grid()
plt.plot(nums, var_ratio, marker="o")
plt.xlabel('n_components')
plt.ylabel('Explained variance ratio')
plt.title('n_components vs. Explained Variance Ratio')
When we use all 13 components, the
explained_variance_ratio_ is 1.0, indicating that we've captured 100% of the variance in the dataset.
In this example, we see that with 6 principal components, we'll be able to capture more than 80% of the variance in the input dataset.
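Rather than reading the threshold off the plot, scikit-learn can pick the number of components for you: passing a float between 0 and 1 as n_components selects the smallest number of components whose cumulative explained variance reaches that fraction. A sketch on the same scaled wine data (reloaded here so the snippet is self-contained):

```python
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

df = datasets.load_wine(as_frame=True).data
scaled_df = StandardScaler().fit_transform(df)

# Ask PCA for enough components to explain at least 80% of the variance
pca = PCA(n_components=0.8)
pca.fit(scaled_df)
print(pca.n_components_)  # number of components actually selected
```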
I hope you've learned how to perform principal component analysis using built-in functionality in the scikit-learn library. Next, you can try to implement PCA on a dataset of your choice. If you're looking for good datasets to work with, check out this list of websites to find datasets for your data science projects.
Computational Linear Algebra, fast.ai
Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she's working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more.
Minting a large number of documents on Blockchain is achievable with the help of off-chain commitment and proof preparations.
The transaction costs of the protocol are low, especially when using L2 Blockchain solutions.
Can a large volume of documents be minted on-chain in real-time, with the support of off-chain computations?
Aleksandar Veljković, 3327, Minty - Massive off-chain minting with verifiable on-chain commitments
- Merkle tree - Data structure where the values of nodes (except leaf nodes) are determined as hashes of the values in their children nodes.
- Commitment - Value to which all future computations are bound
- Blockchain transaction - Transaction upon which the sent data is written on the blockchain
- Storing large documents on Blockchain is practically impossible.
- Real-time execution of Blockchain transactions is not guaranteed.
- Writing commitments of individual files in individual transactions, or in batches, can make costs skyrocket when dealing with thousands of files.
- Signing thousands of individual commitments can be inefficient when the documents come frequently and in large volumes.
- Merkle tree root hash can represent a commitment of multiple files.
- Proof of knowledge for any leaf of a Merkle tree can be derived from the tree and verified in the smart contract.
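A minimal sketch of this primitive in plain Python with SHA-256. The helper names are illustrative (they are not from the talk), and an on-chain verifier would implement only the fold in verify():

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build a Merkle tree; returns a list of levels, leaves first."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2 == 1:           # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def merkle_proof(levels, index):
    """Sibling hashes from leaf to root for the leaf at `index`."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2 == 1:
            level = level + [level[-1]]
        sibling = index ^ 1               # sibling differs only in the last bit
        proof.append((level[sibling], sibling < index))
        index //= 2
    return proof

def verify(root, leaf, proof):
    """What the smart contract would do: fold the proof back up to the root."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

docs = [b"doc-%d" % i for i in range(5)]
levels = build_tree(docs)
root = levels[-1][0]                       # the single on-chain commitment
proof = merkle_proof(levels, 3)
print(verify(root, docs[3], proof))  # True
```

The proof for any one document is only O(log n) hashes, which is what keeps per-document verification cheap even when the commitment covers thousands of files.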
- The time complexity of generating Merkle trees can also be a limiting factor when dealing with tens of thousands of documents.
- Parallelization of Merkle tree construction can be achieved efficiently on multicore processors.
- Multiple Merkle trees can be joined into one single tree by generating a “cap” tree where the leaves of the new tree are roots of individual subtrees.
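A sketch of the "cap" tree idea (my own illustrative helpers, using threads as stand-ins for the multicore workers the talk describes; the cap tree here hashes the subtree roots as its leaves):

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def subtree_root(leaves):
    """Root hash of a Merkle subtree built from `leaves`."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:           # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Split the documents into chunks; each chunk's subtree can be built by a
# separate worker (threads here; processes or a server cluster in practice).
docs = [b"doc-%d" % i for i in range(1000)]
chunks = [docs[i:i + 250] for i in range(0, len(docs), 250)]

with ThreadPoolExecutor(max_workers=4) as pool:
    subtree_roots = list(pool.map(subtree_root, chunks))

# The "cap" tree joins the subtrees: its leaves are the subtree roots, and
# its root is the single on-chain commitment for all 1000 documents.
commitment = subtree_root(subtree_roots)
print(commitment.hex())
```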
- Commitments of documents can be generated as a root hash of a document Merkle tree constructed in a parallel manner and submitted on Blockchain in semi-real-time.
- Documents can represent any digital document or token (NFTs, for example).
- Document issuers can generate tree leaves as hashes of documents and addresses of the associated owners and construct Merkle tree in parallel.
- For each document, issuers derive Merkle proofs and send them to the owners off-chain, while the Merkle tree roots are written on-chain as commitments.
- A document owner can provide data to the smart contract and Merkle proof to mint the document on-chain.
- The proposed protocol can construct root hash commitments and proofs of more than 1,000,000 individual documents in less than 1 minute on a quad-core CPU.
- Compared to signing individual documents, parallel Merkle tree generation and proof extraction efficiency achieve better results.
- The protocol is named Minty.
- The proposed tree generation method could be further improved to run parallel tree generation using multiple processors or a cluster of computation servers.
- Further work will be oriented toward implementing practical solutions that utilize the Minty protocol.
- Minting a large number of documents on- and off-chain represents a primitive that could be applied in many new use cases.
- The presented protocol can be utilized for creating NFT markets with massive off-chain minting.
- The protocol can also be applied for minting large volumes of credentials, like ID cards, passports, licenses, etc.