Volta: Architecture Factoring and Refactoring
I attended the Strategic Architecture Forum (SAF) that was held in Redmond earlier this week. The event is a series of presentations and roundtables. Bill Gates held a great Q&A session where he revealed he was in the REST + WS-* camp amongst other things.
I attended a presentation on Architecture Refactoring from Dragos Manolescu. Erik Meijer recently published an article which sets the context for Architecture Factoring and Refactoring:
As the world is moving more and more towards the software as services model, we have to come up with practical solutions to build distributed systems that are approachable for normal programmers.
Dragos works at Live Labs, which is in charge of exploring disruptive technologies (Listas, PhotoSynth, Seadragon, Deepfish). He graduated from the University of Illinois at Urbana-Champaign shortly after Bill Opdyke established the foundations of code refactoring. He also worked at ThoughtWorks with Martin Fowler. Code refactoring has become a great success story, and pretty much every IDE supports it to some degree.
Dragos explained some of the challenges inherent to Architecture Refactoring as compared to Code Refactoring which relies on a series of assumptions:
- Same application boundaries
- Same development platform
- Same Constraints
that are simply impossible to make in the realm of Architecture Refactoring. He is looking at getting around them. His starting point is MSIL: creating MSIL-to-MSIL transformations that make it possible to cross process and development-platform boundaries while offering more choices in terms of scalability, availability, or security, even well after the code was written within a monolithic architecture.
His first goals are to enable multi-tier architecture refactoring and the injection of boilerplate code, while removing accidental complexity due to a particular choice of architecture and extending the reach of the platform. He is reusing as much as possible:
- the .NET programming languages
- the .NET libraries
- the development tools, such as Visual Studio 2008
- patterns and idioms
Dragos gave us three demos involving tier-splitting refactoring (more information can be found here). He started with a monolithic application, and simply by adding a [RunAt(server)] annotation to an operation, the compiler generated the corresponding service and the application invoked that service automatically without any further coding.
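As a rough illustration of what that demo implies — note the RunAt attribute below is defined locally just for this sketch; Volta's real attribute and its arguments may differ:

```csharp
using System;

// Hypothetical stand-in for the annotation shown in the demo; Volta's actual
// attribute name and arguments may differ.
[AttributeUsage(AttributeTargets.Method)]
class RunAtAttribute : Attribute
{
    public string Tier { get; }
    public RunAtAttribute(string tier) { Tier = tier; }
}

class Calculator
{
    // After tier splitting, the compiler would generate a service for this
    // method and rewrite local call sites into remote invocations, with no
    // further coding required.
    [RunAt("server")]
    public int Add(int a, int b) => a + b;
}
```

The point of the approach is that the call site stays a plain method call; the distribution boundary is a compiler concern, not a coding one.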
He also showed us the cross-tier debugging capabilities of Visual Studio.
He also showed how asynchronous method calls could be implemented just as elegantly, without having to explicitly use .NET delegates.
He also showed that Architecture Refactoring was possible for web applications even though we would be using:
- Different languages
- Different libraries
- Different tools
- Different programming paradigm
Dragos recommended watching Erik Meijer's presentation on Volta: Wrapping the Cloud with .NET.
Volta is an evolving research project focused on exploring ways to innovate data-intensive programming models. Volta is currently exploring a lean-programming inspired toolkit for building web-based and mobile applications by stretching the .NET programming model to cover the Cloud.
I'm running out of time to test InterConnect, and I may advise my customer to go for another integration tool if I'm not able to get a data transfer working.
First I tried with 10gAS, but with no success. The midtier database was a 9.0, and InterConnect asked for a 9.2. So then we installed a 9.2 database, plus InterConnect and iStudio, without an AS.
The trouble is that we don't really get any information on either Metalink or any forum about this error message from the adapter log.
Any suggestion would be appreciated!
I will try to help you. Some things you may wish to check. (Note: this is for my 9iAS build on WinXP, so if you are on Unix the filenames may be different :-) )
1. Status of the infrastructure. When starting the infrastructure, do it in this order. Check the logs files at each point (where applicable).
(a) Oracle Hub Database
(b) Oracle Listener
-- Check a connection using SQL*Plus
(c) Oracle Internet Directory
-- Check the entries in oidldapd01.log. It should say "OiD LDAP server started"
-- Check the entries in oidmon.log. It should say "Updating Process Table...exit run"
-- These logs are found in $ORACLE_HOME/ldap/log
(d) Integration Repository Service
-- Check resposlog.txt file. It should say "*** Initialization is complete and repository is ready ***"
-- This log is found in $ORACLE_HOME/oai/9.0.2/repository
(e) Oracle Adapters
-- Check the oailog.txt files (you know where they are)
When stopping the integration infrastructure, you should stop it in the reverse order - cleanly. Avoid using "kill -9"! You can end up with problems - especially with OiD not starting correctly.
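The log checks in step 1 can be scripted. Here is a minimal sketch; the log paths and "ready" strings come from the steps above, while the ORACLE_HOME default is just a placeholder to adjust for your install:

```shell
# Check that the infrastructure logs contain the expected "ready" strings.
check_log() {
  # $1 = log file, $2 = expected string
  if grep -q "$2" "$1" 2>/dev/null; then
    echo "OK: $1"
  else
    echo "MISSING: '$2' not found in $1"
  fi
}

# Placeholder default -- point this at your actual Oracle home.
ORACLE_HOME="${ORACLE_HOME:-/opt/oracle}"

check_log "$ORACLE_HOME/ldap/log/oidldapd01.log" "OiD LDAP server started"
check_log "$ORACLE_HOME/ldap/log/oidmon.log" "Updating Process Table"
check_log "$ORACLE_HOME/oai/9.0.2/repository/resposlog.txt" "Initialization is complete and repository is ready"
```

Run it after each startup step; any MISSING line tells you which component to investigate before starting the next one.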
Another thing to try is to change the "Hub Queue Name" from lowercase to uppercase. Much to my amusement, I've known the case of the Hub Queue Name to cause me similar headaches.
(Before you make any changes, you may want to back-up your repository first using oaiexport.)
Stop your adapters, and clear out any persistence files from the $ORACLE_HOME/oai/9.0.2/adapters/[your_adapter]/persistence. Although it is not recommended, you can delete everything in this directory if you are sure that your in-process messages can be erased forever.
In iStudio, click on the Deploy tab. Find your Subscribing adapter. Under Routing > Message Capability Matrix, right-click and Edit. Change the "Hub Queue Name" to Uppercase.
e.g. oai_hub_queue to OAI_HUB_QUEUE
Re-start your adapters.
Try resending the message again.
I hope this helps,
THANK YOU!!!!!!! I really appreciate your taking the time to help me out here. I'm working on your suggestion (I'm on Windows 2000 Server), and I'll post my results afterwards. Again: THANKS!!!!
I tried again, using upper case on the hub queue name, but no luck. This time I got no error messages in the adapter log (or any other log), but we believe that the package (PL/SQL from iStudio) was never executed.
Thanks for your time!! Any other suggestions?
OK. When you say that you didn't get any errors this time, did you see the actual transaction going through the adapters?
What is your logging level on the adapters set to? In the adapter.ini file, set the "agent_log_level=2".
When you say that "the package (PL/SQL from iStudio) was never executed", is that the source (publish) application or your target (subscribe) application?
Some other checks to make are these:-
Log on via SQL*Plus to your publishing database, connecting to the OAI user (usually OAI/OAI) and do:-
select count(*) from oai.aotable;
select count(*) from oai.messageobjecttable;
If there is the same number of records in both, and your publishing adapter hasn't picked them up, then run this:-
select APPLICATIONTYPE from oai.messageobjecttable;
This should be the same as your publishing adapter name.
Hint: in adapter.ini check the entry for "application= "
// Application (as created in iStudio) that this Adapter corresponds to.
If they are different, or if APPLICATIONTYPE is null, then this is because you need to specify the "srcAppName" when you call your publish procedure.
If you send me a mail to firstname.lastname@example.org I'll send you a complete worked, yet simple, DB Adapter to DB Adapter example and code which may help you.
ORA-06550: Error in Oracle InterConnect DB Adapter
On May 4, 1733, French mathematician, physicist, political scientist, and sailor Jean-Charles de Borda was born. De Borda is noted for his studies of fluid mechanics and his development of instruments for navigation and geodesy, the study of the size and shape of the Earth. He is one of 72 scientists commemorated by plaques on the Eiffel Tower.
Jean-Charles de Borda grew up in Dax, France, as part of a noble family. With their military connections, several of his brothers pursued military careers. It is believed that Jean-Charles' cousin Jacques-François had a great influence on him. The cousin, an enthusiast for mathematics and science himself, taught Jean-Charles de Borda from an early age. Starting at the age of seven, Jean-Charles de Borda began studying Greek and Latin at the Collège des Barnabites at Dax, and at the age of eleven, with the help of Jacques-François, he enrolled at the Jesuit college at La Flèche, where he was able to focus more on mathematics and science, as well as military engineering and civil service. This paved the way for de Borda's later career: at 15 years old, he was appointed mathematician in the army.
In 1758, Jean-Charles de Borda enrolled at the École du Génie at Mézière and, after completing his studies, continued his career as a military engineer. During the 1760s and 1770s, Borda crossed the Atlantic several times while working as a scientist and for the military. For instance, he was active in cartography and drew charts of the Azores and Canary Islands. In 1778, he took part in the military conflict against Britain during the American War of Independence. During 1781, Jean-Charles de Borda was put in charge of several vessels, and in 1784, Borda was appointed France's Inspector of Naval Shipbuilding.
During his career, Borda developed trigonometric tables and studied fluid flow in ships and pumps, as well as scientific instruments. For instance, Borda improved the reflecting circle initially invented by Tobias Mayer. The reflecting circle preceded the sextant and was motivated by the need to create superior surveying instruments. Jean-Charles de Borda further improved the repeating circle, which had been invented by his assistant Etienne Lenoir; it was used to measure the meridian arc from Dunkirk to Barcelona by Delambre and Méchain.
References and Further Reading:
- O’Connor, John J.; Robertson, Edmund F., “Jean Charles de Borda“, MacTutor History of Mathematics archive, University of St Andrews.
- Jean-Charles de Borda at Britannica Online
Should symbols like ⌘⌃⇧ be used to describe macOS keyboard shortcuts?
This answer made me think: https://stackoverflow.com/a/42078914/3939277
It uses Cmd, Shift, and Option to describe the modifier keys to hold in a keyboard shortcut.
For questions meant to be read on macOS and executed on macOS:
Is it OK to use ⌘, ⇧, ⌥, ⌃, etc. to describe macOS modifier keys in questions and answers? These correspond to the symbols in menus.
If so, is the use of these preferred? Are there any that should be avoided?
If not, why not? macOS users should be used to these after only a month or two of use.
What is on the physical keyboard? You should try and match that.
i mean, i guess? why not? are they more descriptive? what's the purpose of this?
@NathanOliver: Both are used, and dependent on the exact Mac keyboard. My laptop has ⌘/command and alt/option, but the latter is commonly referred to as ⌥ as well.
@MartijnPieters What is more common though? alt/option or ⌥? I don't really use macs but I've at least seen alt/option and not ⌥.
@NathanOliver: Mac documentation and menu indicators use the symbols, so personally I try to use those. I don't know what is 'common' on the keyboards, I haven't done a survey.
@MartijnPieters If that's what the menus and docs use then that sounds like a good idea.
This question reads as if it's asking permission to use mac specific keyboard keys in general, rather than asking which version of a specific key (that doesn't have a consistent name/symbol) should be used. If it really is asking about the latter, that could be better clarified in the question. If the former...that seems like it doesn't require an answer.
@Servy I don't know what you mean by either of those. If you think I have excluded some information, that's not really my style. I really just want to know the answers to those three questions I enumerated.
No love for Open Apple and Closed Apple?
@RobertColumbia Sure, if you're asking questions about keyboard shortcuts on an Apple ][
If you turn on the macOS 'keyboard viewer', then the graphic symbols are used on the keyboard layout that it shows. For clarity, use both symbols and text if you are energetic enough to find the symbols in the 'Emoji and Symbols' tool, but using just the text is OK. (It may be useful to note that: ⌘ is U+2318 PLACE OF INTEREST SIGN; ⇧ is U+21E7 UPWARDS WHITE ARROW; ⌥ is U+2325 OPTION KEY; ⌃ is U+2303 UP ARROWHEAD.)
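A quick way to check those code points for yourself — this small Python snippet just maps the names used above to the symbols and their Unicode values:

```python
# macOS modifier symbols and their Unicode code points (as listed in the
# answer above).
symbols = {
    "command": "\u2318",  # ⌘ PLACE OF INTEREST SIGN
    "shift":   "\u21E7",  # ⇧ UPWARDS WHITE ARROW
    "option":  "\u2325",  # ⌥ OPTION KEY
    "control": "\u2303",  # ⌃ UP ARROWHEAD
}

for name, ch in symbols.items():
    print(f"{name}: {ch} (U+{ord(ch):04X})")
```

Handy when writing posts: copy the symbol straight from the output rather than hunting through the 'Emoji and Symbols' tool.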
One downside to using symbols instead of spelling out the keys is that such a question or answer would be less likely picked up by search engines. Since a lot of the website's traffic comes from them it may be worth keeping in mind.
Are you asking if it is just OK to use them in some answer/post, or are you asking if there should be an effort for you, or others, to go through posts changing them to one way, or another?
This is attracting some opinion-based answers.
They're more awkward to type out for the small subset of PC users who also know the Mac shortcuts, but I personally have no qualms about those symbols being used. They're already fairly ubiquitous.
Ironically, there's no way to type them directly from a Mac keyboard. They're copy-paste material only. But I do like them better.
Yeah, that strange irony always made me smirk. I just put them into my favorites in the quick-character thingie that comes up with ⌃ ⌘ space
The symbols ⌥ and ⌃ are not shown on my (modern) MacBook Pro keyboard, while "control", "option" and "command" are, so I would prefer the text versions.
I've been using a Mac for years. Those symbols do not appear on the keyboard, and I still find I have to look them up when I find a tutorial that uses them. Would be much more meaningful to use Command, etc.
They're still used but only in certain regions.
Apple documentation makes systematic use of these symbols. It rarely spells them out as command, option, control or shift (and when it does, it's always lowercase).
My opinion is that we should follow Apple usage.
(And in any case, "Cmd" is way more obscure to a Mac user than ⌘. If you want to spell it out, use "command", not "Cmd".)
The first two symbols, the "Command" and "Shift" symbols, are commonly known (even my Windows laptop has the shift icon on it). I wouldn't use the option/alt symbol or the ctrl symbol because they are less iconic and less well known.
This is the second follow up to the webinar: How to Automate HTML5 Testing. In the first blog: Test Automation for Beginners, I covered some very basic topics surrounding the test automation concepts introduced at the event. Webinar timing limitations required us to shorten the presentation and the Q/A session. Here are some additional questions and their answers:
1. How can I test offline browsing features supported with HTML5?
In the above video Nick explains how to disconnect the machine from the internet programmatically and how to restore the connection after the offline test. You can download the routine used in the demo here.
2. How to design test cases to verify content even before the page has loaded completely? Do you have any suggestions?
Nick: You may want to take a look at the working with scripts from the hyperlink that I just pointed out because that's going to give you some insight into working with the client side or scripting events. So if you've got AJAX calls or whatever that are taking place, you can actually call into those AJAX methods and make sure that the page is ready to go. Or if you want to wait until a particular event has fired, you can do that as well.
3. How does HTML5 video compare with dedicated players like Flash, Flex, and Silverlight for testing purposes?
Nick: Flash, Flex and Silverlight are all dedicated video players. There's a lot of history behind those, and a lot of functionality that HTML5 is implementing but doesn't have completely implemented yet. The <video> tag used in the webinar demo is a good example. The browsers haven't even finalized which video format they're going to work with. So there are some discrepancies there that you're going to need to work with.
4. How can we test content under the <canvas> object?
Nick: The canvas object is just a picture, and TestComplete has the capability to validate images inside an application. There's a region checkpoint which allows you to verify that a picture is displaying properly.
5. Does TestComplete support all the HTML5 tags, and how does that relate to browsers?
Nick: TestComplete sees the HTML5 tags and attributes just as they display inside the browsers themselves. So for those browsers like Chrome or Firefox that support the new stuff completely or to an extent, all the tags that those browsers support, TestComplete will be able to see within those browsers.
So think about the example we looked at earlier with the number input field. Firefox couldn't see it as number input. Firefox saw it as a text box. That's also how TestComplete registered it when Firefox is used for test recording. But to TestComplete in Chrome, it looked like a number input field. So we're going to see the objects exactly as they render in the browsers and identify them accordingly.
6. Can we run our tests in a virtual environment?
Nick: Sure. So TestComplete and TestExecute can both run in virtual environments. I was using VMware for all the demos in the webinar. In the past I've also successfully used Virtual PC and Virtual and we've got other customers who use Hyper-V for their virtualization as well.
Thank you again for joining us!
Android: Launch mode 'single instance'
I was going through the documentation for single instance and was trying out few samples.
In one of my samples I have three activities, A->B->C,
where B has android:launchMode="singleInstance" in the manifest. Activities A and C have the default launch mode.
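For reference, the manifest setup looks roughly like this (package and activity names are placeholders):

```xml
<!-- Hypothetical manifest fragment: only B declares singleInstance;
     A and C keep the default ("standard") launch mode. -->
<application>
    <activity android:name=".ActivityA" />
    <activity android:name=".ActivityB"
        android:launchMode="singleInstance" />
    <activity android:name=".ActivityC" />
</application>
```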
Scenario:
After navigating to C via A and B (i.e. A->B->C), a back button press from Activity C takes me to Activity A (C->A), but a back button press from Activity A does not quit the app; rather, it brings Activity B to the front. A back button press from Activity B then quits the app.
Question
Why does Activity B come to the foreground when the back button is pressed from Activity A?
Other scenario:
Similarly, if the user presses the device Home button from Activity C and comes back to the app via a long Home press, C stays in the foreground. But the back button flow is then C -> A -> quit. This time Activity B does not come to the foreground.
After navigating from A->B you have 2 tasks: The first one contains A, the second one contains B. B is on top and A is below that.
Now when navigating from B->C Android launches activity C into the task containing A (it cannot launch it into the task containing B because B is defined as "singleInstance", so it launches it into a task that has the same "taskAffinity", in this case the task containing A). To do that, Android brings the task containing A to the front. Now you have 2 tasks: The task containing A and C in the front, and the second one containing B below that.
Now you press the BACK key, which finishes activity C and returns to the activity below that in the task, namely A. You still have 2 tasks: The one containing A in the front, and the one containing B below that.
Now you press the BACK key again. This finishes activity A (and thereby finishes the task that held A) and brings the previous task in the task stack to the front, namely the task containing B. You now have 1 task: the task containing B.
In your other scenario, after navigating from A->B->C, you start with 2 tasks: The task containing A and C in the front, and the second one containing B below that.
Now you press the HOME button. You now say that you "come back to the app by long press". This isn't exactly correct. You can't "come back to the app". You can only "return to a task". But you've got 2 different tasks: If you do a long press you should see the 2 tasks. They probably have the same icon (unless you've provided a different icon for activity B) so you may not be able to tell them apart. If you select the task that contains A and C, then that task will be brought to the front with activity C on top. If you now press the BACK key, activity C will finish and the activity under it, activity A will be shown. If you now press the BACK key again, activity A will be finished and you will be returned to the HOME screen. The task containing B is still in the list of recent tasks, but it is no longer in the task stack under your other task because when you press the HOME button it is like going back to zero. You start all over again. You have no active tasks, so all tasks are in a row, they aren't in a stack and there is no hierarchy.
Also, in your question you use the phrase "quits the app". This also isn't quite correct. The only thing that a BACK button press does is to finish the current activity. If the current activity is the only activity in a task, it also finishes that task. However, it doesn't "quit the app". Especially in your case, since your "app" actually exists in 2 separate tasks.
Hopefully this is clear.
Thank you so much David. Your answer explains the concept very clearly, really helpful :) Just to add more detail, I had all three activities A, B and C within the same application. On long Home press, I could not see two separate tasks, but only one.
Also with respect to the OTHER SCENARIO, " If you now press the BACK key again, activity A will be finished and you will be returned to the HOME screen. "
Q: Why does it not take us to the task containing B? The task containing B should have been below the task containing A and C, right? Please correct me if I am wrong.
Please post your manifest, then I can help you more. Just add it to your question as an edit.
Hey Pravy, did your confusion get resolved in the end? Why does it not take us to the task containing B? I have the same confusion. If you know, please tell me. Thank you.
@CodeAlien I thought my explanation was pretty clear. If you are still confused, you should open another question.
@DavidWasser sorry for disturbing you in the comments. I encountered a problem like this: after installing my app via the Android system installer, if I open the app directly from the installer, it works well. But if I then tap the app's launcher icon to re-enter it, the problem happens: it starts from the first activity every time. This is different from installing the app from Android Studio or Eclipse. If I kill the app and just tap the icon to enter (not from the installer), it works well too. It really confused me. Thank you for your help.
@CodeAlien Ah. That's a different problem. You are seeing this: http://stackoverflow.com/a/16447508/769265 It's an Android bug. So it looks like the problem is fixed if you launch from IDE, but still broken if you launch from installer.
@DavidWasser Thank you so much. I solved the problem by using the method you gave. I hope Google can fix this bug one day. :)
From the doc
"singleInstance" activities can only begin a task. They are always at the root of the activity stack. Moreover, the device can hold only one instance of the activity at a time — only one such task.
A "singleInstance" activity, on the other hand, permits no other activities to be part of its task. It's the only activity in the task. If it starts another activity, that activity is assigned to a different task
Thanks for the reply; your first point makes it clear why it comes to the foreground again.
I need one more clarification regarding how the activity stack, the various tasks, and individual activities are related.
As per my understanding from your reply, the OS maintains a stack called the 'activity stack', which contains a stack of tasks, which in turn (i.e. individual tasks) contain activities. If any activity is created as singleInstance, then it will be created as a separate task, and that task will be placed at the bottom of the stack.
Kindly let me know if my understanding is correct.
@Pravy .. your understanding matches mine :) .. if an activity starts normally (not singleInstance or singleTask), then it is just pushed onto the stack
Well, thanks.. but it behaves differently when the user has pressed the device Home button from Activity C. I have updated this in the other scenario. May I know what the reason could be?
Proposal for a Desktop Neutral Crypto API
nielsen-list at memberwebs.com
Sat Apr 2 11:21:17 EEST 2005
Brad Hards wrote:
> Apart from the "remember what we did last time", I'm not sure what
> meant to provide in terms of additional functionality over what could be done
> with a shared library. Can you explain what you are trying to achieve by a
> crypto API? If I understood that, I might be able to make a more informed
Sure. I've outlined the main benefits over a shared library in a
previous email to the xdg list.
To sum it up:
- Desktop, license, coding style and implementation independence.
- A high-level and simple API.
- Continuity in the user experience.
And for the details of the above:
> First look over:
> * why the choice of key types (openpgp and smime)?
This is a system for public-key encryption, which is what users use when
communicating with others. OpenPGP (with its keys and web of trust) and
S/MIME (with its underlying certificates and authorities) are the two main
methods of encrypting person-to-person communications.
> * are you trying to replace existing key agenst (eg for ssh or GPG)?
That's up to the implementation. Seahorse for example has
'seahorse-agent' which is a GNOME integrated GPG agent. But that has
nothing to do with this API per se.
> * what is the format for org.freedesktop.Crypto.Keys.ImportKeys and
That depends on the key type. PGP has formats (ASCII armor, and raw key
file) for distributing public keys, as does S/MIME with its PEM (and
otherwise) encoded certificates. This API would work in either case.
Although perhaps the arguments shouldn't be STRING; they could be
changed to a byte array for maximum flexibility.
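To make the call shape concrete, here is a small local stub of the proposed interface. The method and interface names come from the thread; the stub itself (its behavior and return value) is purely hypothetical, and a real client would of course go through D-Bus rather than a local object:

```python
class CryptoKeysStub:
    """Local stand-in for org.freedesktop.Crypto.Keys, for illustration only."""

    def __init__(self):
        self._keys = []

    def ImportKeys(self, key_type: str, data: bytes) -> int:
        # key_type would be e.g. "openpgp" or "smime"; data is the raw key
        # material (ASCII-armored PGP key or PEM-encoded certificate), passed
        # as bytes per the suggestion above.
        self._keys.append((key_type, data))
        return len(self._keys)  # hypothetical: number of keys now held

stub = CryptoKeysStub()
count = stub.ImportKeys("openpgp", b"-----BEGIN PGP PUBLIC KEY BLOCK-----...")
print(count)  # 1
```

The point of the byte-array signature is that the same method serves both key types without the caller worrying about text encodings.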
> * how do you handle usage specific trust (eg I trust a certificate or key for
> a game server, but I wouldn't trust that certificate for my online banking)?
Good point, one that probably needs more thought involved. Any suggestions?
> * org.freedesktop.Crypto.TextOperations.EncryptText() and .DecryptText()
> appear to be pretty GPG centric. What if I want to encrypt with Blowfish, CBC
> mode, with a specific IV, PKCS7 padding?
Those are symmetric encryption algorithms. This API centers itself around
public-key encryption. Public-key encryption (and certificates,
etc.) is currently very confusing for users. This proposed API hopes
to bring simplicity and continuity to this area.
> * same for TextOperations.signText and VerifyText. What if I just want to do
> HMAC using SHA256?
Again, think public-key (asymmetric, whatever) encryption. This API is
about users communicating with each other, and encrypting that communication.
I guess if there's a demand for arbitrary symmetric encryption, then
this could be added. But this part seems to fit more into a low-level
library (i.e. OpenSSL) rather than a high-level API.
> * are you confident that DBUS is secure enough for this?
The whole point of this API is that nothing sensitive crosses the DBUS
API. It's all contained within the one implementing process and its
underlying crypto engines.
Perhaps I'm missing something on the security front, though. Was there a
specific security flaw or shortcoming you've thought of?
We also give you the chance to pick the payment, shipping, and delivery options that are most convenient for you.
their lecturers and parents.
- Kathy Schrock's Guide for Educators - a classified listing of websites on the internet found to be useful for enhancing curriculum and teacher professional growth.
- Find My Tutor - "Find My Tutor is a UK-based platform that connects private tutors to pupils, be it for online or in-home tutoring."
- Shakespeare - The Complete Works
- Space Place, The - Space science can be a lot more fun than your kids ever imagined. At the Space Place, brought to us by the folks at NASA, kids can learn how to make and do "spacey things," or pick up some "amazing facts" from Dr. Marc.
- StateMaster.com - a statistical database which lets you research and compare a large number of different statistics on US states.
- Statistics - How accurate is polling? Learn about statistics concepts through the case study of a fictional election.
- Study Guides and Strategies - Study Guides includes over one hundred pages of summary guides to help students succeed in their studies. Sections include learning and studying strategies, test preparation and taking, classroom and project participation, reading and writing skills, and more. Translated into 25 languages.
- Study Tips and Study Skills - "How to study is a big question. Everyone wants to study well in order to achieve goals, gain knowledge, pass exams or get qualifications. The guide will show you every aspect of study skills, teach you the best study tips and help you find the best way to study. Learning how to learn is as important as learning itself. Time is gold, so let's start."
- Sunrise/Sunset Computation - Type in a city name and find times for sunrise, sunset, and more.
- Thomas: Legislative Information on the Net
- Top 25 Reading & Writing Resources for English Buffs - "Whether you're a serial novelist, casual blogger, or you just like to flip through magazines at the dentist's office, your life wouldn't be the same without the countless years of development and fine tuning that have made English a truly global language."
4. Start volunteering! This list ranges from small projects which you can complete on your own in a few hours, to larger projects that will take more time and more people. If you find a project you can start on your own, do it!
The goal of Project Help is to facilitate early claims resolution: not only to get benefits to injured workers quickly, but also to "ensure they receive the medical treatment they need to return to health and work."
The timer appears at the bottom of the window to let you know how much time is remaining. Your work is saved and submitted automatically when time is up.
Add to the discussion. If the teacher enabled discussions, select the Open class discussion icon. Anyone can contribute to the assignment discussion, including your teacher.
Formulas get longer, problems get more intricate, homework gets more time-consuming, and this is when you start looking for help with mathematics homework. What kind of help can you get here? Well, we offer all kinds of math assistance, from project writing to algebra and geometry problem solving, and from data analysis to advanced statistics help. You can also get help from one of our staff tutors if you're having trouble understanding a particularly complicated topic that you need in order to complete the homework.
So, if you have some issues with statistics or want to take the stress off, do not hesitate to contact us whenever you wish.
Stress was especially evident among high school students. Students that reported stress from homework were more likely to be deprived of sleep.[15]
It was apparent I was stuck in the middle of my Java programming assignment, so I wondered if someone could do my homework. I researched many websites, and I liked one that
would do my assignment. This website, unlike others, responded very quickly, and this surprised me. Because this is what we all want, right? Mr. Avinash helped me to get through the
Hey Mike. It all depends on your requirement complexity and deadline. Don't worry, you will never have any bad experience here.
If you are mulling over coding homework that you are unable to finish, then we are the right people for you.
|
OPCFW_CODE
|
Azure US Government OIDC
When using OIDC in Azure Government, an error is thrown on login, but the login still completes successfully.
Using OIDC authentication...
Error: undefined. Please make sure to give write permissions to id-token in the workflow.
/usr/bin/az cloud set -n azureusgovernment
WARNING: Switched active cloud to 'AzureUSGovernment'.
WARNING: Use 'az login' to log in to this cloud.
WARNING: Use 'az account set' to set the active subscription.
Done setting cloud: "azureusgovernment"
Login successful.
The workflow has the permissions set as well
permissions:
id-token: write
contents: read
There is an open PR on this issue: https://github.com/Azure/login/pull/258. It has been waiting on approval since November.
@jamesseiwert ideally it should throw an error saying Govt clouds are not supported. Are you using a forked version of the action where you are bypassing that condition? Please share your workflow yaml for better understanding. As for support for Govt clouds, we are following up and will get back to you with more info.
@BALAGA-GAYATRI sorry for the delay. The workflow is simple, and the login appears to work, just with the false-positive error. Below is the workflow file we are using; at the end it does a simple printout of all resource groups.
```yaml
name: Test Workflow
on:
  workflow_dispatch:
  push:
permissions:
  id-token: write
  contents: read
jobs:
  dev:
    name: Dev
    environment: dev
    runs-on: [<>]
    steps:
      - name: Login to Azure US Gov Cloud
        uses: azure/login@v1
        with:
          environment: "AzureUSGovernment"
          client-id: ${{ vars.AZURE_CLIENT_ID }}
          tenant-id: ${{ vars.AZURE_TENANT_ID }}
          subscription-id: ${{ vars.AZURE_SUBSCRIPTION_ID }}
      - name: List Resource Groups
        run: |
          az group list
```
Can you please add the below permissions for OIDC token and check the logs once.
permissions:
id-token: write
contents: read
@BALAGA-GAYATRI we do have those permissions in the workflow
https://github.com/Azure/login/blob/master/src/main.ts#L116
The error is being thrown on this line while getting an id-token. Since we aren't handling the error correctly there, execution continues (which is not expected). But "Login successful" is also not expected here: since the error was thrown earlier, execution never reaches the point that would throw the error for Govt clouds. I need to look into more details to understand this better. Are you using this action in GitHub Enterprise? If yes, make sure to check this out.
We are using this in both Github.com and Github Enterprise
not stale, Gov users matter! ❤️
Hello, can we have an update on this issue? We are trying to move to OIDC authentication as recommended by the DoD Reference Architecture for DevSecOps, but have run into this same issue. Noting the documentation has read that government cloud support is coming 'soon' but this issue itself is now months old, presumably outside the definition of 'soon.' Thanks 😃
@MoChilia - Any update you can provide?
Hi @jamesseiwert! I have submitted pr https://github.com/Azure/login/pull/321 to fix this issue. Once this pr is merged, we will plan a release for it so that the OIDC authentication for sovereign clouds will be supported.
Closing this issue for now. It has been solved by https://github.com/Azure/login/pull/321.
@MoChilia this change does not seem to address powershell login as mentioned in https://github.com/Azure/login/issues/248.
Specifically:
"Error": "AADSTS900382: Confidential Client is not supported in Cross Cloud request
```yaml
name: Test Azure powershell login with OIDC
on:
  workflow_dispatch:
permissions:
  id-token: write
  contents: read
jobs:
  test-oidc-login-ps:
    runs-on: ubuntu-latest
    environment: Azure-Gov-Dev # valid environment
    steps:
      - name: OIDC Login to Azure
        uses: azure/login@master # I built lib/main.js from master
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
          environment: 'AzureUSGovernment'
          enable-AzPSSession: true # works if this is not included
```
@danelson , could you open a new issue and provide this workflow & the debug log of your workflow?
Since this is a closed issue, we may miss information here. Thanks.
|
GITHUB_ARCHIVE
|
printing labeled images
The first example showed us the very basics of lua and allowed us to check that everything was working properly. Now let’s do something a little bit more complex. Let’s try to print a list of images that have a “red” label attached to them. But first of all, what is an image?
local darktable = require "darktable"
local debug = require "darktable.debug"
print(darktable.debug.dump(darktable.database))
Running the code above will produce a lot of output. We will look at it in a moment, but first let’s look at the code itself.
We know about require darktable. Here, we need to separately require darktable.debug, which is an optional section of the API that provides helper functions to help debug lua scripts.
darktable.database is a table provided by the API that contains all images in the library database. Each entry in the database is an image object. Image objects are complex objects that allow you to manipulate your image in various ways (all documented in the types_dt_lua_image_t section of the API manual). To display our images, we use darktable.debug.dump, a function that will take anything as its parameter and recursively dump its content. Since images are complex objects that indirectly reference other complex objects, the resulting output is huge. Below is a cut down example of the output.
toplevel (userdata,dt_lua_image_t) : /images/100.JPG
  publisher (string) : ""
  path (string) : "/images"
  move (function)
  exif_aperture (number) : 2.7999999523163
  rights (string) : ""
  make_group_leader (function)
  exif_crop (number) : 0
  duplicate_index (number) : 0
  is_raw (boolean) : false
  exif_iso (number) : 200
  is_ldr (boolean) : true
  rating (number) : 1
  description (string) : ""
  red (boolean) : false
  get_tags (function)
  duplicate (function)
  creator (string) : ""
  latitude (nil)
  blue (boolean) : false
  exif_datetime_taken (string) : "2014:04:27 14:10:27"
  exif_maker (string) : "Panasonic"
  drop_cache (function)
  title (string) : ""
  reset (function)
  create_style (function)
  apply_style (function)
  film (userdata,dt_lua_film_t) : /images
    1 (userdata,dt_lua_image_t): .toplevel
    [......]
  exif_exposure (number) : 0.0062500000931323
  exif_lens (string) : ""
  detach_tag (function): toplevel.film.2.detach_tag
  exif_focal_length (number) : 4.5
  get_group_members (function): toplevel.film.2.get_group_members
  id (number) : 1
  group_with (function): toplevel.film.2.group_with
  delete (function): toplevel.film.2.delete
  purple (boolean) : false
  is_hdr (boolean) : false
  exif_model (string) : "DMC-FZ200"
  green (boolean) : false
  yellow (boolean) : false
  longitude (nil)
  filename (string) : "100.JPG"
  width (number) : 945
  attach_tag (function): toplevel.film.2.attach_tag
  exif_focus_distance (number) : 0
  height (number) : 648
  local_copy (boolean) : false
  copy (function): toplevel.film.2.copy
  group_leader (userdata,dt_lua_image_t): .toplevel
As we can see, an image has a large number of fields that provide all sort of information about it. Here, we are interested in the “red” label. This field is a boolean, and the documentation tells us that it can be written. We now just need to find all images with that field and print them out:
darktable = require "darktable"
for _, v in ipairs(darktable.database) do
  if v.red then
    print(tostring(v))
  end
end
This code should be quite simple to understand at this point, but it contains a few interesting aspects of lua that are worth highlighting:
ipairs is a standard lua function that will iterate through all numeric indices of a table. We use it here because darktable's database has non-numeric indices, which are functions to manipulate the database itself (adding or deleting images, for example).
Iterating through a table will return both the key and the value used. It is conventional in lua to use a variable named "_" to store values that we don't care about.
Note that we use the standard lua function tostring here and not the darktable-specific darktable.debug.dump. The standard function will return a name for the object whereas the debug function will print the content. The debug function would be too verbose here. Once again, it is a great debug tool but it should not be used for anything else.
|
OPCFW_CODE
|
Visual studio adds blank lines when Notepad does not
I'm trying to open a file in an existing project in Notepad, and it is formatted normally:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
namespace MyNamespace
{
public partial class MyForm : Form
{
#region Members
// etc...
But when I open it in Visual Studio it appears different, as though Visual Studio is adding new blank lines between every line:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
namespace MyNamespace
{
public partial class MyForm : Form
{
#region Members
// etc...
Now one of my co-workers suggested that I check the checkbox under Tools -> Options -> Environment -> Documents -> "Check for consistent line endings on load", but they were not sure what to do afterwards.
The code is very hard to read this way, and I'm afraid I'll mess up the code in the repository if I check it in.
This is the case for both Visual Studio 2013 and Visual Studio 2015.
Can anyone help?
Thanks.
are you using git?
Use regular replace: (\r?\n){2,} to \n.
Thank you all for the responses. @salitio: we're using Perforce and Swarm for Reviews, not Git.
@Lei Yang: Thank you for the suggestion. I've tried this Regular Expression Replace, and it removed ALL blank lines, including the legitimate ones...
As the commenters have implied, it looks like you may be in a scenario where the file is using DOS/Windows-style line endings of a carriage return + line feed (\r\n), but Visual Studio thinks your file is in a UNIX/Linux format where only a line feed is needed, and is showing both. Notepad, on the other hand, always expects DOS/Windows-style line endings.
If this is the case, try changing this setting and reopening the file:
Tools => Options => Environments => Documents => Check for consistent line ending on load in VS2015.
Otherwise, try the regular expression provided above in the comments.
Thank you for the suggestions. I've tried both Windows and UNIX formats when the dialog would pop up, and neither of them made a difference. Further, when I try File->Advanced Save Options and change the Line endings combobox from "Current Setting" to "Windows (CR LF)" and hit OK, if I re-open the dialog the combobox reverts back to "Current Setting."
Notepad only knows about \r\n, while VS supports a lot more (https://msdn.microsoft.com/en-us/library/dd409797.aspx). This means the file has one of the line-break characters that Notepad doesn't support; most likely it's a solo \n. Try a regex replace in Sublime and investigate which character is causing the problem.
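If it is unclear which line-ending characters the file actually contains, a small script can count them directly (a quick diagnostic sketch, independent of Visual Studio):

```python
# Count CRLF pairs and bare LF / CR characters in a byte string; mixed
# counts indicate the inconsistent line endings discussed above.
def count_line_endings(data: bytes):
    crlf = data.count(b"\r\n")
    lf = data.count(b"\n") - crlf  # \n not preceded by \r
    cr = data.count(b"\r") - crlf  # \r not followed by \n
    return {"crlf": crlf, "lf": lf, "cr": cr}

sample = b"using System;\r\nnamespace MyNamespace\n{\r\n"
print(count_line_endings(sample))  # {'crlf': 2, 'lf': 1, 'cr': 0}
```

Run it on the file's bytes (`open(path, "rb").read()`); a nonzero count in more than one bucket confirms mixed endings.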
|
STACK_EXCHANGE
|
import "./board.scss";
import React, { useEffect, useRef, useState } from "react";
import uuid from "react-uuid";
import Field from "components/field/Field";
import createBoard from "util/createBoard";
import revealFields from "util/revealFields";
function Board({ gameIsRunning, gameOverIsVictory, settings, flags, setFlags }) {
const { columns, rows, mines } = settings;
const ref = useRef(null);
const [boardMatrix, setBoardMatrix] = useState([]);
const [undetectedMinesRemaining, setUndetectedMinesRemaining] = useState(columns * rows);
useEffect(() => {
if (gameIsRunning) {
const boardMatrix = createBoard(settings);
setBoardMatrix(boardMatrix);
setFlags(0);
}
}, [settings, gameIsRunning, setFlags]);
useEffect(() => {
if (undetectedMinesRemaining <= 0 && flags >= mines) {
gameOverIsVictory(true);
}
}, [flags, gameOverIsVictory, mines, undetectedMinesRemaining]);
function updateSafeFieldsRemaining(boardMatrix) {
let undetectedMinesRemaining = columns * rows;
let flags = 0;
boardMatrix.forEach((column) => {
column.forEach((field) => {
if (field.isRevealed || field.isFlagged) {
undetectedMinesRemaining--;
}
if (field.isFlagged) {
flags++;
}
});
});
setFlags(flags);
setUndetectedMinesRemaining(undetectedMinesRemaining);
}
const revealField = (x, y) => {
if (boardMatrix[x][y].isFlagged || boardMatrix[x][y].isRevealed) return;
if (boardMatrix[x][y].value === "X") {
gameOverIsVictory(false);
}
const boardMatrixCopy = [...boardMatrix];
const updatedBoardMatrix = revealFields(boardMatrixCopy, x, y, settings);
updateSafeFieldsRemaining(updatedBoardMatrix);
setBoardMatrix(updatedBoardMatrix);
}
const toggleFlag = (e, x, y) => {
if (boardMatrix[x][y].isRevealed) return;
if (flags >= mines && !boardMatrix[x][y].isFlagged) return;
e.preventDefault();
let boardMatrixCopy = [...boardMatrix];
boardMatrixCopy[x][y].isFlagged = !boardMatrixCopy[x][y].isFlagged;
updateSafeFieldsRemaining(boardMatrixCopy);
setBoardMatrix(boardMatrixCopy);
}
const board = boardMatrix.map((column, index) => {
return (
<div
key={uuid()}
className={(index % 2) ? `column-even` : `column-odd`}
>
{column.map(field => {
return (
<Field
key={uuid()}
data={field}
toggleFlag={toggleFlag}
revealField={revealField}
/>
);
})}
</div>
);
});
return (
<div
id="board"
ref={ref}
onContextMenu={(e) => e.preventDefault()}
>
{board}
</div>
);
}
export default Board;
|
STACK_EDU
|
Find files with filename filter
I am using VB.net VS2012 and am having trouble with getting a list of files with a filter.
Here is my code:
Public Function SearchAndAddToListWithFilter(ByVal path As String, ByVal Recursive As Boolean, arrayListOfFilters As ArrayList, ByRef listOfFiles As List(Of FileInfo))
If Not Directory.Exists(path) Then Exit Function
Dim initDirInfo As New DirectoryInfo(path)
For Each oFileInfo In initDirInfo.GetFiles
Application.DoEvents()
For x = 0 To arrayListOfFilters.Count - 1
If (oFileInfo.Name Like arrayListOfFilters(x)) Then
listOfFiles.Add(oFileInfo)
End If
Next
Next
If Recursive Then
For Each oDirInfo In initDirInfo.GetDirectories
SearchAndAddToListWithFilter(oDirInfo.FullName, True, arrayListOfFilters, listOfFiles)
Next
End If
End Function
And here is an example of how to use it:
Dim stringFilterList As String = "*.mp3, *.docx, *.mp3, *.txt"
Dim arrayListOfFilenameFilters As New ArrayList(stringFilterList.Split(","))
Dim stringFolderPath As String = "C:\temp\folder\"
Dim booleanSearchSubFolders As Boolean = True
Dim listOfFilesFoundViaSearch As New List(Of FileInfo)
SearchAndAddToListWithFilter(stringFolderPath, booleanSearchSubFolders, arrayListOfFilenameFilters, listOfFilesFoundViaSearch)
For x = 0 To listOfFilesFoundViaSearch.Count - 1
MsgBox(listOfFilesFoundViaSearch(x).FullName)
Next
For some reason, the code only adds files to the list that satisfy the first filter in the list of filters.
Can I please have some help to get this code working?
Thank you.
Functions return values; passing a value ByRef is NOT the way to do it. (As an aside, your original code likely only matches the first filter because splitting "*.mp3, *.docx, ..." on "," leaves a leading space on every entry after the first, so the Like comparison never matches them.)
The following function will work:
Private Function SearchAndAddToListWithFilter(ByVal path As String, ByVal filters As String(), ByVal searchSubFolders As Boolean) As List(Of IO.FileInfo)
If Not IO.Directory.Exists(path) Then
Throw New Exception("Path not found")
End If
Dim searchOptions As IO.SearchOption
If searchSubFolders Then
searchOptions = IO.SearchOption.AllDirectories
Else
searchOptions = IO.SearchOption.TopDirectoryOnly
End If
Return filters.SelectMany(Function(filter) New IO.DirectoryInfo(path).GetFiles(filter, searchOptions)).ToList
End Function
and to use this function:
Dim filters As String() = {"*.mp3", "*.docx", "*.bmp", "*.txt"}
Dim path As String = "C:\temp\folder\"
Dim foundFiles As List(Of IO.FileInfo) = SearchAndAddToListWithFilter(path, filters, True)
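The SelectMany call above runs one directory search per filter and flattens the per-filter results into a single list. A rough Python analogue of that shape (function name and layout are illustrative, not part of this thread) might look like:

```python
from pathlib import Path

# Sketch of the same idea as the SelectMany-based search: run one glob per
# filter pattern and flatten the per-pattern results into one list.
def search_with_filters(path, filters, recursive=True):
    root = Path(path)
    glob = root.rglob if recursive else root.glob
    return [f for pattern in filters for f in glob(pattern)]
```

Like the VB version, duplicate patterns in `filters` would yield duplicate entries in the result.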
The solution provided by @Steve really shows the .NET way of doing the task.
However I used a recursive solution with possible definitions of maximum depth and/or duration. For completeness of this topic, I want to post the code:
''' <summary>
''' Search files in directory and subdirectories
''' </summary>
''' <param name="searchDir">Start Directory</param>
''' <param name="searchPattern">Search Pattern</param>
''' <param name="maxDepth">maximum depth; 0 for unlimited depth</param>
''' <param name="maxDurationMS">maximum duration; 0 for unlimited duration</param>
''' <returns>a list of filenames including the path</returns>
''' <remarks>
''' recursive use of Sub dirS
'''
''' </remarks>
Public Shared Function dirRecursively(searchDir As String, searchPattern As String, _
Optional maxDepth As Integer = 0, _
Optional maxDurationMS As Long = 0) As List(Of String)
Dim fileList As New List(Of String)
Dim depth As Integer = 0
Dim sw As New Stopwatch
dirS(searchDir, searchPattern, maxDepth, maxDurationMS, fileList, depth, sw)
Return fileList
End Function
''' <summary>
''' Recursive file search
''' </summary>
''' <param name="searchDir">Start Directory</param>
''' <param name="searchPattern">Search Pattern</param>
''' <param name="maxDepth">maximum depth; 0 for unlimited depth</param>
''' <param name="maxDurationMS">maximum duration; 0 for unlimited duration</param>
''' <param name="fileList">Filelist to append to</param>
''' <param name="depth">current depth</param>
''' <param name="sw">stopwatch</param>
''' <param name="quit">boolean value to quit early (at given depth or duration)</param>
''' <remarks>
''' </remarks>
Private Shared Sub dirS(searchDir As String, searchPattern As String, _
Optional maxDepth As Integer = 0, _
Optional maxDurationMS As Long = 0, _
Optional ByRef fileList As List(Of String) = Nothing, _
Optional ByRef depth As Integer = 0, _
Optional ByRef sw As Stopwatch = Nothing, _
Optional ByRef quit As Boolean = False)
If maxDurationMS > 0 Then
If depth = 0 Then
sw = New Stopwatch
sw.Start()
Else
If sw.ElapsedMilliseconds > maxDurationMS Then
quit = True
Exit Sub
End If
End If
End If
If maxDepth > 0 Then
If depth > maxDepth Then
quit = True
Exit Sub
End If
End If
' check if directory exists
If Not Directory.Exists(searchDir) Then
Exit Sub
End If
' find files
For Each myFile As String In Directory.GetFiles(searchDir, searchPattern)
fileList.Add(myFile)
Next
' recursively scan subdirectories
For Each myDir In Directory.GetDirectories(searchDir)
depth += 1
dirS(myDir, searchPattern, maxDepth, maxDurationMS, fileList, depth, sw, quit)
If quit Then Exit For
depth -= 1
Next
End Sub
ListView1.Items.Clear()
For Each files As String In System.IO.Directory.GetFiles(cmb_Drives.SelectedItem.ToString, txtSearch.Text)
Dim ico As Icon = System.Drawing.Icon.ExtractAssociatedIcon(files)
ImageList1.Images.Add(ico)
Dim list As ListViewItem = New ListViewItem(My.Computer.FileSystem.GetFileInfo(files).FullName, ImageList1.Images.Count - 1)
ListView1.Items.Add(list)
Next
Try this, the easiest way. By the way, don't mind the ImageList; it is only for the file icons. First, get the logical drive from cmb_Drives.SelectedItem.ToString() and the starting letters of the file from the textbox to filter the files in the ListView.
you can edit your post instead of leaving a comment. welcome to stackoverflow!
|
STACK_EXCHANGE
|
Algorithms for efficient adjustment sets
Hello DoWhy team.
Congrats on the great work on this package! I wonder if you would be interested in a contribution to the package. First, a brief intro.
In a series of papers with co-authors (1, 2, and 3, the last one currently under review in the Journal of Causal Inference), we have developed theory and algorithms to compute efficient (meaning low variance) adjustment sets for estimating the average treatment effect of a treatment on an outcome under a non-parametric causal graphical model. Our results allow for hidden variables in the graph (as long as at least one adjustment set is comprised of observable variables), and the possibility of individualised treatments (in which the values of the intervention variable depend on some other set of variables).
More precisely, suppose we are given a causal graph G specifying:
a treatment variable A,
an outcome variable Y,
a set of observable (that is, non-latent) variables N,
a set of observable variables that will be used to allocate treatment L, and possibly
positive costs associated with each observable variable.
Suppose moreover that there exists at least one adjustment set with respect to A and Y in G that is comprised of observable variables. Consider the following definitions:
An optimal adjustment set is an observable adjustment set that yields non-parametric estimators of the interventional mean with the smallest asymptotic variance among those that are based on observable adjustment sets.
An optimal minimal adjustment set is an observable adjustment set that yields non-parametric estimators of the interventional mean with the smallest asymptotic variance among those that are based on observable minimal adjustment sets. An observable minimal adjustment set is a valid adjustment set such that all its variables are observable and the removal of any variable from it destroys validity.
An optimal minimum cost adjustment set is defined similarly, being optimal in the class of observable adjustment sets that have minimum possible cost.
Under these assumptions, we have shown that optimal minimal and optimal minimum cost adjustment sets always exist, and can be computed in polynomial time. We also provide a sufficient criterion for the existence of an optimal adjustment set and a polynomial time algorithm to compute it when it exists.
These results are not only valid for non-parametric graphs and estimators, but also by virtue of results in this paper, for linear structural equation models and OLS estimators.
We have implemented these algorithms in the optimaladj package, with routines from networkx doing most of the algorithmic heavy lifting. We believe they would be a nice addition to the DoWhy package.
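As a very small illustration of the underlying notion (a toy example, with made-up variable names, not the optimal-adjustment algorithms from the papers): in a DAG with no hidden variables, the parents of the treatment always form a valid adjustment set, though generally not the most efficient one.

```python
# Toy DAG as a child -> parents mapping (names are illustrative only).
dag = {
    "A": ["Z1", "Z2"],        # treatment
    "Y": ["A", "Z2", "W"],    # outcome
    "W": [], "Z1": [], "Z2": [],
}

# The parents of the treatment block every backdoor path into A, so they
# are always a valid (if not variance-optimal) adjustment set.
parent_adjustment_set = set(dag["A"])
print(parent_adjustment_set == {"Z1", "Z2"})  # True
```

The point of the proposed algorithms is precisely that other valid sets (here, for instance, one including the outcome parent W) can yield lower-variance estimators.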
Going back to my first point. Would you be interested in a PR that incorporates these algorithms into DoWhy? They could supplement the already implemented backdoor identification strategy.
Best,
Ezequiel Smucler
Hey, @esmucler thanks for reaching out. I'm familiar with your work and I think it will be a great addition to DoWhy.
Thinking in terms of user-facing API, here's a proposal: the identify_effect method already contains options to constrain the adjustment set returned. We have "exhaustive", "minimal-adjustment", "maximal-adjustment" and a "default" that is a heuristic mix of minimal and maximum. One option is to add more options here for the user, which may include, "optimal-adjustment", "optimal-minimal-adjustment" and "optimal-minimum-cost-adjustment". Would something like this work?
In terms of code structure, it may be easiest to add a new class for your method under causal_identifiers folder. Then you may need to modify identify_ate_effect method in CausalIdentifier class to add a call to your class.
In any case, feel free to raise a PR. You may also consider submitting a draft (incomplete) PR so that we can review the code structure and API before all the detailed code is added.
+1, I agree that this would be a great addition, Ezequiel!
Amit, where in the dowhy API would we add the costs associated with observed variables? Would we embed them within the graph structure, or add them as an extra argument in the identify_effect method? Any thoughts about what seems more natural, Ezequiel?
|
GITHUB_ARCHIVE
|
Welcome to the Advanced Rocketry(AR) advanced ore configuration readme!
This document will guide you through manually configuring ore for spawning on AR's various planets.
Default ore configurations are loaded from "./config/advRocketry/oreConfig.xml". This file can be used to specify which ores are generated on different planet types. If a type of planet is not specified, then it will use standard overworld generation.
There are two factors that determine the planet type: Atmosphere Pressure and Temperature. A list of each can be found below:
The "OreGen" tag defines a new type of planet for which to define ore generation. It contains "pressure" and "temp" attributes that specify the type of planet. Both attributes use integers corresponding to the temperature and pressure tables above, and at least one of them must be present.
Defining only one of the attributes will use the same configuration for all values of the undefined attribute. For example, if I do not define "temp" and define pressure to be 3, then I am defining ore generation for all planets with no atmosphere, regardless of surface temperature.
Planet type definitions are read into the game in order. If I define one type of oregen for all low-pressure planets, then farther down the file define oregen for low-pressure, high-temperature planets, the low-pressure high-temperature planets will have a different oregen than other low-pressure planets. If I reverse the order, however, the entry for low-pressure high-temperature planets will be overwritten.
The "ore" tag specifies an entry for a type of ore to spawn. This tag has the following attributes:
block: the name or id of the block
meta: optional attribute to specify the meta value of the block
minHeight: minimum height at which to spawn the ore (between 1 and maxHeight)
maxHeight: maximum height at which to spawn the ore (between minHeight and 255)
clumpSize: amount of ores to generate in each clump
chancePerChunk: maximum number of clumps that can be spawned in a given chunk
All planets with no atmosphere will spawn large quantities of iron blocks except those with high temperature, which will instead spawn gold blocks
--- ./config/advancedRocketry/OreConfiguration.xml ---
<OreConfig>
    <oreGen pressure="3">
        <ore block="minecraft:iron_block" minHeight="20" maxHeight="80" clumpSize="32" chancePerChunk="64" />
    </oreGen>
    <oreGen pressure="3" temp="5">
        <ore block="minecraft:gold_block" minHeight="20" maxHeight="80" clumpSize="32" chancePerChunk="64" />
    </oreGen>
</OreConfig>
|
OPCFW_CODE
|
[Eventgrid] Improve error message when providing wrong authentication
Upon providing a wrong key for authentication when constructing the publisher client, we get:
UnboundLocalError: local variable 'authentication_policy' referenced before assignment on providing wrong credential in the publisher client
This message should be improved, perhaps by insisting up front on providing an AzureKeyCredential.
Can you paste a full example with the full stacktrace of this?
>>> e = EventGridPublisherClient("https://rakshith-eg.westus-1.eventgrid.azure.net/api/events", "dsf")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Administrator\Documents\Workspace\azure-sdk-for-python\envtrace\lib\site-packages\azure\eventgrid\_publisher_client.py", line 75, in __init__
policies=EventGridPublisherClient._policies(credential, **kwargs),
File "C:\Users\Administrator\Documents\Workspace\azure-sdk-for-python\envtrace\lib\site-packages\azure\eventgrid\_publisher_client.py", line 82, in _policies
auth_policy = _get_authentication_policy(credential)
File "C:\Users\Administrator\Documents\Workspace\azure-sdk-for-python\envtrace\lib\site-packages\azure\eventgrid\_helpers.py", line 87, in _get_authentication_policy
return authentication_policy
UnboundLocalError: local variable 'authentication_policy' referenced before assignment
That's a bug in this code:
https://github.com/Azure/azure-sdk-for-python/blob/c2dea16a2d3b202dfa46a93b2bfadff44a020287/sdk/eventgrid/azure-eventgrid/azure/eventgrid/_helpers.py#L77-L87
If every `if` condition is false, execution falls through to a `return` of a variable that was never assigned.
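This is the classic Python failure mode where a name is only bound inside branches that can all be skipped. A minimal, self-contained reproduction (the function and the string/bytes branches here are illustrative, not the SDK's actual types):

```python
def pick_policy(credential):
    # 'policy' is only assigned inside branches that may all be skipped,
    # mirroring the structure of the buggy _get_authentication_policy.
    if isinstance(credential, str):
        policy = "key-policy"
    elif isinstance(credential, bytes):
        policy = "sas-policy"
    return policy  # UnboundLocalError if neither branch ran

try:
    pick_policy(12345)  # an int matches no branch
except UnboundLocalError as exc:
    print("reproduced:", exc)
```

Ending the function with an explicit `raise` for unsupported types, as in the correction below, removes the fall-through path entirely.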
Correct code:
def _get_authentication_policy(credential):
    if credential is None:
        raise ValueError("Parameter 'self._credential' must not be None.")
    if isinstance(credential, AzureKeyCredential):
        return AzureKeyCredentialPolicy(credential=credential, name=constants.EVENTGRID_KEY_HEADER)
    if isinstance(credential, EventGridSharedAccessSignatureCredential):
        return EventGridSharedAccessSignatureCredentialPolicy(
            credential=credential,
            name=constants.EVENTGRID_TOKEN_HEADER
        )
    raise ValueError("The provided credential should be an instance of EventGridSharedAccessSignatureCredential or AzureKeyCredential")
|
GITHUB_ARCHIVE
|
Value of threadidx.x (.y, .z), blockidx.x etc. in CUDA
I understand that I use threadidx.x etc. to reference a specific thread, but I am transferring code from a for loop in a CPU and would like to reference numbers 0...N using threadidx.x, but this doesn't seem to work. I declare tdx = threadIdx.x as an integer, but what integer is actually being stored in tdx?
As you can read in the documentation, the variables threadIdx, blockIdx and blockDim are variables that are created automatically on every execution thread. They have .x, .y and .z properties so that you can map threads to your problem space as you see fit.
When you execute the kernel, you determine how many threads each block will have (in 3D) and how many blocks there are in a 3D grid. In the following code:
dim3 threads(tX, tY, tZ);
dim3 blocks(gX, gY, gZ);
kernel_function<<<blocks, threads>>>(kernel_parameters);
You are launching the kernel function named kernel_function so that the CUDA runtime launches a 3D grid of blocks with dimensions gX × gY × gZ. Each of those blocks contains threads organized in a 3D structure of size tX × tY × tZ.
If the size of the 3rd dimension is 1 (i.e. it is a 2D mapping), the figure in the official CUDA documentation shows it better.
What this means is that the following will be true for every thread executing your kernel:
blockDim.x = tX
blockDim.y = tY
blockDim.z = tZ
gridDim.x = gX
gridDim.y = gY
gridDim.z = gZ
And every thread will have its own coordinates within those parameters. Mathematically:
0 <= threadIdx.x < blockDim.x = tX
0 <= threadIdx.y < blockDim.y = tY
0 <= threadIdx.z < blockDim.z = tZ
0 <= blockIdx.x < gridDim.x = gX
0 <= blockIdx.y < gridDim.y = gY
0 <= blockIdx.z < gridDim.z = gZ
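To map a flat CPU loop over 0…N-1 onto threads — which is what the question asks — the usual idiom is a per-thread global index, `i = blockIdx.x * blockDim.x + threadIdx.x`, guarded by `if (i < N)` since the launch may create more threads than N. The arithmetic is easy to sanity-check outside CUDA; here is a small Python sketch (my own illustration) enumerating the global indices a 1-D launch would produce:

```python
def launch_indices(grid_dim_x, block_dim_x):
    """Emulate the global indices i = blockIdx.x * blockDim.x + threadIdx.x
    produced by a 1-D CUDA launch of grid_dim_x blocks of block_dim_x threads."""
    return [block_idx * block_dim_x + thread_idx
            for block_idx in range(grid_dim_x)
            for thread_idx in range(block_dim_x)]

# 3 blocks of 4 threads cover every index 0..11 exactly once
print(launch_indices(3, 4))
```

Each thread computes its own `i` from the built-in variables, so the loop body runs once per index without any explicit loop.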
PS: this is a really basic CUDA question; I really recommend that you read the official CUDA Programming Guide
what is threadDim ?? You've got some of your built-in variables confused. The grid dimensions are gridDim.x,y,z in units of blocks. The block dimensions are blockDim.x,y,z in units of threads. Each block has a unique index blockIdx.x,y,z within the grid. The only thread built-in variable is threadIdx.x,y,z, which is the thread index, unique within the block.
You are correct with regards to the variable names and I have corrected them (note to self: should not respond to SO questions after midnight)
|
STACK_EXCHANGE
|
Adapted from a YouTube comment, which was adapted from a text file in my notes.
You wanna stan LOONA? Cool, you should. They're the best K-Pop group in the history of K-Pop, for a few reasons:
I'm not going to dive into all of those. Instead, this little guide will point you to resources that will help you discover them on your own.
First things first: as of the time I'm writing this, there are 343 MP3s in my LOONA discography directory, including 55 albums/singles/mini-albums. Don't worry about most of them just yet.
Watch these, in this order:
Yeah there's a boycott on. Don't worry about it, you're not in that deep.
Next, check out Sonatine, Pose, Sweet Crazy Love, Love4Eva, and anything else YouTube recommends to you, if you want to.
It's time for memes. Check out the most popular videos from GirlsOf OurMonth for starters. Also check out LOONIAN, Orbit Olight 2, Everyday Wizbit, Blackwhite, driftnorth, and other great LOONA meme editors, if you want. The point here is to learn their personalities.
More music. You need Universe, Where you at?, Pale Blue Dot, See Saw, Egoist, Let Me In, Eclipse, Chaotic, Stylish, and A Different Night. Maybe just start with a few and see how you're feeling. They have a LOT of music, so feel free to self-direct if you find something you vibe with. At this point it would make sense to start learning about the subunits and history of the group, at least a little bit.
Lore. Okay this is complicated because none of it is really official. I really recommend SuA's SinB, especially the Eclipse Theory series, but it's long and really complex. You could also try the Twinfish or Loominosity videos, or the lore guides on the wiki linked above.
Now also you should probably head to the subreddit and check the megathreads in the sidebar. Lots has been happening in the last year or so, so it's good to be aware of where members are now and what all is going on with CTD, ARTMS, the boycott, and all of it.
Jaden, BlockBerry, more memes (see above), more lore (see above), the whole debut project (see the wiki); if you're going this deep, you're probably already an Orbit, so I don't need to tell you where to go. It's a wild world here, down near the bottom of the iceberg.
Don't feel pressured to go deep. If you like some of the music, just enjoy it. If the memes are funny but you don't care about the lore, that's totally fine. Personally I've been in this LOONA life since there were only three members, so I've had time to slowly absorb all of this stuff. If I was getting into LOONA just now, I honestly probably would not bother learning the lore, or the company drama, or even about how Lee Soo-Man produced the only two albums outside of SM just because he was impressed with LOONA's Cherry Bomb cover. Just bite off what you can chew and don't let it overwhelm you.
Welcome to LOONA Island.
|
OPCFW_CODE
|
The following was removed from Chapter 14: “Connectivity”.
Configure a Wireless Network Interface
In the previous section, "View the Status of Your Wireless Network Interfaces," you used iwconfig to see important details about your wireless card and its connection. You can also use iwconfig, however, to configure that wireless card and its connections. If this sounds like ifconfig, it should, as iwconfig was based on ifconfig and its behaviors.
You can make several changes with iwconfig, but you're only going to look at a few (for more details, see the iwconfig man page).
The network topologies associated with wired networks, such as star, bus, and ring, to name a few, have been known and understood for quite some time. Wireless networks introduce some new topologies to the mix, including the following:
- Managed (an access point creates a network to which wireless devices can connect; the most common topology for wireless networking)
- Ad-Hoc (two or more wireless devices form a network to work with each other)
- Master (the wireless device acts as an access point)
- Repeater (the wireless device forwards packets to other wireless devices)
There are others, but those are the main ones.
For more on stars, busses, rings, and the like, see Wikipedia’s “Network Topology” at http://en.wikipedia.org/wiki/Network_topology.
Using iwconfig, you can tell your wireless card that you want it to operate differently, in accordance with a new topology.
# iwconfig ath0 mode ad-hoc
After specifying the interface, simply use the mode option followed by the name of the mode you want to use (ad-hoc in this case).
Remember that the card you're using in these examples has an interface name of ath0; yours might be wlan0, or something else entirely. To find out your interface's name, use iwconfig by itself, as discussed in the previous section.
The Extended Service Set Identifier (ESSID) is the name of the wireless network to which you're joined or want to join. Most of the time an ESSID of any will work just fine, assuming that you can meet the network's other needs, such as encryption, if that's necessary. Some networks, however, require that you specify the exact ESSID.
# iwconfig ath0 essid lincoln
Here you are joining a wireless network with an ESSID of lincoln. Simply use the essid option, followed by the name of the ESSID, and you're good.
More and more networks are using encryption to protect users’ communications from sniffers that capture all the traffic and then look through it for useful information. The simplest form of network encryption for wireless networks is Wired Equivalent Privacy (WEP). Although this provides a small measure of security, it’s just that: small. WEP is easily cracked by a knowledgeable attacker, and it has now been superseded by the much more robust Wi-Fi Protected Access (WPA). Unfortunately, getting WPA to work with wireless cards on Linux can be a real bear, and is beyond the scope of this book. Besides, WEP, despite its flaws, is still far more common, and it is better than nothing. Just don’t expect complete and total security using it.
For more on WEP and WPA, see Wikipedia’s “Wired Equivalent Privacy” (http://en.wikipedia.org/wiki/Wired_Equivalent_Privacy) and “Wi-Fi Protected Access” (http://en.wikipedia.org/wiki/Wi-Fi_Protected_Access). You can find information about getting WPA to work with your Linux distribution at “Linux WPA/WPA2/IEEE 802.1X Supplicant” (http://w1.fi/wpa_supplicant/). If you’re using Windows drivers via ndiswrapper, also be sure to check out “How to Use WPA with ndiswrapper” (http://sourceforge.net/apps/mediawiki/ndiswrapper/).
WEP works with a shared encryption key, a password that exists on both the wireless access point and your machine. The password can come in two forms: hex digits or plain text. It doesn't really matter which, because iwconfig can handle both. If you've been given hex digits, simply follow the enc option with the key.
# iwconfig ath0 enc 646c64586278742a6229742f4c
If you’ve instead been given plain text to use, you still use the
enc option, but you must preface the key with
s: to indicate that what follows is a text string.
# iwconfig ath0 enc s:dldXbxt*b)t/L
I created those WEP keys using the very nice WEP Key Generator found at http://www.andrewscompanies.com/tools/wep.asp.
If you have several options to change at one time, you probably would like to perform all of them with one command. To do so, follow iwconfig with your device name, and then place any changes you want to make one after the other.
# iwconfig ath0 essid lincoln enc 646c64586278742a6229742f4c
The preceding command changes the ESSID and sets WEP encryption using hex digits for the wireless device ath0. You can set as many things at one time as you'd like.
|
OPCFW_CODE
|
from typing import Dict, List
from collections import defaultdict
from timeline.src.datamodel.ecb_plus.document import Document
from timeline.src.datamodel.ecb_plus.entities import Entity
from timeline.src.datamodel.ecb_plus.text import Sentence
from timeline.src.datamodel.ecb_plus.markable import Markable
class Context:
    def __init__(self):
        self._entities: Dict[str, Entity] = {}
        self._dataset_id = None
        self._sentences: Dict[str, Sentence] = {}
        self._document2sentences: Dict[str, List[Sentence]] = defaultdict(list)
        self._markables: Dict[str, Markable] = {}
        self._document2markables: Dict[str, List[Markable]] = defaultdict(list)

    def dataset_id(self) -> str:
        return self._dataset_id

    def get_entities_dict(self) -> Dict:
        return self._entities

    def get_entity_by_id(self, id: str, create: bool = True) -> Entity:
        if create and id not in self._entities:
            self._entities[id] = Entity()
        return self._entities[id]

    def get_sentence(self, document: Document, id: str, create: bool = True) -> Sentence:
        sentence_id = Sentence.abs_id(document, id)
        if create and sentence_id not in self._sentences:
            sentence = Sentence(document=document, id=id)
            self._sentences[sentence_id] = sentence
            self._document2sentences[document.id()].append(sentence)
        return self._sentences[sentence_id]

    def get_sentences_by_document(self, document: Document) -> List[Sentence]:
        return self._document2sentences[document.id()]

    def get_markable(self, document: Document, markable_id: str, create: bool) -> Markable:
        # Note: unlike get_sentence, this does not honor 'create';
        # markables are registered via add_markable_to_document instead.
        return self._markables[markable_id]

    def get_markables_by_document(self, document: Document) -> List[Markable]:
        return self._document2markables[document.id()]

    def add_markable_to_document(self, document: Document, markable: Markable) -> None:
        self._document2markables[document.id()].append(markable)
|
STACK_EDU
|
Hello Everyone! As you may or may not remember during that dastardly thing I call a day job, I am a Lead Web Developer at a Chicago Web Design company called Orbit Media Studios. Megan has been after me for ages to actually share some of that knowledge, so here I am. Sharing away.
So you want your blog to rank better in search engines? This is commonly called SEO, or Search Engine Optimization. Having an SEO-friendly site means that new viewers are more likely to stumble upon your site while just browsing the internet. There are quite a few ways to go about this, be it through using appropriate keywords, making sure to have (and submit) a sitemap for Google, or otherwise optimizing your site. These are just a few, but one of the most overlooked is using header tags appropriately. Using h1-h6 tags is an important factor in your SEO ranking. The higher your ranking, the more likely people are to find your site searching with Google, Bing, or other search engines.
(On a side note: Notice how I specifically said ‘Chicago Web Design’ when referencing the company I work for above. This is a key phrase the company tries to rank for, and in fact if you search for that in Google, Orbit usually comes up first. A book blogging site might try to always use the phrase ‘Young Adult Book Reviews’. You want to use something people are really looking for, but also something you can use a lot without it sounding forced.)
Who needs them?
You’ll want to use header tags to break up content for your readers. Since the average viewer skim reads and spends less than a minute on a page, breaking your content up into more easily read and eye-catching chunks will help get more of your information across. What good is writing that really excellent book review if someone doesn’t actually learn anything from it? H1-H6 tags are also weighed more heavily to Google than just regular body text of a post, especially the h1 tag.
What Should I (not) be doing?
- Do use h1-h6 to break up your content as needed.
- Do NOT reuse the H1 tag multiple times on a page. While you can go crazy with the h2-h6, reusing the h1 tag can negatively affect your SEO ranking.
- Do use the h1-h6 tags in order. While you can go from h2, to h3, and back to an h2, try not to skip straight from h2 to h5.
- Do view your page with CSS disabled. Imagine this is what Google would “see” when it views your site. (While you’re at it, keep in mind that this is also how a screen reader would read your site.)
- Do use a keyword in your h1 tag, but do NOT abuse this. If you want people to find your review of Iron Knight, your h1 tag should show this. The keywords used in an h1 tag have more weight SEO-wise than keywords in your content. That doesn't mean you should say Iron Knight in every heading tag, however, as this would actually negatively affect your score.
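Putting those do's and don'ts together, a heading outline for a review post might look something like this (the titles are illustrative, not a template you must follow):

```html
<h1>Iron Knight Review</h1>   <!-- one h1 per page, carrying your key phrase -->
<h2>Plot Summary</h2>
<h2>What Worked</h2>
  <h3>Characters</h3>
  <h3>Pacing</h3>
<h2>Final Verdict</h2>        <!-- dropping back to h2 is fine; skipping h2→h5 is not -->
```

With CSS disabled, this reads as a clean outline — which is roughly what a search engine crawler and a screen reader both see.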
That’s all folks!
|
OPCFW_CODE
|
Cross-modal Integration and Value-Based Decision-Making
Key Research Question
- Does the acquisition of expected values through reward learning affect the cross-modal integration of two multi-modal stimuli that serve as decision cues?
- Does the strength or the efficiency of cross-modal integration affect the learning of the expected values of these multi-sensory cues?
Can the expected values of cues that we learn through repeated associations with a reward change the way we perceptually integrate these stimuli? Does the efficiency with which we integrate stimuli from different modalities influence how we learn about rewards that are associated with them? These questions lie at the heart of a new research project that is investigating the interaction of cross-modal integration and value-based decision-making in great detail.
Each experience that we encounter every day - e.g. sitting in an espresso bar at the harbor of a small Mediterranean fishing village on a sunny day - is composed of different sensory inputs: images, sounds, smells, tastes, touches. Each of these sensory inputs is processed in a different primary brain region. Cross-modal integration is the awe-inspiring capability of the brain to bind information from these different sensory modalities together to form a coherent percept.
Cross-modal integration is governed by at least three principles. The spatial rule states that if information from different modalities originates from the same spatial location, cross-modal integration is stronger. Similarly, the temporal rule states that if stimuli from different modalities are presented close together in time, cross-modal integration is stronger and more efficient. Finally, the principle of inverse effectiveness states that cross-modal integration will be stronger if the uni-modal representations are weak or diffuse. We intend to create stimuli according to the first two rules and investigate whether stronger or more efficient integration will benefit value-based learning and decision-making.
There are specific areas in the brain that are (among other things) dedicated to the integration of stimuli from different modalities (e.g. temporal parietal junction (TPJ) and superior temporal sulcus (STS)). The primary sensory cortices of different modalities are connected with these integration areas and feed the processed stimulus information to them. In these integration areas this stimulus information is combined. We think that depending for instance on the saliency or the expected value of these different stimuli, one modality will be weighted more heavily during integration and will come to dominate the resulting percept. We intend to manipulate the balance of different modalities during integration by associating the different stimuli with rewards or punishments. We expect that the modality with a higher expected value will come to dominate the cross-modal integration, whereas the modality with a negative expected value will be de-weighted during integration.
Influence of Value-based Decision-Making on Cross-Modal Integration
In this project we are changing cross-modal integration by endowing the constituting stimuli with different expected values through learning. We are using visual and auditory stimuli, which - after an initial measurement of cross-modal activity - will be associated with a reward, a punishment, or no outcome. During learning and afterwards, we will measure cross-modal BOLD activity again and determine whether the connectivity strength and the composition of cross-modal BOLD activity have changed. We intend to demonstrate this using model-based fMRI analysis (Gläscher & O'Doherty, 2010), connectivity analyses (non-linear Dynamic Causal Modeling, Stephan et al, 2008) and Pattern Component Modeling (Diedrichsen et al., 2011), a novel form of multivariate pattern analysis for fMRI data.
Influence of Integration principles on Value-Based Decision-Making
Here, we investigate the reverse side of the interaction of learning and cross-modal integration. We are using the spatial and temporal rules to create stimuli that will be integrated more or less strongly and associate them with different rewards. We are testing whether reward associations between highly integrated cross-modal stimuli are learned more easily or more quickly (estimated via a learning rate) than those of non-rewarded multi-sensory stimuli.
|
OPCFW_CODE
|
Updated: Oct 28, 2020
I am going to talk about an interesting part of my ongoing Computer Science coursework, as I am finding it fascinating. Part of my coursework involves writing timing code that calculates the runtime of each function.
My code attaches decorators to every function in the code it is analyzing, which record the start and stop time of each function in an array. That array is then deconstructed to work out how much total time is spent solely in one function, not counting the time that another function called within it runs for, so a user can see exactly which parts of an algorithm are taking up the time. The code then outputs a graph such as the one below, with the name of and time spent in each function on the nodes and the number of times one function called another on the edges.
Adding up all the times in each node would then tell you how much time the code takes. For example, you can see on the graph above that most of the time is taken inside Toggle and Turn even though Finding is called 300 times. My algorithm for working out the times works as follows:
Just before a function is run the time it starts is appended into an array along with a unique name for that instance of the function. When the function is finished the decorator then also appends the finish time of the function into the array along with the name so array might look like:
[(start 1 , 10), (add 1 , 12), (add 1 , 13),(add 2 , 15),(add 2, 17), (start 1 ,18)]
My code then loops through this array looking for where the same instance of a function starts and stops next to each other, and then collapses them into a single time, adding (end time - start time) onto the total time spent inside that function, e.g.
after first pass array would look like:
[(start 1 , 10), 1,(add 2 , 15),(add 2, 17), (start 1 ,18)]
TimeSpentInAdd = 1
It would then pass again, looking for where a start and stop are either next to each other or separated solely by times. If there are times in between, it calculates (stop time - start time) - sum of the times in between, e.g.
after second pass:
[(start 1 , 10), 1,2, (start 1 ,18)]
TimeSpentInAdd = 1 + 2 = 3
After third pass:
TimeSpentInStart = (18-10)-(1+2) = 5
Concluding that 3 seconds were spent inside the add function and 5 seconds inside the start function. However, this is a very slow algorithm, basically of order O(n^2), which takes far too long when analyzing code with millions of function calls, such as the heavily recursive Fibonacci code shown in the Project Euler blog post. I am now coming up with a faster algorithm for doing this, and I believe the best way is to somehow calculate the times inside the decorator functions while the code is running, as soon as timestamps are returned, instead of having to unwind the whole array at the end.
I am also now deciding what algorithms can be run on the graphs created to help the user e.g. maybe longest way through the network to show what paths need optimization and ideas like that.
|
OPCFW_CODE
|
Deflate64
SD-183, originally created on 7/21/2004 21:04:18 by John Reilly
Goes hand in hand with Zip64 pretty much; gives better compression than standard deflate.
Comment from David Pierson on 9/2/2010 02:00:49:
"Deflate64 is a proprietary and undocumented extension of the protocol"
Do we really want to do this in an open source project?
unzip on Ubuntu is able to extract an archive that uses Deflate64. Is open source really a barrier?
@Numpsy if you want to, sure go ahead! But no, I don't think this has been requested much and I have yet to find a Deflate64 file out in the wild.
I am closing this to keep the Issue count down, but I can open it again if there is further interest or activity.
But no, I don't think this has been requested much and I have yet to find a Deflate64 file out in the wild.
When the Windows OS creates an archive with a size of more than 2GB, it creates it with the Deflate64 method, and there are a great many archives in the world created with Deflate64. I'm a developer who is trying to find a library which can extract files from archives like this. I have checked 3 libraries and they can't extract files from Deflate64 archives.
System.IO.Compression is too slow, and requires .NET 4.6 to extract Deflate64 (but I have to use 4.5)
DotNetZip can't extract Deflate64 (but has merge request with needed code!)
SharpZipLib can't extract. (piksel closed this topic)
The reason for System.IO.Compression being 'slow' might be because (in older runtimes at least, I can't say offhand if .NET 5 is different), plain deflate used an optimized build of ZLib to do the decompression, where deflate64 used a managed implementation.
My thought (and from memory, the thought from adding support to DotNetZip) was to integrate the internal MS implementation, in which case the perf would be the same (if you're using .NET Core 3+ then that can be optimized with intrinsics, but that's another piece of work)
I believe (I haven't looked since I made that last comment from last year) that SharpCompress has previously integrated that same MS code, so maybe have a look at that?
Hey, checking whether there has been any movement regarding support for Deflate64 on the library. I am most interested in it supporting it on the InputStream path.
The reason I ask is the same @maximkotelnikov mentioned, Windows uses Deflate64 as default for large files, and it'd be nice to have this support.
No, not that I am aware of. But if windows creates files using deflate64 I guess this warrants keeping the issue open. That being said, that doesn't mean that it will be included anytime soon unless someone actually starts working on it.
I'd still hoped to have a go with the MS code at some point, but I haven't had any time to look at anything complicated recently :-(
Noticed that DotNetZip have implemented Deflate64 support when unzipping https://github.com/haf/DotNetZip.Semverd/pull/182 . Hopefully it won't be very difficult to port this to SharpZipLib. 🤔
Just want to add another voice to it: SharpZipLib works very well for my project. But because others can input any zip file they have, I sadly also came across the "Deflate64 not supported" issue. It would still be great if that would be added. If possible, it would also be nice if the exception "Compression method not supported" would have specifically stated "Deflate64 compression method not supported" to save time on understanding what goes wrong. Thanks!
|
GITHUB_ARCHIVE
|
SQL combine two columns if data is the same in result
I have the following SQL:
;
WITH CTE_Totals AS (
SELECT DISTINCT
CASE WHEN LoginName = Letter1SentBy
THEN LoginName
WHEN LoginName = Letter2SentBy
THEN LoginName END AS Logonuser,
sum(CASE WHEN Letter1SentDate = DATEADD(WEEK, DATEDIFF(DAY, 0, getdate()) / 7, 0)
THEN 1
ELSE 0 END) AS MondayL1,
SUM(CASE WHEN Letter2SentDate = DATEADD(WEEK, DATEDIFF(DAY, 0, getdate()) / 7, 0)
THEN 1
ELSE 0 END) AS MondayL2
FROM MainCase WITH (NOLOCK)
LEFT OUTER JOIN
letters_sent WITH (NOLOCK) ON MainCase.casekey = letters_sent.casekey
LEFT OUTER JOIN
users WITH (NOLOCK) ON MainCase.userCaseNo = users.userCaseNo
GROUP BY LoginName, Letter1SentBy, Letter2SentBy
)
SELECT DISTINCT
Logonuser,
sum(MondayL1 + MondayL2) AS total
FROM CTE_Totals
GROUP BY Logonuser, CTE_Totals.MondayL1, CTE_Totals.MondayL2
ORDER BY Logonuser ASC
What I am trying to achieve is this: originally I had 2 SQL queries, the first to sum all letter1's sent by a user on a particular day, the second to sum all letter2's sent by a user on a particular day. I want to combine both queries into 1, so I have the user name and the total of letter 1s and 2s sent.
The Maincase table contains data such as the LoginName.
The letters_sent table contains the fields Letter1SentBy, Letter1Senton (date/time field), Letter2SentBy, letter2senton (date/time field).
The user on the case can be different from the user who sent letter 1 and who sent letter 2; I want to find the total of letters 1 and 2 sent per user.
My issue is that the user sending the letter is stored in a different field. I have tried to combine this in my above query but I'm getting the following results:
Billy 1
Billy 6
Bob 5
Bob 2
If the person who sent letter 1 = the person who sent letter 2, I would like the results to show as
Billy 7
Bob 7
Edit your question and provide sample data and desired results.
Sorry, I don't quite understand - I thought I had given data and desired results?
. . You showed a complex query and some results. We have no idea what the base data looks like.
Your final part of the query should be like..
SELECT
Logonuser,
sum(MondayL1 + MondayL2) AS total
FROM CTE_Totals
GROUP BY Logonuser
ORDER BY Logonuser ASC
That should group by the Logonuser name and sum the overall value as total.
|
STACK_EXCHANGE
|
<?php
/**
* Created by PhpStorm.
* User: Marco
* Date: 2015-03-14
* Time: 12:09 AM
*/
class ChirpHandler implements IHandler {
private $CHIRPS = [
"hey %s, make like a tree and fuck off"
, "fuck %s"
, "You shouldn't play hide and seek %s, no one would look for you"
, "I hope %s falls down the stairs"
, "You best unfuck yourself %s, or I will unscrew your head and shit down your neck"
, "I may be a robot but I'll fuck your shit right the fuck up %s"
, "%s, you're about as fucked in the head as Gen!"
, "Does the tin man have a sheet metal cock? I don't know, does %s have cuts on their mouth?"
, "Keep talking %s, someday you'll say something intelligent."
, "I thought of you all day today %s. I was at the zoo."
, "I'll never forget the first time we met %s - although I'll keep trying."
, "Every girl has the right to be ugly, but %s abused the privilege."
, "Do you still love nature, despite what it did to %s?"
, "%s is so narrow minded when you walk your earrings knock together."
, "%s is lucky to be born beautiful, unlike me, who was born to be a big liar."
, "Before %s came along we were hungry. Now we are fed up."
, "Someone said that %s is not fit to sleep with pigs. I stuck up for the pigs."
];
const MARKYBOT_CHIRP = "markybot chirp ";
public function handle($input) {
// We're set to off right now, do nothing:
if ( !is_active() ) { return; }
// no text message posted, do nothing...
if ( !isset($input->text) ) { return; }
if (($index = stripos($input->text, self::MARKYBOT_CHIRP)) !== FALSE) {
$person_to_chirp = substr($input->text, $index + strlen(self::MARKYBOT_CHIRP));
if (($index = stripos($person_to_chirp, "marco")) !== FALSE) { // don't chirp marco
send("Fuck you, I love Marco");
send($this->chirp($input->name));
} else {
send($this->chirp($person_to_chirp));
}
}
}
private function chirp($name) {
return sprintf($this->CHIRPS[mt_rand(0, count($this->CHIRPS) - 1)], $name);
}
}
|
STACK_EDU
|
SQL batch-command does not return correct values for DELETE
As described here: https://github.com/orientechnologies/orientdb/wiki/SQL-batch, it should be possible to get all responses of the statements in a batch back as an array for further evaluation. This works well for Insert/Create, which suggests the feature is supported by OrientDB v2.0, which I am using. But it seems not to work for Delete statements.
It is not possible to execute the LET command in the OrientDB console, nor in the Web Studio (BTW: in this regard too, I am not sure whether it is an issue or LET is intentionally not supported in both tools?). Thus, I can only demonstrate the issue with a Node.js program. In the code, the line q1.commit().return(['$c1', '$c2', '$c3']) works well, contrary to the line q2.commit().return(['$d1', '$d2']) for deletion. For deletion, only a single value can be returned, with return('$d?'), but not multiple values.
var Oriento = require('oriento');
var db = Oriento({
host: 'localhost',
port: 2424,
user: 'root',
password: 'root'
, logger: {debug: console.log.bind(console)}
}).use('GratefulDeadConcerts');
var q1 = db.createQuery();
q1.let('c1', 'create vertex v set name = "4f"');
q1.let('c2', 'create vertex v set name = "1f"');
q1.let('c3', 'create vertex v set name = "5d"');
q1.commit().return(['$c1', '$c2', '$c3']);
var q2 = db.createQuery();
q2.let('d1', 'delete vertex v where name = "4f"');
q2.let('d2', 'delete vertex v where name = "5d"');
q2.commit().return(['$d1', '$d2']);
q1.all().then(function (res) {
console.log(res);
return q2.all();
}).then(function (res) {
console.log(res);
}).finally(function () {
db.server.close();
});
Yes, it doesn't return the correct value if you have the expand() function in any SELECT query, as in the example below. It returns an invalid result for user and payment, but not for account.
BEGIN
LET account = SELECT * FROM account WHERE email = <EMAIL_ADDRESS>
LET user = SELECT expand(in("own")) FROM account WHERE email = <EMAIL_ADDRESS>
LET payment = SELECT expand(out("own")[@class = payment]) FROM $user
COMMIT
RETURN [$account, $user, $payment]
[
{ '@type': 'd',
'@class': 'account',
email: <EMAIL_ADDRESS>,
password: '00a8a1d8571d5cd2bf0
username: 'ckgan2004',
language: 'en',
timezone: 'Asia/Kuala_Lumpur',
active: false,
online: false,
ctime: Sun Feb 22 2015 20:45:0
mtime: Sun Feb 22 2015 23:08:1
nickname: 'ckgan2004',
'@rid': '#30:82' },
{ '@rid': '#5:0' },
{ classId: -2, value: null }
]
Looking forward to anyone solving this issue as soon as possible.
The DELETE command, by default, returns the deleted record. If you need the deleted record, append "RETURN BEFORE" to the command.
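For a plain (non-vertex) DELETE, the RETURN BEFORE clause mentioned above would look roughly like this — a sketch against the thread's account class, not a command taken from the discussion:

```sql
-- Hypothetical example: ask for the deleted records themselves
-- rather than the default result of the DELETE.
DELETE FROM account WHERE name = "4f" RETURN BEFORE
```

Note that, as pointed out below, this does not carry over to DELETE VERTEX on the version discussed here.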
@lvca: I am sorry, I do not agree with you. Your response is inaccurate at least in the following two ways:
DELETE VERTEX does not support the RETURN clause, at least not at the moment. I am testing on v2.0.7.
If you say, DELETE command returns the deleted record by default, why is it necessary to add the RETURN BEFORE directive at all?
A plain DELETE would accept an extra RETURN clause, whether or not it is needed. But I want to delete a vertex. Please suggest a solution, thanks!
@mamobyz You're right: DELETE VERTEX doesn't support the RETURN keyword. I'm switching this to a new implementation.
Implemented.
|
Mist, the pioneer in self-learning wireless networks, announced the availability of the world’s first artificial intelligence (AI)-driven Virtual Network Assistant (VNA) for wireless operations and integrated helpdesk. Powered by Mist’s AI engine, Marvis, VNA is a new cloud-based micro-service that uses Natural Language Processing (NLP) to make it easy to query the Mist global cloud for real-time monitoring of mobile client activity. VNA uses data science to easily identify Wi-Fi issues, understand the impact of wireless problems, correlate events across the wireless/wired/mobile device/IoT domains, and auto alert on anomalies. VNA makes IT smarter and faster and ensures the best experience for wireless users.
“VNA is the next step in Mist’s journey towards building an intelligent AI-driven network that simplifies operations, lowers operational expenses and gives unprecedented insight into the wireless user experience,” said Bob Friday, CTO and co-founder at Mist. “We started with a robust distributed micro-services-based software architecture built on a cloud-based platform that collects and manages an enormous amount of data. On top of this, we implemented a patented methodology for organizing and classifying this data into domain specific service levels. With today’s announcement, Mist now delivers a VNA that can answer questions on par with a wireless domain expert.”
According to a recent Gartner report, “The complexity of the access layer has risen because there are fewer IT resources to manage the increasing requirement for wireless connectivity. Instead of just collecting information at the edge of the network, vendors are using machine learning algorithms to automate discovery, management, troubleshooting and resolution at the access layer.”
In another report Gartner states, “In the three- to five-year horizon, AI solutions will be well-positioned to supersede dedicated network administration resources in the majority of areas concerning the fine-tuning of every aspect of the intelligent access layer network.”
Natural Language Processing puts a face on Marvis
VNA brings NLP to network operations so IT staff can easily understand their network and client environment without having to manually sift through a myriad of data in numerous locations. Types of queries include:
- Why is Bob’s smartphone having a problem?
- Were there any anomalies between 7 a.m. and 9 a.m. on the main campus?
- List the three sites with lowest performance.
- How many clients are on the guest network?
AI-driven operations simplify wireless troubleshooting and provide unprecedented insight for integrated helpdesk
Mist’s AI engine, Marvis, uses machine learning to make IT smarter, solve network issues faster (hours to minutes), and make helpdesk personnel more efficient at problem resolution. This enables VNA to perform unique troubleshooting and helpdesk functions like anomaly detection, event correlation and confidence ratings to rapidly solve (or avoid) the following types of wired, wireless and device problems:
- DHCP (duplicate addresses, server down, …)
- RADIUS (wrong user name, expired certs, …)
- WAN (packet loss, intermittent dropping, …)
- WLAN (interference, coverage, roaming, …)
- Security (Pre-shared key typed incorrectly)
With VNA (powered by Marvis), IT becomes proactive and gets smarter over time, so mobile users always get an amazing Wi-Fi experience.
The Mist Virtual Network Assistant is available now for limited release and will be generally available in March 2018.
|
Azati carefully studies the existing ecosystem to fully understand the infrastructure, its features and the potential to provide more accurate recommendations and instructions for further actions.
Azati helps companies transfer applications to high-performance cloud platforms that can improve efficiency and cut down infrastructure maintenance costs within the enterprise.
Iteration & Support
Development Operations is a continuous process that requires constant involvement. Azati optimizes the way engineers ship new versions to the customers to improve the key performance indicators.
Our DevOps Process Flow
DevOps lifecycle is all about driving production by bridging the gap between development and operations through continuous integration, deployment, delivery and feedback.
Continuous Integration (CI) allows the business to automate software development and application testing in a shared repository. New commits are isolated, collected, and tested before they are merged into a master branch.
With continuous integration, it becomes easy to spot the majority of errors and eliminate critical bugs as quickly as possible. Continuous integration minimizes the bug fixing costs and provides the constant availability of a stable version for public demonstration.
After new features are pushed to the repository, yet another version can be automatically deployed to the staging server, pass some tests there, and get prepared to manual roll out on the production server.
Continuous Delivery involves automatic code deployment to the staging server. This operation can be carried out manually or automatically.
The team does not need to prepare minor releases manually, as the process is fully automated. It helps the developers focus on the creation of gorgeous and handy products, and stop perceiving releases as something scary.
Continuous Deployment is very similar to Continuous Delivery with the only difference – after the new version is thoroughly tested and is considered as “stable”, it is automatically released to the production server after passing additional checks.
Continuous Deployment is an ideal solution for projects that are built with tiny iterations – where there are no huge releases, but features are rolled out on the go.
Following this approach, new features can reach customers in a matter of hours.
There are two especially important types of information concerning software: data on how customers use the application, and feedback from these customers about application performance and usability.
In short, continuous feedback is a mechanism by which a DevOps specialist receives ongoing feedback. It allows businesses to find out the weak sides of the product.
Interested parties take appropriate measures to improve the application according to replies and expand the capabilities of its users.
DevOps as a Service
DevOps engineers provide consulting and advisory services that include system evaluation, infrastructure analysis, plan development, and determination of the right toolset.
Azati offers the most suitable automation options based on an in-depth analysis of the infrastructure.
DevOps without automation is close to impossible, as one of its core principles is to rely on automation.
Automation of repetitive processes minimizes possible risks and improves productivity. Our experts use cutting-edge licensed tools and open source apps to enhance the quality of the service.
After completing all the necessary tasks, Azati helps customers adapt to the new DevOps processes and improve the existing software development workflow.
Engineers analyze how flexible the system is, how it copes with typical issues, and whether it suits current business and developer requests.
Business analysts and engineers go in-depth while researching the solution. Such an approach helps us gain first-hand experience and unique knowledge in various industries.
Azati often hires professionals with scientific and academic backgrounds to share their in-depth technical knowledge and bring a new vision to the development of new technologies.
If the existing application is not built in an optimal way, we recommend alternative solutions that may improve the solution without disrupting the existing application infrastructure.
Featured Case Studies
Custom system for engineering drawings digitization powered by artificial intelligence to extract data from on-paper maps, schemes, and other technical documents.
At Azati Labs, our business analysts helped our partner to build progressive web scraping platform for US-based real estate firm. The main idea of this solution was to generate a customer profile using the information extracted from various websites.
The engineers helped our partner to build a huge banking system for deposit operations handling and bank account processing.
|
I intend to build up this guide into a series of more complex “lessons” so eventually we can read packets as they are on the wire and you will be able to interpret what you are seeing without too much difficulty.
Time to start with the basics. In order to understand and read packets, we need to know the fundamentals. How do computers and networks communicate? Essentially in binary and hexadecimal: a series of zeros and ones, and the numbers 0 to 9 together with the letters A to F.
When I was taught this in school, some 20ish years ago, I honestly found it a bit complicated, but looking back it was only complicated due to the way it was taught. Hopefully this methodology is simple for you to understand.
Counting in binary is not too difficult, the values can only be a 0 or a 1, an off or on value. However what the off or on values represent is the important ‘bit’.
You essentially have 8 bits in a byte, which makes binary reasonably straightforward. For counting I find it best to create a quick table; this allows me to count visually, rather than attempting to work out everything in my head.
What the chart shows: we have the 8 bits represented across the top, the numbers 128 down to 1. The base 10 row is the representation of the value in decimal, the numbers we are familiar with. The values 128 to 1 are powers of 2.
We have the following
2^0 = 1
2^1 = 2
2^4 = 16
This is essentially doubling up each time, and this is how binary works; it is pretty straightforward.
In my example above, we have a value of 1 in the columns that represent 128, 64, 8 and 4. So all we do now is add these up.
128+64+8+4 = 204
128+32+8+4 = 172
For me this chart makes it easy, whenever I am required to convert binary into decimal, I always create the chart on a bit of scrap, fill in the relevant fields and add them up.
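The chart method can be sketched in a few lines of Python — my own quick illustration of the technique, not code from the original post:

```python
# Bit-position values for one byte, as in the chart: 128 down to 1.
BIT_VALUES = [128, 64, 32, 16, 8, 4, 2, 1]

def binary_to_decimal(bits: str) -> int:
    """Convert an 8-bit binary string to decimal by summing the column
    values wherever the bit is a 1, exactly as with the paper chart."""
    return sum(value for value, bit in zip(BIT_VALUES, bits) if bit == "1")

# The worked examples from the text:
print(binary_to_decimal("11001100"))  # 128+64+8+4 = 204
print(binary_to_decimal("10101100"))  # 128+32+8+4 = 172
```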
Now you can work out hexadecimal in a similar way, which I find is far easier for my poor brain to understand. When I was taught hex in school, I was taught to convert the hex to binary and then into decimal, which you can do; however it creates an extra step, which takes longer and gives you one more place to make a mistake.
0 to F
Seems complicated? Not really, it's as easy as binary.
16^0 = 1 = 2^0
16^1 = 16 = 2^4
16^2 = 256 = 2^8
16^3 = 4096 = 2^12
This simple hex chart covers what you need to know. It starts from 0 to 9; decimal 10 to 15 is represented with A to F.
So how do we calculate hex values? I will show you the same methodology that I use for binary conversion. A nice and simple chart.
So what does this mean exactly? Well we do a similar method to binary.
We have 0x20, which is how we represent hex; when you see a number in this format, it is telling you this is a hexadecimal value.
0x20 = (2 x 16) + (0 x 1) = 32 + 0 = 32 decimal
0x203 = (2 x 256) + (0 x 16) + (3 x 1) = 512 + 0 + 3 = 515 decimal
0x378 = (3 x 256) + (7 x 16) + (8 x 1) = 768 + 112 + 8 = 888 decimal
0xBAF = (11 x 256) + (10 x 16) + (15 x 1) = 2816 + 160 + 15 = 2991 decimal
So in the first example, we have the hex value of 20. These fill in the two columns on the right: 2 in the 16 column and 0 in the 1 column. To calculate we just multiply 2 by 16, so we have 32. 0 multiplied by 1 is 0, so the total value in decimal is 32.
The second example, we have 0x203, so using the same formula we have to multiply 2 by 256, multiply 0 by 16 and multiply 3 by 1, and we then just add these figures up giving us the total of 515 decimal.
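The same column-by-column method translates directly into Python — again my own sketch, not part of the original post:

```python
# Hex digits in order, so a digit's index is its decimal value.
HEX_DIGITS = "0123456789ABCDEF"

def hex_to_decimal(hex_str: str) -> int:
    """Convert a hex string (e.g. '203') to decimal. Each step multiplies
    the running total by 16 (moving one column left in the chart) and
    adds the next digit's value."""
    total = 0
    for digit in hex_str.upper():
        total = total * 16 + HEX_DIGITS.index(digit)
    return total

# The worked examples from the text:
print(hex_to_decimal("20"))   # 32
print(hex_to_decimal("203"))  # 515
print(hex_to_decimal("378"))  # 888
print(hex_to_decimal("BAF"))  # 2991
```

Python's built-in `int("BAF", 16)` does the same thing, but spelling it out mirrors the chart method described above.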
3 thoughts on “Hex and Binary”
You should slap it so anyone can comment not just registered users, otherwise there’ll be not many comments.
Chat soon, bro =D
Yeah I need to find something to stop the spam, I dont want all those bots to just fill comments with drug spam.
How are things with you? We need to catch up sometime!
Good! Busy! PhD’s a massive time vampire – but I did finally write a software library as one of my papers:
Took me like 18 months to get that thing working properly… just crazy.
Looks like things are going well for you – which is fantastic. Ping me an email some and we can chat =D
|
Using the Alternating Direction Method of Multipliers
The alternating direction method of multipliers (ADMM) has recently sparked interest as a flexible and efficient optimization tool for imaging inverse problems, namely deconvolution and reconstruction under non-smooth convex regularization. ADMM achieves state-of-the-art speed by adopting a divide and conquer strategy, wherein a hard problem is split into simpler, efficiently solvable sub-problems (e.g., using fast Fourier or wavelet transforms, or simple proximity operators). In deconvolution, one of these sub-problems involves a matrix inversion (i.e., solving a linear system), which can be done efficiently (in the discrete Fourier domain) if the observation operator is circulant, i.e., under periodic boundary conditions. This paper extends ADMM-based image deconvolution to the more realistic scenario of unknown boundary, where the observation operator is modeled as the composition of a convolution (with arbitrary boundary conditions) with a spatial mask that keeps only pixels that do not depend on the unknown boundary. The proposed approach also handles, at no extra cost, problems that combine the recovery of missing pixels (i.e., inpainting) with deconvolution. We show that the resulting algorithms inherit the convergence guarantees of ADMM, and we illustrate their performance on non-periodic deblurring (with and without inpainting of interior pixels) under total-variation and frame-based regularization.
The alternating direction method of multipliers (ADMM), originally proposed in the 1970s, emerged recently as a flexible and efficient tool for several imaging inverse problems, such as denoising, deblurring, inpainting, reconstruction, and motion segmentation, to mention only a few classical problems (for a comprehensive review, see ). ADMM-based approaches make use of variable splitting, which allows a straightforward treatment of various priors/regularizers, such as those based on frames or on total variation (TV), as well as the seamless inclusion of several types of constraints (e.g., positivity). ADMM is closely related to other techniques, namely the so-called Bregman and split Bregman methods and Douglas-Rachford splitting. Several ADMM-based algorithms for imaging inverse problems require, at each iteration, solving a linear system (equivalently, inverting a matrix).
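For reference, the generic scaled-form ADMM iteration underlying this discussion — the standard textbook form, not a formula specific to this paper — for a problem split as minimize $f(x) + g(z)$ subject to $Ax + Bz = c$, is:

```latex
\begin{aligned}
x^{k+1} &= \arg\min_{x}\; f(x) + \tfrac{\rho}{2}\,\bigl\|Ax + Bz^{k} - c + u^{k}\bigr\|_2^2,\\
z^{k+1} &= \arg\min_{z}\; g(z) + \tfrac{\rho}{2}\,\bigl\|Ax^{k+1} + Bz - c + u^{k}\bigr\|_2^2,\\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c.
\end{aligned}
```

When $f$ is quadratic (e.g., a least-squares data-fidelity term involving the convolution operator), the $x$-update is precisely the linear system / matrix inversion referred to in the text.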
This fact is simultaneously a blessing and a curse. On the one hand, the matrix to be inverted is related to the Hessian of the objective function, thus carrying second-order information; arguably, this fact justifies the excellent speed of these methods, which have been shown (see, e.g., ) to be considerably faster than the classical iterative shrinkage-thresholding (IST) algorithms and even than their accelerated versions. On the other hand, this inversion (due to its typically huge size) limits its applicability to problems where it can be efficiently computed (by exploiting some particular structure). For ADMM-based image deconvolution algorithms, this inversion can be efficiently carried out using the fast Fourier transform (FFT), if the convolution is cyclic/periodic (or assumed to be so), thus diagonal in the discrete Fourier domain. However, as explained next, periodicity is an unnatural assumption, inadequate for most real imaging problems. In deconvolution, the pixels located near the boundary of the observed image depend on pixels (of the unknown image) located outside of its domain. The typical way to formalize this issue is to adopt a so-called boundary condition (BC).
• The periodic BC (the use of which, in image deconvolution, dates back to the 1970s) assumes a periodic convolution; its matrix representation is circulant, diagonalized by the DFT, which can be implemented via the FFT. This computational convenience makes it, arguably, the most commonly adopted BC.
• The zero BC assumes that all the external pixels have zero value, thus the matrix representing the convolution is block-Toeplitz, with Toeplitz blocks. By analogy with the BC for ordinary or partial differential equations that assumes fixed values at the domain boundary, this is commonly referred to as the Dirichlet BC.
• In the reflexive and anti-reflexive BCs, the pixels outside the image domain are a reflection of those near the boundary, using even or odd symmetry, respectively. In the reflexive BC, the discrete derivative at the boundary is zero; thus, by analogy with the BC for ordinary or partial differential equations that assumes fixed values of the derivative at the boundary, the reflexive BC is often referred to as Neumann BC.
Illustration of the (unnatural) assumptions underlying the periodic, reflexive, and zero boundary conditions.
|
In the article “Building a Fast One-Shot Recon Script for Bug Bounty” by ProjectDiscovery you can find a great guide on developing a script for a bug hunter or pentester. The author did an excellent job. Let’s say many thanks to @pry0cc. However, this guide requires working with many utilities. Let’s see how to rewrite it point by point in order to understand whether it is possible to get by with Netlas.io.
So we will be working with the Netlas CLI tool and web application. To install the CLI tool use “pip install netlas” (Python 3 should be installed). You will also need an API key, which can be found in your Netlas.io user profile.
Have you already chosen an organization that will pay you for the vulnerabilities found?
1. Root domains
The first step is to find as many root domains as possible. This is perfectly done with Netlas.io Domain Whois Search tool. Request the domain of your interest with the first request and find the Registrant data as it is shown below.
This search result gives us an organization name and some other properties, that we can use to build further search queries. Let’s search domains registered to the same organization name with the next query:
Wow! More than 3 thousand domains were found. Nice result! Sometimes it is possible to use the “registrant.email” field instead.
And this is how these queries will look on the command line:
Let’s take a look at the command line queries in more detail. The key “-d whois-domain” here means the source from which the data will be obtained, in this case the Netlas.io Domain Whois Search tool. The option “-i registrant” removes all fields from the output except the “Registrant” section. The download command gives results as a stream (without pagination). The option “-c NUMBER” indicates the number of results to download. Use “netlas count” to get the exact count of results. And finally the “jq” command helps us format the JSON output into the list of domains we need. In later queries these details will be omitted, but their meaning remains the same.
Finally, it is possible to append a list of domains and subdomains using Netlas Certificates Search tool as it is shown in the next screenshot.
Most of these domains should be already listed. But since this search is using the organization name, there is a chance to find additional root domains and subdomains.
Be careful at this step, because the certificate names section usually contains wildcard domains. Wildcard domains are not a problem when you work with the Netlas DNS Search tool — there are none there. But a significant part of SSL certificates are issued to wildcard domains, so we have to filter them out with the “sed” command.
2. Bonus step: IP ranges
This step is not presented in the original article, but I believe it’s necessary, especially for large companies.
For example, an IP-address on the screenshot is related to our company of interest. What is more interesting this IP address is included in IP range, which is definitely operated by our company of interest. So we use IP Whois Search tool to find networks related to our target.
It is recommended to do Steps 1 and 2 without automation. Unfortunately, whois databases are often filled incorrectly or incompletely, and with automation at these steps it is easy to make a mistake. We assume that the selected IP ranges are saved in “target_ip_ranges.txt” in the form of CIDRs (e.g. 188.8.131.52/24 or 184.108.40.206/29).
So at this point, there should be three files:
You can create them using CLI as it is shown above, or you can search and download these lists from Netlas web application, which is sometimes a more convenient way. These files will be used as input for our recon script.
3. Subdomain enumeration
So we have an initial list of targets. Let’s start with subdomains. A lookup like
in the DNS Search tool will instantly give us the results we need. Another way is to use regex. This is a bit more flexible. For example we can query a root domain and subdomains with one simple search:
You can find explicit information and examples of regex usage in Netlas help pages. The “a:*” means to get only records where at least one A-record exists.
Saved CIDRs list is another option. We can do forward DNS transforms for the whole subnet using commands like:
You can do the same using the command line. By the way, you can use IP addresses instead of CIDRs. Netlas will behave in the same way.
Let’s put all these requests together in one script and run it:
user@host:~$ ./netlas_domains_and_ip_recon.sh target_root_domains.txt
I got almost 1.5K IP addresses and 4.5K domains and subdomains when I tested this script. About 15K Netlas coins were spent and about 10K requests were sent. The target's initial data set is quite voluminous, so it took a lot of requests.
4. Downloading HTTP responses
In the original article, the author ends the process of gathering data by downloading index pages from HTTP servers hosted on the attack surface of interest. The Netlas.io Responses Search tool is what we need to achieve this.
You can download index pages using the “host:” filter, the “ip:” filter, and the “prot7:” filter. A query like “ip:target_ip” is used to address all services on target_ip. A query like “host:target_domain” is used to address specific web services. The “Prot7” filter is used to filter responses by OSI layer 7 protocols, like HTTP, FTP, SSH, and so on.
Sometimes Netlas returns not only index pages. This happens because of 301 and 302 redirects. So if you want only root pages use the ‘path:”/”’ filter.
So, we have received several services of interest to us on the http/https protocols. What’s next? It’s just json/yaml parsing. It already contains the body of the sites being studied, so we can easily get the necessary information from there. Moreover, in the event that we are looking for a specific title/favicon/text, this item can be combined with the fifth item as an addition to the query. For example:
Here is the script I used to collect html pages from Netlas.
If you execute it, passing a file with domains or IPs as an argument, you will get a set of HTML files in a response subfolder, named like this:
A file's name consists of a domain/IP, a port number, a path (with all “/” replaced by “_”), and an HTTP status code after an exclamation mark.
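That naming scheme can be sketched in Python. This is my own illustration of the described convention — the helper name and the exact separators between the components are assumptions, not taken from the Netlas scripts:

```python
def response_filename(host: str, port: int, path: str, status: int) -> str:
    """Build a response file name from a host/IP, port, path and HTTP
    status, per the described scheme: '/' in the path becomes '_', and
    the status code follows an exclamation mark. Separators between the
    components are assumed."""
    safe_path = path.replace("/", "_")
    return f"{host}_{port}{safe_path}!{status}.html"

print(response_filename("example.com", 443, "/", 200))
```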
Further points of the original article are about the processing of the results: their tokenization, sorting, and parsing. All those questions are out of the scope of this short note.
So we've seen some of what Netlas.io is capable of by looking at the most basic commands, which can make scripting much easier for pentesters and bug hunters.
You can find the described scripts in the Netlas.io Github repo. Give this article some likes and we will make some improvements, or maybe rewrite them as full-fledged Python scripts.
|
I have written about corpora, concordancing, and DDL on this site before. Last year, my colleague and I completed a semester-long quantitative research project and co-wrote a paper on using DDL in the classroom (which has now been rejected three times!). I used to be a big fan of teaching students how to use these tools as an alternative reference and learning resource. However, due to lack of patience with computer illiterate “digital natives“, heaps of incomprehensible input that is difficult for learners to parse, and the paucity of the linguistic sixth sense among students, this kind of practice fell out of favor with me. Then, I stumbled upon Cynthia Quinn’s (2014) article in ELT Journal, and now the interest has been slightly rekindled. A snowball effect took place after reading this article, and I was happy to find a number of new corpus tools and active corpus linguistics websites. I’m not sure what effect this will have on my teaching, but I do present to you the latest Research Bites.
Quinn, C. (2014). Training L2 writers to reference corpora as a self-correction tool. ELT Journal. [$link]
Introduction and Findings
Quinn’s article outlines how she introduced the Collins Wordbanks Online corpus to her Japanese EFL university students in order to help them self-correct teacher coded errors on their essays. She discovered that most students found the corpora useful, especially for easily identifiable preposition, word form, and article errors – but not so much for more lexical (as opposed to lexicogrammatical) items like poor word choice. She also found students enjoyed finding more natural and varied language patterns with which they could express themselves. However, as is typical with DDL, students often found the interfaces, search queries, and data difficult to wade through. Nevertheless, she found that “corpus referencing was a positive experience for the majority of learners who agreed that it could improve their written expression”. Because of this, it remains a worthwhile tool to introduce, if not for its effectiveness, then at the very least, for its ability to supplement or supplant dictionaries, thesauruses, and translation tools.
There is a time investment and learning curve to doing DDL, and Quinn’s article explained how she scaffolded concordancing to address these issues. Here is what she did (note: my outline below does not necessarily represent the way her introduction was organized in the article):
For the first five 90-minute classes (about half of each class spent on DDL):
- Introducing corpora
- Introducing students to the concept of a corpus
- Showing the types of rich data that can be gleaned from a corpus
- Justifying corpus use
- Comparing corpora to other resources
- Showing students how a corpus may be better than other resources in some situations
- This is especially useful, as students often need to be convinced to use such a tool
- Paper-based practice
- Numerous other researchers have pointed out that it is easier to make sense of concordance data if it is first presented on paper
- Students practiced essential DDL skills, learning:
- scanning for linguistic features
- identifying language patterns
- making “pragmatic generalizations” about the patterns
- Controlled practice where “question prompts guided learners to notice meaning and usage pattern”
- Controlled computer-based practice
- Before using the online corpus, students completed exercises to learn important vocabulary such as query, part of speech, lemma, token, etc.
- Students did in-class searches on terms from class readings
- Students investigated a single word for homework and reported the information they found.
- Students discussed these reports with classmates
After the first five class sessions:
- Controlled revision practice
- Independent practice
- After students wrote their essays and the teacher gave them feedback (content and language), students worked to correct their own errors using the corpus.
- Students kept a revision log to document what they had found, changed, and their experiences with DDL
What Quinn offers is a model way to introduce corpora usage to students. She presented it in a logical fashion which naturally led to learner uptake and clearly helped students. If anyone is taken with using DDL in their classrooms, I highly recommend the model Quinn used. But, as she said, there is a certain time investment (not to mention the need for a computer lab) that is involved. What this research report lacks is an empirical aspect which looks at not just learner feelings about using corpora, but actually tracks their effective employment of such a tool.
If going full blown concordancing scares you, as it should if you have ever played with COCA, there are a number of simpler corpus tools out there. Some that I use, either behind the scenes to create materials, or in-class with students are:
|
Angular 4 - Router redirectTo using variable segment
In the routing Routes below, I'm receiving the following error when I'm trying to fetch http://localhost:4200/
ERROR Error: Uncaught (in promise): Error: Cannot redirect to '/:userCountry/:userLanguage/home'. Cannot find ':userCountry'.
Error: Cannot redirect to '/:userCountry/:userLanguage/home'. Cannot find ':userCountry'.
at<EMAIL_ADDRESS>(router.es5.js:1784)
Every other routing path is working, it's only the redirectTo who seems to not substitute the value of variable userCountry (and userLanguage also).
Any help would be much appreciated.
const appRoutes: Routes = [
{ path: '', redirectTo: '/:userCountry/:userLanguage/home', pathMatch: 'full' },
{ path: ':userCountry/:userLanguage/home', component: HomeComponent },
{ path: ':userCountry/:userLanguage/about', loadChildren: './about/about.module#AboutModule' },
{ path: ':userCountry/:userLanguage/terms', loadChildren: './terms/terms.module#TermsModule' },
{ path: ':userCountry/:userLanguage/privacy', loadChildren: './privacy/privacy.module#PrivacyModule' },
{ path: '**', component: PageNotFoundComponent }
];
My variables already have values when the error happens.
Here is an extract from my console.log()
[AppRoutingModule.constructor]this.userLanguage=fr
[AppRoutingModule.constructor]this.userCountry=ca
...
ERROR Error: Uncaught (in promise): Error: Cannot redirect to '/:userCountry/:userLanguage/home'. Cannot find ':userCountry'
You need to pass default values for userCountry and userLanguage for route matching when it is in the '' route. You can do it like this:
const appRoutes: Routes = [
{ path: '', redirectTo: '/germany/ge/home', pathMatch: 'full' },
{ path: ':userCountry/:userLanguage/home', component: HomeComponent }
...
];
I'm using a geoip service to assign the userCountry variable ('us' for United States, 'ca' for Canada, and so on). I cannot hardcode the userCountry value because my app shows different content depending on which country the user is in. The same goes for userLanguage: I don't want to show English content to French users (or vice versa) by hardcoding the country/language for every user who comes to the homepage of my website, so I have to assign these values dynamically for my homepage.
So in this case you need to first load another component for the '' route, get the appropriate values from it, and then redirect the user to the HomeComponent.
Hi, thank you, this solved my problem. However, I don't understand why I have to use another component to do the redirect, when in this tutorial the author uses a variable directly in redirectTo: https://vsavkin.com/angular-router-understanding-redirects-2826177761fc
Because you have to detect the country/language first. For the '' route you redirect directly to another route that needs these variables to match. In that example I don't see any redirect with a given value.
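The accepted approach can be sketched like this (a hedged sketch; LocaleRedirectComponent and GeoipService are illustrative names, not from the thread). Since redirectTo cannot substitute route parameters, the '' route loads a tiny component that detects the locale and navigates imperatively; the Angular wiring is shown as comments and the runnable part is the segment-building helper:

```typescript
// Hypothetical Angular wiring (commented out, since it needs @angular/core):
//
// @Component({ template: '' })
// export class LocaleRedirectComponent implements OnInit {
//   constructor(private router: Router, private geo: GeoipService) {}
//   ngOnInit(): void {
//     const { country, language } = this.geo.detect();
//     this.router.navigate(buildHomeSegments(country, language));
//   }
// }
//
// Routes: { path: '', component: LocaleRedirectComponent, pathMatch: 'full' }

// Builds the segment array Router.navigate expects for the home route.
function buildHomeSegments(country: string, language: string): string[] {
  return ['/', country, language, 'home'];
}
```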
|
STACK_EXCHANGE
|
API pagination with external or centralised authorization
I am building a REST API which would power a front end as well as other 3rd party apps and hence I want it to be as "standard" as possible. Right now, I am trying to stick to HATEOAS. The only place I am struggling is pagination.
The authorization layer of our application is centralised. Multiple other apps use the centralised auth service and so my API needs to use the same. This gives rise to obvious problems in pagination, namely:
How to restrict the number of entries to a required number?
How to return entries of any valid page?
How to return the total number of pages?
Right now, I am using an ad-hoc solution that fetches all records from the database according to the API filters; the authorization layer then filters out the unauthorized records, and another layer (let's call it the "Pagination Layer") filters according to the page parameters.
This works for now because our dataset is relatively small, but I don't think it will scale well. What are my options?
P.S.
There are a few things I have thought about but have no idea how good of an idea they are:
The frontend can be switched to a lazy-loading mechanism so that returning exactly a certain number of entries is no longer mandatory. The frontend would take care of querying the next page if required. But this would give a bad experience to 3rd-party devs using the API.
The business layer gets only the number of records in the page, and the pagination layer decides if more queries are required to get more data. This looks like a bad idea in many ways, as it won't solve getting a particular page.
For an efficient solution you need to be able to put the authorization and pagination constraints into the database query, and have the proper indexes for those aspects. Anything else will potentially overfetch an enormous amount of data. How big an issue this is depends entirely on the scale and characteristics of your data.
Can you translate the information you get from the authorization layer into a filter on the database query? Ideally something like getting the information "User A can access Projects X,Y and Z" and translate that into filters on your query.
If you cannot do that and have to pass every single result to it to know whether it is visible, you will always have some pathologically slow scenarios. For example if you have 1 million items, and your current user is allowed to view 10 of them, you might have to push the entire million items through the authorization layer just to get 10 results. How big of a problem this is depends heavily on the specifics of your application.
If you cannot push all these concerns to the database, which I assume is the case from your description, I think something like your solution 2 is the only reasonable way to handle this. You essentially need an internal pagination layer that fetches a batch of results, passes them through the authorization layer, and provides them to the rest of your application. Your externally visible pagination layer then requests pages internally until it has enough results to fulfill the request.
This has the issue I mentioned above with potentially pathological queries in terms of performance, but I see no way to avoid that under these restrictions. There is also no fundamental issue with querying specific pages in this way; it's just expensive, as you also have to query all preceding pages. But that is a general issue with pagination unless you can use advanced methods like keyset pagination.
If it is possible, you can also simply avoid providing the option to query specific pages. So you'd only provide a "next" link in each paginated response. This gives you the largest flexibility in designing your pagination, but obviously restricts what the client can do.
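A minimal sketch of that internal pagination layer (all names are illustrative; fetchBatch and isAuthorized stand in for the database query and the centralized authorization service):

```typescript
// Sketch of an internal pagination layer: fetch batches from the database,
// filter them through the authorization layer, and stop once enough visible
// rows have been collected to serve the requested external page.
type Fetch<T> = (offset: number, limit: number) => T[];

function fetchAuthorizedPage<T>(
  fetchBatch: Fetch<T>,               // stand-in for the database query
  isAuthorized: (item: T) => boolean, // stand-in for the central auth service
  page: number,                       // 1-based external page number
  pageSize: number,
  batchSize = 100,
): T[] {
  const needed = page * pageSize;     // all earlier pages must be collected too
  const visible: T[] = [];
  let offset = 0;
  while (visible.length < needed) {
    const batch = fetchBatch(offset, batchSize);
    if (batch.length === 0) break;    // data source exhausted
    visible.push(...batch.filter(isAuthorized));
    offset += batch.length;
  }
  return visible.slice((page - 1) * pageSize, page * pageSize);
}
```

As the answer notes, requesting a high page number stays expensive because every preceding page must be materialized first.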
|
STACK_EXCHANGE
|
Blast Shower Is Really Good, Actually
No, really. Try this build, try it now.
Blast shower works extremely well with bands. By removing negative effects on the player, using the shower immediately recharges Kjaro’s/Runald’s/Singularity bands, as well as Safer Spaces.
With a few Fuel Cells, you can get tons of damage on demand, having a Soulbound Catalyst and Gesture of the Drowned makes it so that every attack that deals more than 400% damage will proc the bands, wiping the stage in the process.
This is a step-by-step guide with Command, but it's probably possible to build something similar without it.
1. Get the bands. This will be your main source of damage, so it’s worth it to get the bands as soon as possible.
1a. Get 400% damage source. If you’re playing MUL-T, Railgunner, or Artificer, you’re sorted right away. Otherwise, Shurikens should do the trick.
2. Stack Fuel Cells. To eventually have bands on every kill, you need at least 5 cells to reduce the cooldown below 4s. Stacking more would be even better, especially if you run Safer Spaces with it. At this point, you can manually grant yourself bands on attack, which is pretty strong already, but it gets a lot better.
3. Make it perpetual. Get the Gesture and the Soulbound Catalyst. Gesture will activate the shower as soon as it’s ready, and the Catalyst will charge it with every kill. With enough Runald’s and some Crowbars it should be easy to kill everything in one shot, immediately resetting the bands for the next one.
4. Make it chain. Going further, you should get items that can proc bands on their own. Here's the list of stuff that will work:
- AtG Missile: having just 2 will deal 600% base damage, which is enough to proc the bands. Throwing in some Clovers and/or a pocket ICBM will make wiping the level a lot easier
- Ceremonial Daggers: it will take 3 daggers to proc the bands. It's more than worth it to stack this many, so trade for them in the lunar store, go to the void, and/or stack a bunch of Shipping Request Forms.
- Little Disciple: it takes 2 to proc. Has minimal utility, but now you can just sprint past enemies to initiate the stage wipe.
5. Congratulations! You have officially won the videogame. Now go stack Gasoline till it wipes everything in a kilometer radius around you, and go kill another planet.
P.S. With this build, there isn't much sense in gathering on-hit effect items. Bleeds, stuns, bombs, scythes, and baubles are all basically irrelevant. This means you can't really heal off your enemies other than through Monster Tooth and Desk Plant, so it's probably worth it to grab some of those.
|
OPCFW_CODE
|
A couple of weeks ago a new case exploded around Azure virtual machines (Azure VMs), and on-premises machines as well, specifically Linux machines with Open Management Infrastructure on board. In detail, there are three Elevation of Privilege (EoP) vulnerabilities (CVE-2021-38645, CVE-2021-38649, CVE-2021-38648) and one unauthenticated Remote Code Execution (RCE) vulnerability (CVE-2021-38647).
Open Management Infrastructure (OMI) is an open-source Web-Based Enterprise Management (WBEM) implementation for managing Linux and UNIX systems. Several Azure Virtual Machine (VM) management extensions use this framework to orchestrate configuration management and log collection on Linux VMs.
Before panicking, know that there are three conditions that can lead to compromise:
- Ports 1270, 5986, or 5985 exposed publicly
- OMI agent lower than v1.6.8-1
- Using SCOM, Azure Automation or Azure Desired State Configuration
If none of these conditions are met, then you don’t have to do anything for your virtual machines.
In a nutshell, anyone with access to an endpoint running a vulnerable version (lower than v1.6.8-1) of the OMI agent can execute arbitrary commands over an HTTP request without an authorization header. The expected behavior would be a 401 Unauthorized response; instead, the user is able to execute commands with root privileges.
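As an illustration of the version threshold (a hypothetical helper, not from the advisory): a fleet compliance script only needs to compare the reported agent version against the patched v1.6.8-1 release.

```typescript
// Hypothetical compliance check: returns true when the reported OMI agent
// version is lower than the patched v1.6.8-1 release and needs updating.
function omiNeedsUpdate(version: string): boolean {
  // Normalize "1.6.8-1"-style versions into numeric components.
  const parse = (v: string) => v.replace('-', '.').split('.').map(Number);
  const patched = parse('1.6.8-1');
  const current = parse(version);
  for (let i = 0; i < patched.length; i++) {
    const a = current[i] ?? 0;
    const b = patched[i];
    if (a !== b) return a < b;
  }
  return false; // exactly the patched version
}
```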
To defend yourself against this, it is necessary to respect a series of rules:
- Update the OMI agent
- Update SCOM Management Pack
- Close any unnecessary ports
- Use the Network Security Groups
- Use Azure Defender and Azure Security Center to check machine compliance
- Use Azure Sentinel to check for machine compromise
Regarding the last point, the security team has published a series of queries and hunting rules to understand if your machine has been attacked or not – Hunting for OMI Vulnerability Exploitation with Azure Sentinel – Microsoft Tech Community.
Obviously, to execute the queries in detail, the Log Analytics agent must be present inside the machine and the logs must be captured.
More information about the problem can be found in this article – Additional Guidance Regarding OMI Vulnerabilities within Azure VM Management Extensions – Microsoft Security Response Center.
For more great blogs click here.
I’m founder and CEO at Inside Technologies, a company focused on driving organizations into the future thanks to the power of Information Technology. Passionate about cultures that foster innovation and collaboration, I drive companies to a fast turnaround of value to increase ROI. My motto is: “There’s no more difference between small and large companies. Everyone needs to be available every day of the year!”
My experience includes leading and managing processes and operations for different kinds of projects. I have provided IT services for multiple organizations and transformed operational processes. As a speaker and author, I collaborate side-by-side with the most important IT companies, like Microsoft, Veeam, Parallels, Netwrix, and 5nine, to provide technical sessions, videos, and articles for technical users.
As a member of Inside Technologies, I collaborate heavily with the most important software houses in several different programs, such as Microsoft Azure Advisor and various preview programs like Windows Admin Center.
I really believe in knowledge sharing, which is why I have been Community Lead of WindowServer.it since 2006, a speaker at public conferences, and an organizer of Server Infrastructure Days (SID), one of the most important conferences for IT Pro business in Italy.
Since 2012 I have been a Microsoft MVP for Cloud and Datacenter Management, and a Very Important Parallels Person since 2016.
Di Benedetto, S. (2021). OMIGOD: Vulnerabilities within Azure VM Management Extensions. Available at: https://www.silviodibenedetto.com/omigod-a-vulnerabilities-within-azure-vm-management-extensions/
|
OPCFW_CODE
|
Multiple RSS feeds
It would be very useful if jQuery-RSS supported multiple RSS sources, e.g. $.rss(["http://feed1.com/rss", "https://feed2.org/rss/feed.xml"], {...}).
It would also be very useful if jQuery-RSS supported pagination.
Zundrium
It's not working with multiple RSS sources...
@jayanta119 it seems to work for me. See http://jsfiddle.net/jhfrench/ffmypq9r/2/
Thanks for sending me an example, it's working @jhfrench
@jhfrench If you'd send me a PR I could potentially add this functionality to the lib.
@sdepold : I will--but I can't take credit for it. It's @jayanta119's code, even if I lint it and submit it for him.
Ah, I see :) Anyways, a pull request would be welcome
@jhfrench : Go ahead mate! It's all yours :+1:
I have incorporated Zundrium's enhancements to support multiple RSS sources, but cannot test due to "moment" dependency introduced by Ross Dallaire Jan 5, 2015; see JS console error at http://jsfiddle.net/jhfrench/ffmypq9r/3/
Included moment library to get past that dependency, now getting "Uncaught TypeError: Cannot read property 'onData' of undefined". See http://jsfiddle.net/jhfrench/ffmypq9r/4/
Got past that error. I'm sure you're happy I'm clogging up your issue feed with my blow-by-blow.
Meanwhile, seems to be working: http://jsfiddle.net/jhfrench/ffmypq9r/5/
For what other use cases should I test it?
@sdepold do you have a standard set of use-cases I can use to regression test?
Not really. There are tests in the project that should pass. Other than that I don't have any specific use cases in mind.
Ok...so the next step is for me to issue a PR?
Yessir
Hi guys!
I need this feature too... when do you think you'll be able to release it (via bower)?
Thanks again!
The ability to demand-load additional items becomes much more important once you load multiple feeds at once. If you want a very easy way to merge your feeds, just use http://www.rssmix.com/
This will create a custom mash-up of the feeds you select and automatically does its own caching and such. Very cool and easy. Limit 1 request per second!
However, like most RSS feeds, I would suggest that a demand-load system only fetch the RSS.XML once and then adjust the output HTML accordingly (see my comment on the offsetStart/End bug). If you constantly fetch the xml file then you may be cut off from sources that don't want you slamming the server every time someone slides a scroll wheel on your site! Which is likely what would happen if you used offsetStart/End to demand-load pieces of a feed and then concatenated them with jQuery after the load.
This just landed in vanilla-rss: https://github.com/sdepold/vanilla-rss#multiple-feed-urls
Will update jquery-rss later today.
Just released v4.2.0 of jquery-rss and 1.3.0 of vanilla-rss that support multiple feeds
v4.3.0 is now also supporting ordering of the feeds in case you need this:
$("#rss-feeds").rss([
"https://www.contentful.com/blog/feed.xml",
"http://www.ebaytechblog.com/feed/"
], {
order: 'publishedDate'
})
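Under the hood, merging and ordering can be pictured roughly like this (a sketch, not the library's actual implementation; the Entry shape and the sort direction are assumptions):

```typescript
// Rough sketch of combining entries from multiple feeds, newest first when
// ordering by published date (the direction is an assumption).
interface Entry {
  title: string;
  publishedDate: string; // ISO date string
}

function mergeFeeds(feeds: Entry[][], order?: string): Entry[] {
  const merged = feeds.flat();
  if (order === 'publishedDate') {
    merged.sort(
      (a, b) => new Date(b.publishedDate).getTime() - new Date(a.publishedDate).getTime(),
    );
  }
  return merged;
}
```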
|
GITHUB_ARCHIVE
|
Anticipated Deployment Dates
ESR Staging: September 4, 2019
ESR Production Release: October 2, 2019 - December 1, 2019
Please note that comments on this thread are not formally tracked. For help requests and issue reports please open a separate conversation or reach out via the Looker Help Center to start a specific conversation.
In addition to general tweaks and enhancements, this release comes with new and improved features in the following categories. Read on for more detail.
- IDE Folders available for content organization in a project
- Help page for internal resources
- New user email configuration option
- Session settings now offer an inactivity logout capability
- Localization of number formatting
- Improved the rendering of inline visualizations in schedules
- New embed events
- Table-Next now has inline visualizations and global text formatting
- Alerts support custom titles, subscriptions, and table calculations
Preparing for Release
Please take notice of flagged items, as they indicate changes to existing functionality and may require your attention. For more information see the Legacy Feature Updates and Features by Section below.
- Completely removed the “Legacy Rendering” Legacy Feature
- Deprecated support for Microsoft SQL Server 2005, XtremeData, and partially deprecated Aster
- Changed connection string formats for Oracle ADWC to use the TNS alias as the host name
- Upgraded drivers for several database dialects
- Run Schedule as Recipient is out of Labs
- Easy to Read Email Images is out of Labs
System Configuration Notices
Self-hosted customers should take note of flagged items, as they indicate changes to system configurations that may impact your ability to run the new release.
Beta and Experimental Labs Features
The following new and improved experimental and beta features are flagged:
IDE Folders (Beta)
Allows significantly better organization in LookML projects. The feature is meaningful to Git, and therefore changes require a commit/merge. In addition, include statements need to capture the file’s path. Opt in via project settings for existing projects! Learn more
Internal Help Resources
Admins will now have the ability to use a markdown file to configure a list of Looker help resources that are available at their company. Once configured, users will be able to access that list from the Help dropdown menu in the platform. Learn more.
Content Curation Beta (Boards)
Available in the Labs section of Looker, this solution allows people to easily organize dashboards and Looks for a team or initiative and provide guidance with markdown links. This solution works in tandem with folders (Spaces). It allows users to organize content for a temporary or permanent team or initiative without moving the underlying content, which is stored in folders. Learn more
Alerts Labs Beta
Create alerts directly from a dashboard tile. Ability to set up threshold-based conditions (greater than, less than, changes by, increases, etc) and receive notifications via email when a condition is met. Learn more
Features by Section
Dashboards, Visualizations, and Explore
Table-Next (Labs Beta).
- Ability to control global text formatting. Including row font size, header font size, header text color, header background color, and header alignment. Learn more.
- Ability to wrap or truncate text. Learn more.
- Conditional formatting. Learn more.
- Ability to transpose tables. Learn more.
- Ability to display visualizations within the cells of a table. Learn more.
- Alerts (Labs Beta).
- Completely removed the “Legacy Rendering” Legacy Feature.
Platform and Administration
Localization of Number Formatting. Ability to implement number formatting by setting the user attribute number_format of a user to one of the available number formats. The formatting is not respected on the x-axis of some visualizations. Learn more.
- Added explore:ready events to indicate when both the explore and query have loaded. Learn more.
- Added a status result to dashboard embed events to indicate whether a tile produces an error.
- Dependencies JAR. Certain Java class files are now distributed separately; resulting in two distinct JAR files that will need to be downloaded and installed to update your Looker instances. Learn more.
Content curation Improvements (Labs beta).
- Ability to view, scroll and sort all boards within a user’s organization.
- User personal folders now appear both in the Browse menu and on the left sidebar.
- Folders previously set as a ‘default folder’ will continue to appear in the left sidebar.
- Developers can set LookML dashboards to appear in the Browse menu and left sidebar.
- New user email configuration option: Allows admins to customize the body content via HTML of the welcome email new users receive to activate their Looker account. Learn more.
- Inactivity Logout Session Setting. Session settings can be modified to force users to be logged out of a session after 15 minutes of inactivity. Activity is defined as a user clicking anywhere in Looker or touching the screen in the case of touchscreens, or typing anything into Looker. Learn more
- Timeout session. The session timeout warning will now be displayed two minutes before timing out.
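The new embed events above are delivered to the host page as JSON messages posted from the Looker iframe; a host-side handler might look roughly like this (the dashboard:tile:complete event name and any payload fields beyond type/status are assumptions for illustration):

```typescript
// Hedged sketch of a host page reacting to Looker embed event messages.
function handleLookerEmbedMessage(raw: string): string | null {
  const event = JSON.parse(raw);
  switch (event.type) {
    case 'explore:ready':
      // fired when both the explore and the query have loaded
      return 'explore and query have loaded';
    case 'dashboard:tile:complete':
      // the new status result indicates whether the tile produced an error
      return event.status === 'error' ? 'tile failed' : 'tile ok';
    default:
      return null;
  }
}
```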
LookML and Development
Render Liquid HTML in the description parameter. Developers have the ability to implement Liquid HTML in the description parameter of fields. Learn more.
Importing remote projects is no longer in Labs.
- The generalized “Project Import” feature is no longer in Labs.
- Importing locally maintained projects is still in Labs under “Local Project Import”.
- Introduced a method to preserve number formatting, regardless of what locale-based settings a user may have. Learn more.
Scheduling and Downloads
- Run Schedule as Recipient is out of Labs Beta. This feature is now fully available on all instances.
- Easy to Read Email Images is out of Labs Beta. This feature is now fully available on all instances.
- Snowflake. Added support for External Table database objects to be scanned for database metadata information. This will surface external tables in SQL Runner and in LookML View generation.
- Qubole Quantum. The Qubole Presto Service dialect has been rebranded and is now available as a separate dialect in the dropdown menu as Qubole Quantum.
- Aster Data. Partially deprecated Aster Data. It will no longer be selectable as a dialect in the Dialect dropdown in the Connection panel. Existing Aster Data connections will continue to work.
- Microsoft SQL Server 2005. Removed support for MS SQL 2005.
- XtremeData. Removed support for XtremeData.
- Qubole Presto and Qubole Quantum. Upgraded the JDBC driver to version 2.0.2 to fix intermittent NullPointerException errors. Learn more.
- Oracle and Oracle ADWC. Upgraded the Oracle JDBC Thin Driver to version 18.3.
Only Oracle ADWC. Changed JDBC connection string format for Oracle ADWC connections to use the TNS alias as the hostname. In addition, the TNS_ADMIN JDBC parameter is used to connect with Oracle Wallets.
- Druid. Updated the JDBC driver to version 1.15.0.
- Upgraded Snowflake driver (v3.8.4). Addresses Snowflake issues caused by prior versions of the driver. Learn more
Upgraded Athena driver (v2.0.7). Allows defining a Workgroup=WorkgroupName parameter in the Additional JDBC Parameters field of a database connection in order to utilize AWS Workgroups.
General Tweaks and Bug Fixes
- Fixed an issue where a Look or dashboard tile’s default filter values could be applied instead of a send or schedule’s filter values. This could only occur if the filter values came from a view that has v_ as the first two characters in its name.
- Fixed an issue that resulted in User Attribute values set to hidden being printed to server logs.
Scheduling and Downloads
- All formatting types in streamed results were being ignored. Formats will now be respected. Note: JSON detail format will continue to ignore formats, as that is its expected behavior.
- Fixed a scheduler issue that prevented users from running a specific look both when in development mode and when that user did not have development permission to the model the content was based on.
- Fixed an issue where schedules failed when a user was in development mode and did not have developer access to the model, even if data should return outside of development mode.
- Fixed an issue where the schedule dispatcher would not run on the master node if it had scheduler threads set to zero, even if other nodes had scheduler threads.
- Fixed an issue where Looker was starting with a default of three unlimited execute scheduler threads even though this should be one.
- Looker now retries schedules and actions if one fails due to multiple events firing simultaneously.
- Addressed an issue that was causing Look and dashboard rendering to fail.
- Improved rendering of tables with more than 100 rows.
- Improved the rendering of inline visualizations in schedules.
Dashboards, Visualizations, and Explore
- Addressed the x-axis not respecting number localization.
- Addressed number formats mismatching between different viz types.
- Addressed an issue that was resulting in drills displaying no results when null values were set to a type of string.
LookML and Development
- Addressed an issue where value_format parameters were not being respected in the field being extended; it was taking the parameter definition from the view the field was extended from.
- Fixed an issue where using non-alphanumeric characters when configuring a new model led to a 500 error.
- Modified how the datatype parameter of type date interacted with a measure within the project IDE.
- Addressed the inconsistent alignment of git metadata displayed in the project settings page.
- Addressed LookML Validation Error
Uncaught TypeError: Cannot read property '1' of null
- Addressed an issue in which a persisted explore with the parameter sql_always_where defined using user attributes was pulling incorrect data from the cache.
Platform and Administration
- Added clarifying text to the Sessions admin panel. No behaviors changed.
- Addressed an issue where editing the user attribute for a group did not allow entering a string; instead, it only displayed a dropdown of options.
- Improved the ability to filter out Looker Employees from the User Activity dashboard under System Activity.
- Improved the tracking of user and sudo details in events.
- Addressed an issue where a user could bypass the account lockout by using a capital letter in the email.
- Addressed an issue where the all_lookml_models() and lookml_model(lookml_model_name) API endpoints would return an empty array with explores that required access grants.
- Google Cloud Spanner. Fixed an issue where the JDBC driver hit a Java instantiation exception.
- Netezza. SQL Runner will now populate database objects from INFORMATION_SCHEMA.VIEWS in addition to INFORMATION_SCHEMA.TABLES in the Schema and Tables sidebar.
- Snowflake. Fixed an issue where the SQL Runner Describe command did not scope Snowflake tables to the schema, causing describe to fail for tables outside of the default schema.
- Addressed a Snowflake JDBC driver version 3.8.3 issue that would return a NullPointerException message when a queried result set was larger than could be held in one result chunk. Queries returned a blank error message to Looker.
- Fixed an issue where updating a Look filter via an iFrame message failed with an error response.
- Fixed an issue where failed schedules mentioned Looker despite settings being set to not reference Looker.
- Fixed an issue where the whitelabel favicon was displaying the Looker icon when downloading a PNG in the browser.
- Numeric filters can now be set on Looks via the embed messaging in an iframe.
- Fixed an issue where clicking “Explore from Here” when editing an embedded dashboard did not open in a new tab.
|
OPCFW_CODE
|
Preferences says Facebook Container "requires container tabs", but Facebook Container seems to be working
General Preferences seems to believe that I don't have container tabs enabled. It says that Facebook Container "requires container tabs" and suggests I disable it. (See first image). However, the add-on seems to be working - the tab has a black underline (see second image)
I only noticed this message today, so I have no idea how long it's been around. I've tried many things, including
- making sure Firefox is up-to-date - restarting Firefox (many times, including in Safe Mode) - removing and reinstalling Facebook Container - disabling all add-ons except Facebook Container - closing and re-opening Preferences after each and every change
I even installed a system update and re-started my computer.
I'd appreciate all and any help.
Add a column to the bookmark library that indicates the folder where the bookmark is located. For example, in the screen shot (bookmark1), for the last bookmark, have a column that will show the location (NOT URL) of which folder(s) that bookmark is in. Based on the second screenshot (bookmark2), the folder is 'Home\Time Wasters\'. 'Home\Time Wasters\' would be in the 'Bookmark Folder' column, that would need to be added. See bookmark3.
Oracle Linux 7, FF 68.2 ESR, recent update to Flash 126.96.36.1993, I check in Addons that newest Flash is there (had issues a few weeks ago where I had to remove pluginreg.dat as 188.8.131.520 wouldn't get noticed and FF thought it was 184.108.40.206 still, complaining I had an out-of-date version), and surprise - both Flash and IcedTea aren't there.
So I confidently close FF, remove pluginreg.dat, fire FF up again - but no dice.
Check contents of pluginreg.dat, and here they are:
Generated File. Do not edit.
[HEADER] Version:0.19t:$ Arch:x86_64-gcc3:$
Funny, so I try downgrading Flash via YUM, which can't be done as Adobe promptly removes old versions from its repos. OK, so I just YUM erase Flash, only to achieve an INVALID IcedTea plugin. What?
Did I perhaps use FF 68.2 for 2 weeks without noticing the issue? Well, if that's the case, a downgrade to 68.1 will fix things, right? So I re-add the latest Flash plugin via YUM, but alas 68.1 still thinks that Flash and IcedTea are INVALID.
Mumble. Well let me get the big hammer and further downgrade to FF 60.9, remove pluginreg.dat, fire FF up again and... oh. Still INVALID. Now I'm 100% sure there's something weird - but what is it?
Long story short, even creating an entirely new UNIX user and firing up FF which obviously created a new profile from scratch, it gets the same pluginreg.dat right off the bat with Flash and IcedTea plugins marked as INVALID.
So, what are the further debugging steps here as I guess I'm out of options?
I know this has been asked before, and solutions included clearing browser cookies, disabling add-ons, resetting the browser, and copying URLs and pasting them into the address bar. The problem is Firefox, not corrupted files. I've completely uninstalled Firefox and deleted all Firefox data, profiles, user folders, etc. from my computer. Then I used a freshly downloaded installer to reinstall Firefox, with a completely clean profile and no add-ons. I still get this error. I've even tried this with Firefox ESR (Extended Support Release). This issue needs to be solved, Mozilla! You need to learn to play nice. I hate having to dump my favorite browser, again. This is the third time since I started using Firefox that this type of situation has forced me to dump Firefox completely. Just when I get comfortable with Firefox, it goes back to not working and making users think they are doing something wrong, or that the other company has issues. It is usually just Firefox that is broken.
With that realization, I'll have to go with something like Google Chrome, which I hate. At least a virus can be wiped from the hard drive. But Firefox issues are a lost cause.
Windows 7 SP1. I checked '%LOCALAPPDATA%\Mozilla\updates' and the folder is EMPTY. I was on v67.0.x and it would have updated to the brand new v70. Tried installing v68.0.2, which I had the installer for, and it opened in a new window without my favorites, saved logons, etc. I clicked on the Firefox icon pinned to the lower taskbar and it opened v68.0.2 with all my favorites, logons, etc. There only seems to be one Firefox application in Program Files. I pasted 'about:profiles' into both and attached is the screen grab. I need to end up running v68.0.2 or v69 but NOT the just-released v70. Why does an install or update always change my preferences to automatically install updates??
Mac OS X High Sierra 10.13.6
Here's the routine with FF on this machine: If FF 68.0.2 is running, after a while I will try to open a new tab and be told "Sorry we need to do one small thing to keep going, FF has been updated in the background, Restart FF" I do so. It opens up a new, clean profile in a new window (About FF now says 68.0.1 in this version of FF, and it downloads 68.0.2 and prompts to Restart FF (again)). At which point it runs "a new helper tool" and re-updates FF to 68.0.2. I then have to mess with about:profiles to go back to my profile. Things then run routinely for a few hours or a day, before the whole routine starts over again.
Alternately, if FF 68.0.2 is not running, if I open it (after opening and closing it several times without issue), I will get a message that "You've launched an older version of FF, Create New Profile or Quit". Create New Profile leads me through the routine above.
Versions of the above routine have happened not just between 68.0.2 and 68.0.1, but also between earlier version updates. That is, this has been going on for months. It does seem in the last couple of weeks that the interval of time has shortened and I'm having to do this routine everyday or even multiple times a day. I have no problems like this with FF on any other machine.
Trashing FF and re-dl'ing from Mozilla. No dice. Trashing FF, trashing Library/Cache/Mozilla and Firefox folders. No dice. Trashing FF, trashing Library/Application Support/Firefox/Profiles (and re-signing in to retrieve my existing bookmarks etc and re-setting up View and other Preferences). No dice.
If anyone has any ideas, I'll be grateful, thanks in advance.
|
OPCFW_CODE
|
Kubernetes: Cluster IPs are not accessible on new deployments
Is this a request for help?:
Is this an ISSUE or FEATURE REQUEST? (choose one): ISSUE
What version of acs-engine?: v0.18.9
Orchestrator and version (e.g. Kubernetes, DC/OS, Swarm) Kubernetes 1.11
What happened:
On a newly created Kubernetes deployments, the spawned Windows pods cannot access Cluster IPs. Since the Windows pods are configured to use the Kubernetes DNS server (<IP_ADDRESS>), DNS name lookups always fail, and the pods cannot access the Kubernetes API (<IP_ADDRESS>).
Restarting the Windows nodes solves this issue.
What you expected to happen:
The Kubernetes Cluster IPs should be accessible, and thus, DNS lookups should be resolvable.
How to reproduce it (as minimally and precisely as possible):
1. Create a Kubernetes deployment via acs-engine ( kubernetes.json: https://paste.ubuntu.com/p/8VrYhYkwvd/ )
2. After the deployment finishes, run:
   kubectl create namespace test-windows-containers
   kubectl create -f windows_pod.yaml ( https://paste.ubuntu.com/p/SrXDw92bqq/ )
3. Wait for the pod to start. Then run:
   # DNS server should be set to <IP_ADDRESS>
   kubectl exec -n test-windows-containers pod_name -- ipconfig /all
   # this will timeout: https://paste.ubuntu.com/p/M2fzM4QVZq/
   kubectl exec -n test-windows-containers pod_name -- nslookup kubernetes.default.svc.cluster.local
4. Restart the Windows nodes.
5. Run step 3 again. This time, nslookup will succeed: https://paste.ubuntu.com/p/4SDXQkdHVg/
Anything else we need to know:
@dineshgovindasamy @daschott - how are the DNS fixes coming? have you synced with azure cni team on this yet?
The main issue here is the fact that the Kubernetes Cluster IPs are not accessible from the Windows pods. The DNS server not being accessible is a secondary effect of this problem.
@bclau My observations are:
Only the first pod scheduled for a given node has no connectivity to cluster IPs. Every subsequent pod that you schedule on the node will have service VIP and outbound connectivity, even without reboot (do you also see this?) A root cause has been identified by @dineshgovindasamy for this. I agree that this is separate from the issue where the DNS suffix isn't being set correctly.
I've created a new deployment to test this out. As expected, the 1st pod didn't have cluster IP connectivity. The 2nd one did. The subsequent 8-10 pods did not.
@bclau For me, subsequent pods do have cluster IP connectivity. Only the first one scheduled on a node doesn't. Which acs-engine version did you use to create your cluster?
I tested using https://github.com/Microsoft/SDN/blob/master/Kubernetes/WebServer.yaml
I used acs-engine 0.18.9. This issue with the first pod having no connectivity should be resolved once this workaround makes its way up to Azure-CNI.
#3037
@bclau Can you confirm the workaround @daschott is using works for you?
@bclau are you still seeing this issue?
|
GITHUB_ARCHIVE
|
One World Mathematics of INformation, Data, and Signals (1W-MINDS) Seminar
Given the impossibility of travel during the COVID-19 crisis, the One World MINDS seminar was founded as an inter-institutional global online seminar aimed at giving researchers interested in mathematical data science, computational harmonic analysis, and related applications access to high-quality talks. Talks are held on Thursdays at 2:30 PM EDT unless otherwise noted below.
Current Organizers (July 2021 - May 2022): Matthew Hirn (Principal Organizer, Michigan State University), Mark Iwen (Michigan State University), Felix Krahmer (Technische Universität München), Shuyang Ling (New York University Shanghai), Rayan Saab (University of California, San Diego), Karin Schnass (University of Innsbruck), and Soledad Villar (Johns Hopkins University)
Founding Organizers (April 2020 - June 2021): Mark Iwen (Principal Organizer, Michigan State University), Bubacarr Bah (African Institute for Mathematical Sciences South Africa), Afonso Bandeira (ETH-Zurich), Matthew Hirn (Michigan State University), Felix Krahmer (Technische Universität München), Shuyang Ling (New York University Shanghai), Ursula Molter (Universidad de Buenos Aires), Deanna Needell (University of California, Los Angeles), Rayan Saab (University of California, San Diego), and Rongrong Wang (Michigan State University)
For information on previous talks, videos, etc, visit our Past Talks page.
To sign up to receive email announcements about upcoming talks, click here.
The organizers would like to acknowledge support from the Michigan State University Department of Mathematics. Thank you.
June 17: Wenjing Liao (Georgia Tech)
Regression and doubly robust off-policy learning on low-dimensional manifolds by neural networks
Many data in real-world applications are in a high-dimensional space but exhibit low-dimensional structures. In mathematics, these data can be modeled as random samples on a low-dimensional manifold. Our goal is to estimate a target function or learn an optimal policy using neural networks. This talk is based on an efficient approximation theory of deep ReLU networks for functions supported on a low-dimensional manifold. We further establish the sample complexity for regression and off-policy learning with finite samples of data. When data are sampled on a low-dimensional manifold, the sample complexity crucially depends on the intrinsic dimension of the manifold instead of the ambient dimension of the data. These results demonstrate that deep neural networks are adaptive to low-dimensional geometric structures of data sets. This is a joint work with Minshuo Chen, Haoming Jiang, Liu Hao, Tuo Zhao at Georgia Institute of Technology.
June 24: Qiang Ye (University of Kentucky)
Batch Normalization and Preconditioning for Neural Network Training
Batch normalization (BN) is a popular and ubiquitous method in deep neural network training that has been shown to decrease training time and improve generalization performance. Despite its success, BN is not theoretically well understood. It is not suitable for use with very small mini-batch sizes or online learning. In this talk, we will review BN and present a preconditioning method called Batch Normalization Preconditioning (BNP) to accelerate neural network training. We will analyze the effects of mini-batch statistics of a hidden variable on the Hessian matrix of a loss function and propose a parameter transformation that is equivalent to normalizing the hidden variables to improve the conditioning of the Hessian. Compared with BN, one benefit of BNP is that it is not constrained on the mini-batch size and works in the online learning setting. We will present several experiments demonstrating competitiveness of BNP. Furthermore, we will discuss a connection to BN which provides theoretical insights on how BN improves training and how BN is applied to special architectures such as convolutional neural networks.
The talk is based on a joint work with Susanna Lange and Kyle Helfrich.
|
OPCFW_CODE
|
The deadline you give us – we can meet urgent deadlines, but if you have a longer one, let us know. The more time you can give us, the lower your price will be.
Hey Eva, I am trying to plan a trip to Peru, not just to travel but also to look for work. Is it possible to arrive with a tourist visa and, after finding work, change it into a resident worker visa?
You can be in any part of the world and get your assignment done by us. We will assist you if you are willing to pay to complete your assignments. Whether you're in Canada, Australia or any other country, our service will follow you and you will get the best score on your assignments. You can hand over your work with the deadline you need it back by, and we will make sure that you receive it back promptly. There are many students who are satisfied with our work, and we make sure that each student gets the best of us.
When you place an order for someone to complete your assignment, you'll be matched with someone with the right skills for you. They will then begin writing the essay you need, with your input. You'll have contact the whole way through, so you will know what kind of essay you're getting.
I think the Peru embassy is as corrupt as the Pakistani one, so they never answer any message, as I have sent many emails.
In other cases, there are complicated homework tasks that take a lot of time to complete. Working for long hours can be tedious and still result in poor-quality work.
Take a step back and think of the areas in which you could call yourself an expert, whether that's your profession or a passionate interest you have.
Fortunately, the third option is a practical one that means you won't end up falling out with a friend and you won't flunk the assignment. In truth, you would be giving yourself a surefire way of getting a good grade.
I am not sure, really. I personally think of tutoring as a way to help students understand the topic at hand, not doing their specific homework. So that's something between you and them.
Many homework help websites present enticing offers to customers, but our company understands that our clients expect the best, and that's why we are committed to doing our best to provide you with high-quality help. From the moment you place an order on our website, we will make sure to offer you services worth your money.
For students, life can be difficult at times and they often ask, "Who can write my assignment for me?" The answer to this simple question is BuyAssignmentService.com, as we provide you with professional writers who will take away all your worries and make sure that you get the best-written essays, which are not only original but also detail-oriented.
Peru has no special visa for volunteers. Travelers planning to volunteer in Peru enter the country on a tourist visa and are allowed to volunteer (without payment) in a charitable organization or institution for a maximum of 183 days.
Philippine passport holders don't have to apply for a tourist visa before coming to Peru. You find the proof both on this webpage when opening the pdf document "Countries with Visa Obligations" (published by the Foreign Affairs Ministry) or on the website of DIGEMIN, Peru's immigration office, under this link ("") (have a look at page 3 "Asia"; under Filipinas you see "NO", so no visa is required for the max. stay of 183 days).
I am an Indian citizen and I was just issued a 30-day visa for Peru at the Santiago consulate in Chile. I am travelling to Peru between the 20th of December and the 4th of January. I wanted to confirm that the 30 days starts on entry to Peru and not from the day of issue, because all my documents point to me reaching Peru only on the 20th of December.
|
OPCFW_CODE
|
The Journal Entry Detail folder provides access to the journal detail lines which constitute your journal entries. From this folder, you can view detail lines, drill back to Journal Entry Inquiry in Great Plains, and access the drilldown tool.
You can drill back from a journal detail record in ActivReporter to the Journal Entry Inquiry window in Dynamics-GP.
- In the Journal Detail HD view, highlight the journal detail record you want to research.
- In the toolbar, click and select Journal Entry Inquiry from the drop-down menu, or press Ctrl+J. The inquiry window opens with the requested journal entry record.
You can also access the Journal Entry Inquiry action from the Journal Entry and Journal Detail windows as well as the Journal Entries HD view.
The Drilldown Explorer lets you execute a financial function and view the result as well as the detail behind the calculation.
- In the Navigation pane, highlight the ActivReporter > Journal Entries > Journal Detail folder.
- Right-click the folder and select Drilldown Explorer from the shortcut menu.
- From the Function drop-down list, select the financial function to view details for. Select from among the following functions:
- Begin Balance
- Year to Date
- Credit Activity
- Credit Balance
- Credit Year to Date
- Debit Activity
- Debit Balance
- Debit Year to Date
- In the Calendar Period field, click the field to select the accounting period for which to view journal detail. The popup opens where you can specify an explicit calendar period to view detail for or you can enter or select a relative period expression. An asterisk (*) after the period name indicates a relative period expression is applied.
- If you need to view journal detail as of a date other than the last day of the selected calendar period, enter the date in the As of field. Otherwise, the date defaults to the last day of the selected period. If specified, the as of date must fall between the begin and end dates of the calendar period.
- If you want to include unmerged transactions in the results, mark the Include Unmerged checkbox.
- If you want to limit the detail based on an items expression, click to open the dialog box where you can build an expression by selecting items and specifying constraints.
Click Calculate. The journal detail records which match your criteria load in the view.
You can view the journal entry associated with a detail record from the Journal Detail window.
- Open the journal detail record.
- In the toolbar, click . The associated journal entry opens.
You can filter the Journal Detail HD view by journal type. To do so, select a journal type from the Filters drop-down list. Valid journal types are:
- Financials Journal
- Inventory Journal
- Payroll Journal
- Project Journal
- Purchasing Journal
- Sales Journal
Each journal type is identified by the value in the custom Series field on the related journal entry. The values and the associated journal types are:
- 2 = Financials Journal
- 3 = Sales Journal
- 4 = Purchasing Journal
- 5 = Inventory Journal
- 6 = Payroll Journal
- 7 = Project Journal
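As a purely illustrative sketch (not an ActivReporter or Dynamics-GP API), the Series-to-journal-type list above can be expressed as a lookup table; `journal_type` is a hypothetical helper name:

```python
# Illustrative lookup table for the custom Series field values listed above.
# "journal_type" is a hypothetical helper, not part of any product API.
SERIES_TO_JOURNAL_TYPE = {
    2: "Financials Journal",
    3: "Sales Journal",
    4: "Purchasing Journal",
    5: "Inventory Journal",
    6: "Payroll Journal",
    7: "Project Journal",
}

def journal_type(series):
    """Return the journal type for a Series value, or 'Unknown'."""
    return SERIES_TO_JOURNAL_TYPE.get(series, "Unknown")
```

For example, a journal entry whose Series field is 3 would be shown under the Sales Journal filter.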
A "Budget Entries" filter is also available to show journal detail entries with a System-Journal Source of "GP-Budget".
|
OPCFW_CODE
|
HT02 - Test Methodology for NFV
This Hot Topic has the following goals:
- Validate and improve the Test Methodology guidelines and templates being described in TST001 and TST002.
- Compile a collection of examples of the application of such guidelines and templates in the context of NFV PoCs: Systems Under Test configurations, Test Descriptions, etc.
These goals may be met in a PoC whose focus is testing, or as additional goals for a PoC primarily addressing another topic.
What is expected to be learnt from the NFV PoCs
Points to prove/refute
• TST001 – Pre-deployment validation (http://docbox.etsi.org/ISG/NFV/TST/70-DRAFT/TST001/)
  a. validate and suggest improvements to SUT configurations
  b. validate and suggest improvements for the Test Description template
  c. provide descriptions (following the Test Description template) of pre-deployment scenarios successfully tested by the PoC Team in a reproducible way
• TST002 – Interoperability Test Methodology (http://docbox.etsi.org/ISG/NFV/TST/70-DRAFT/TST002/)
  a. validate and suggest improvements for the Test Description template (section 4)
  b. identify the functions under test in the context of the PoC
  c. provide the list of interoperability features tested by the PoC, and the impacted entities (see http://nfvprivatewiki.etsi.org/index.php?title=IOP_Methodology)
  d. provide SUT configurations tested by the PoC Team following the guidelines in section 4. Identify the test interfaces (where actions are triggered and observed)
  e. provide descriptions (following the Test Description template) of interoperability features successfully tested by the PoC Team in a reproducible way
Criterion of success
• Provide input and concrete examples to at least one of the 2 WIs (TST001 or TST002)
Provide feedback to other NFV Working Groups on the items that have been validated by the testing and/or the gaps identified in the specifications.
Technical information to be provided by the PoC Team
• TST001
  a. SUT configurations (as exercised by the PoC)
  b. Pre-deployment validation Test Descriptions (run by the PoC, following the TD template)
• TST002
  a. SUT configurations (as exercised by the PoC)
  b. List of NFV IOP Features tested by the PoC project
  c. IOP Test Descriptions (run by the PoC, following the TD template)
• Lessons learnt and suggestions for improvements in the methodologies
• A feedback template is provided for each WI (attached)
PoC Teams shall follow the HT#1 Feedback Template on their contributions to Hot Topic#1
• TST WG – TST001
• TST WG – TST002
NFV TST vice-chair: Marie-Paule Odini, HP firstname.lastname@example.org
Contributions (feedback) deadline
|
OPCFW_CODE
|
Support --("hyphenated-name") syntax for defining long arg names in clap_app!
See #321
If the syntax style ("string") is acceptable, it could be done for the other items that #321 asks long names for.
Coverage decreased (-0.01%) to 91.241% when pulling 982da69051e77c6f84217269fc54c7fb04f4474a on Arnavion:long-names into a7659ce4f0c4f53d58c5c92f25f8b2109cab9aab on kbknapp:master.
@Arnavion thanks for taking the initiative on this! While I'm not against ("string") syntax, I'm also not in love with it. I'd like to see other options as well before making the final call. Things such as name_with_hyphen -> name-with-hyphen or even --"some-option".
Now that clap_app! is stabilized, we need to be extremely careful about breaking any existing code, especially since habitat-sh/habitat is using the macros.
If there's a way we can feature gate just a particular portion, I'm ok doing that as well.
It's been a long time since I've looked at this part, so I'll need to re-familiarize myself with it.
cc @james-darkfox
--"some-option" doesn't work because an expr can't precede a tt* standalone. The () wrapper makes it possible.
name_with_hyphen -> name-with-hyphen might work with Macros 1.1.
I don't believe this could break any existing code, since there's nothing that would match (expr) before this change.
Ok interesting. I haven't had time to dig into Macros 1.1 yet, but there's a few ideas I have floating around in the back of my head for them.
I didn't mean to insinuate that this is breaking any code, just making the point that since it's stabilized now, I want to be sure that what we land on is worth sticking with, since we can't break it anymore (unless we feature gate particular parts, which I'm ok with). :smile:
It doesn't seem possible to have a #[cfg] on individual macro branches. The macro_rules! macro doesn't allow it.
No I was thinking more along the lines of calling out to some other internal macro that does the split vice just two arms...having said that I hadn't even looked into if that was possible or not :stuck_out_tongue_winking_eye:
I'll try to look into the some_arg->some-arg here shortly. Once I have a good answer on that I think we'll have enough to say yes/no to a particular syntax.
@Arnavion sorry this has taken so long :(
After some looking and thought, I've determined that I'm good with this syntax because of this edge case. Would you like to try and add #759 to this PR and just rebase onto master? One last thing I'd like to add if you're up for it is to add a test for the macros, to ensure these two additions don't break any existing code since they're heavily used by things like habitat you can just copy/paste the macro out of the benches/03_complex.rs and stick it into a test and I'm good with that.
I'll be good with a merge at that point.
I will try to get to it coming weekend.
Sorry, I accidentally deleted my fork at some point, so I can't push to this PR any more. I've opened #776
|
GITHUB_ARCHIVE
|
Which software to use for cleaning up files and increasing performance of a computer under Ubuntu?
Possible Duplicate:
Is system cleanup/optimization needed
I am looking for software that would clean up registry, temporary and log files on Ubuntu.
Is there an equivalent to the Windows CCleaner freeware?
cheers
You should specify the problem you are trying to solve. There is no 'registry' to clean up, for instance. If your computer is slow, you might need to uninstall stuff, but an automatic program will only get you so far with that. If you need more space you might want to check log and temp files, but if you don't, they're not really slowing you down much.
"registry, temporary and log files" — if they exist, they do -not- make your system slow.
Please be easy on new users. He never stated unnecessary files are making his computer slow.
To the people who are voting to delete this question: Maybe consider asking for a merge instead? Not only does this have valuable answers, but duplicate questions are considered often to serve an important purpose. While I'm pleased people are voting to delete old, off-topic/unclear posts with no useful information in them, I'm worried that some recently active delete voters may not be familiar with the reason most duplicates are not manually deleted. (Actually I think nothing needs to happen here, but a merge might be acceptable.)
Usually, there is no need of a cleaning software to speed up an Ubuntu machine. There is no registry in Linux and temp files are on /tmp directory which is not saved when you shutdown the machine.
Log files are on /var/log and have absolutely nothing to do with the speed of the machine (while they do not fill up the disk of course). Each log has several files because the old ones are compressed and a new one is created. The compressed (old) ones are usually in the form /var/log/logfilename.#.gz. Usually 2 files of each log are not compressed, the current one and the one before it (usually named logfilename.1). It is not needed and won't speed up your machine but if you need some small extra disk space, you can delete all the compressed ones if you don't need them (sudo rm /var/log/*.gz should work but you may need to do some adaptations depending on the names of your log files).
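To rehearse the `sudo rm /var/log/*.gz` idea safely, here is a small sketch of my own (not an Ubuntu tool) that deletes only the compressed, rotated logs in whatever directory you point it at; try it on a scratch directory before touching /var/log:

```python
# Remove only the compressed, rotated logs (*.gz) in log_dir and return
# the names removed; current logs (e.g. syslog, syslog.1) are untouched.
import glob
import os

def remove_compressed_logs(log_dir):
    removed = []
    for path in glob.glob(os.path.join(log_dir, "*.gz")):
        os.remove(path)
        removed.append(os.path.basename(path))
    return sorted(removed)
```

Pointing it at /var/log would need root privileges, exactly like the `sudo rm` command above.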
A few little things you can do manually:
Due to the package management automatic dependencies installations, you may have auto installed package that are not needed anymore. Remove them with: sudo apt-get autoremove
Look at your swap to see if it is used heavily (not enough RAM):
free -m
The last line will tell you swap size, used and free. If use is high and you see on the above line -/+ buffers/cache that free memory is short, you may need to increase your RAM to speed up your machine. If swap is near 100% used, you need to increase swap space (but that is another question :) )
Browsers caches usually can be cleaned from the browser preferences.
Or as suggested in other answers, you can use Ubuntu Tweak! :)
You can use http://ubuntu-tweak.com/
It lets you clean cache from browsers, packages no longer needed, old kernels, and many more.
Oh, it seems we were typing at the same time and you came first :)
Why isn't Ubuntu-tweak in the official repos?
I don't think there's such thing as a registry in Ubuntu. You have a package index, package and config files, cookies, etc.
To clean up Ubuntu, you can use Ubuntu Tweak
Also, take a look at these tips
Hope it helps.
|
STACK_EXCHANGE
|
Speebs wrote:I'm not clear on 2 things though:
1) What were my factual misunderstandings? (not really important, just curious)
I was referring to these points in your initial comments
Speebs wrote:... the largest percentage of users (by a long shot) are running at 1024?
Speebs wrote:... until pixel density/clarity increases
On Woot, the largest percentage of users are not running 1024, even with the sprawl of resolutions beyond it. And by my read of the majority of 1024 width monitors versus the majority of 1280 or greater monitors, pixel density has increased to such degree that our webpage is more physically narrow on them than our 960 wide Woot 2.0 design was on a 1024 screen.
Speebs wrote:2) It sounds like you are blaming those who are annoyed by the side-scrolling for making assumptions about woot.com.
I've stated often that this design is annoying for those running 1024 and that nothing I can say will resolve that annoyance. All I can do is be transparent about our decision, including the need we have for a wider design and the data it is based on.
It's largely irrelevant what woot's intentions are. If in 1 year everyone can fit 1080 pixels onto a screen with no scrolling, and you say "see, I told you so," that doesn't make it any less aggravating or more acceptable for those who have to side-scroll now. Why impose it before the technology is ready? What purpose does it serve, other than absolving you of the responsibility of figuring out a better layout for most people?
I've attempted to establish our need earlier with regards to columns and content we desire on the front page. I could probably do a lot better job, but the eventual addition of content modules on the front page will be the best proof. The voice of users satisfied with our decision and enjoying our additional front page content is not going to be heard here, or likely in any of our forums. 90% of our audience doesn't post, a good portion of these members view our front page content only.
I've attempted to share our demographic data, and compare it to what's reported as the norm. I would offer that we're expanding beyond 1024 at approximately the same data point many sites expanded beyond 640x480 and 800x600.
EDIT: Also, for the record, I run at 1280x1024 both at work and at home. I don't have any scrolling issues, and don't mind the new color scheme at all. What confuses and irks me a little bit is the decision to alienate customers/fans without what I would consider good justification.
Fair enough. Often times we see the most reaction on display from those with overactive empathy, but your concern is still valid and even a refreshing perspective here in this thread. Do we value an improvement to the Woot experience of the majority at the discomfort of the minority? Maybe. We definitely value the objectives of some new page elements (like the discussion module to pull people into the community more). If the minority is shrinking at over 2% a month and accelerating in its decline, that's also a consideration. Designing the page so that the 1024 window can perform most necessary navigation was a definite consideration.
The only niggles I have with the new layout is the annoying ads, and the fact that at least 50% of the time I go to www.woot.com (either directly or by clicking on "today's woot", the CSS is entirely broken. I have to manually refresh for it to work again.
I've covered previously that there are less ads on woot 3.0, but easily agree that the top center one is in your face more. This makes the price of woot items cheaper.
CSS issues we definitely need to address and fix. There were a few right launch and there likely are on people's first visit with old css in their cache. Are you using Safari by chance? (I heard of an isolated issue there when combined with some third party stuff) Let us know any relevant specifics so we can track down and fix.
|
OPCFW_CODE
|
Showing $\sum\limits_{n=1}^{\infty} \frac{(-1)^n n^3}{(n^2 + 1)^{4/3}}$ diverges
We have the series $\sum\limits_{n=1}^{\infty} \frac{(-1)^n n^3}{(n^2 + 1)^{4/3}}$. I know that it diverges, but I'm having some difficulty showing this. The most intuitive argument is perhaps that the absolute value of the general term behaves much like $\frac{n^3}{\left(n^2\right)^{4/3}} = n^{1/3}$, which diverges, though this doesn't seem to rule out the possibility that we could be dealing with a conditionally convergent series. The computation of the limit, even of the absolute value of the general term, also seems nearly impossible to do by hand, as successive applications of L'Hospital's Rule seem to produce a result just as disorderly as what I started with. Limit comparison also doesn't quite seem to work, especially with the alternating factor.
Thanks in advance for any insights on this.
Your reasoning is fairly sound, but you are thinking about it a little too hard. Instead of thinking about this as a series and trying to get to a p-series test, do a test for divergence (or nth-term test, depending on your book) to see if the term itself goes to zero with $n$.
Edit:
Sorry missed the last part of your question, I thought you were trying to compute the series by hand.
Notice that $a_n \rightarrow 0$ iff $|a_n| \rightarrow 0$ for this sequence. A couple applications of L'Hospitals should get you to something that you can determine the convergence/divergence of I would think... (in fact, 1 application and then simplifying, should get you to an end result I believe).
Alternatively, for those that don't like L'Hospital's rule, you can divide the top and bottom by $x^3$. It will help to write the bottom $x^3$ as $(x^9)^{\frac{1}{3}}$ (Edit: Reading is gud. Fixed $x^2$ to $x^3$)
By working this out in the way you suggested, I get $\lim\limits_{n \to \infty} \frac{1}{\left(\frac{\left(n^2 + 1\right)^4}{n^9}\right)^{1/3}}$. Is the argument from here that the highest degree term in the expansion of $(n^2 + 1)^4$ is $n^8$, so with an $n^9$ in the denominator, the entire fraction in the denominator will surely go to $0$? This seems to make sense, but I worry I'm "waving my hands" (as many of my math professors would say) a bit too much here.
Yep, that's correct. If you really want to do it in full, you would expand $(n^2+1)^4$ and then split it up term by term so that each term is over the $n^9$, yielding $P(n) = \frac{n^8}{n^9} + \frac{4n^6}{n^9} + \frac{6n^4}{n^9} + \frac{4n^2}{n^9} + \frac{1}{n^9}$. Then simplify to show that $P(n) \rightarrow 0$ and you're done.
Excellent. Thank you!
Happy to help. If you liked this answer, please mark it as the answer so others will know it's concluded :)
How would I do that? I'd be glad to. I voted it upward, though I'm not sure if there's a different marking I could make.
There should be a mark right next to the vote up/down that looks like a check mark. Click that to mark it as the accepted answer.
Note that
$$ \left|\frac{(-1)^n n^3}{(n^2 + 1)^{4/3}}\right|\sim n^{1/3}\to \infty$$
thus the given series diverges, since its general term does not tend to zero.
Recall indeed that, as a necessary condition for convergence, we need $|a_n|\to 0$.
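As a quick numerical sanity check of the growth rate used above (my own addition; the function below is just the absolute value of the general term), a few evaluations show $|a_n|$ growing like $n^{1/3}$, so the nth-term test settles divergence:

```python
def abs_term(n: int) -> float:
    """Absolute value of the general term |a_n| = n^3 / (n^2 + 1)^(4/3)."""
    return n**3 / (n**2 + 1) ** (4 / 3)

for n in (10, 10**3, 10**6):
    # The ratio |a_n| / n^(1/3) should approach 1 as n grows,
    # confirming that the terms blow up instead of tending to 0.
    print(n, abs_term(n), abs_term(n) / n ** (1 / 3))
```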
|
STACK_EXCHANGE
|
Building an Android APK by hand, external Jar's resource files
I'm building an apk manually with aapt, dx etc.
I have some external jar files; I'm using LibGDX, and gdx includes some resource files in its classpath (in the jar file itself). When I compile the code, dx only processes the .class files and discards resources (for example, gdx has a default font at com/badlogic/gdx/utils/arial-15.png in the jar file). I'm wondering how I can bundle those resource files too. I extracted an apk of another game made with gdx, and it seems those files are placed in the root of the apk, like apkroot/com/badlogic/gdx/... How do IDEs do that? Do I have to manually extract the jar files and get the resources?
Build a simple system that can use functions to fetch files with Json or whatnot, and use a simple filter to sort the data your puppy brings home into your choice of structure for the project?
It's been a while and no one answered me; in the meantime I kept experimenting with various stuff, and I finally found a way that avoids extracting manually. Basically, in my build flow, instead of letting aapt make the initial apk file, I make it with dx instead, i.e.
dx --dex --verbose --num-threads=4 --output=file.apk obj libs/dex
obj is the folder containing compiled .class files from my code, libs/ includes the external jar files, and libs/dex holds cached jars of libs/ that were already dexed before, for quicker builds. By letting dx make the initial apk, it can also bundle the resources in the apk, so the apk now has the resources as expected. Then we use aapt after dx (to be clear, we also use it once before dx for generating R.java, but DO NOT create the apk with aapt). With aapt we use the package command as always but include the -u flag so it modifies the apk rather than rewriting it; now we can package the normal Android resource stuff and also the external jars' resources in the final apk. To be clear, this wasn't meant to be a full guide to making an apk by hand; you are expected to know the aapt/dx toolchain already. I'm just explaining a workaround with the toolchain that solves my problem.
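If you do end up merging jar resources yourself, the mechanics are straightforward because both jars and apks are zip archives. Here is a rough sketch (a hypothetical helper of my own, not part of the aapt/dx toolchain) of copying the non-.class entries of a jar into an apk, which is essentially what IDEs do:

```python
import io
import zipfile

def copy_jar_resources(jar_file, apk_file) -> list:
    """Copy every non-.class entry from a jar into an apk (both are zip archives).

    This mirrors how classpath resources such as
    com/badlogic/gdx/utils/arial-15.png end up in the root of the apk.
    """
    copied = []
    with zipfile.ZipFile(jar_file) as jar, zipfile.ZipFile(apk_file, "a") as apk:
        for entry in jar.namelist():
            if entry.endswith("/") or entry.endswith(".class"):
                continue  # skip directories and bytecode; dx already handles .class files
            apk.writestr(entry, jar.read(entry))
            copied.append(entry)
    return copied

# Demo with in-memory archives (fake contents, for illustration only).
jar_buf, apk_buf = io.BytesIO(), io.BytesIO()
with zipfile.ZipFile(jar_buf, "w") as jar:
    jar.writestr("com/badlogic/gdx/utils/arial-15.png", b"\x89PNG...")
    jar.writestr("com/badlogic/gdx/Game.class", b"\xca\xfe\xba\xbe")
with zipfile.ZipFile(apk_buf, "w") as apk:
    apk.writestr("classes.dex", b"dex\n")

print(copy_jar_resources(jar_buf, apk_buf))  # ['com/badlogic/gdx/utils/arial-15.png']
```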
Hello @PoLLeN, I am actually trying to make an apk after building a .jar application with Java, and I created the classes.dex file with the d8.bat utility... but since I'm using d8.bat (because it's Android 14, API SDK 34.0.0) and, looking at the d8 documentation on Google's page, it seems it doesn't have the same goal as the old dx for building an android apk... or maybe is there something like secret documentation about d8.bat which is not in Google's documentation, as far as you know?...
|
STACK_EXCHANGE
|
Remove MPI_Barriers before routing to increase speed.
Improve ngen-parallel Performance with T-route
Problem Statement
When running ngen-parallel with -n near or equal to core count, T-route's performance is severely degraded. T-route doesn't parallelize well with MPI, and moving the finalize step to before the routing frees up CPU resources for T-route to use different parallelization strategies.
Changes Made
Moved MPI_Finalize before the routing step
Applied additional T-route performance patch https://github.com/NOAA-OWP/t-route/pull/795
The troute change is semi-related, as it converts a for loop that consumes the majority of t-route's execution time to use multiprocessing. That performance improvement doesn't work while the MPI wait is consuming every CPU cycle.
Performance Results
Testing setup: ~6500 catchments, 24 timesteps, dual Xeon 28C 56T total (same as #843)
| Configuration | NGen::routing Time (s) |
| --- | --- |
| Serial | 13.9106 |
| Parallel (-n 96 unpatched) | 37.2 |
| Parallel (-n 56 unpatched) | 21.6952 |
| Parallel (-n 56 patched) | 10.761 |
| Parallel (-n 56 patched + T-route perf patch) | ~6 (more testing needed) |
The -n 96 run was performed on a different, 96-core machine.
The -n 56 runs were all performed on a 56-core machine.
Explanation
T-route has a step that transposes per catchment/nexus files into per timestep files.
This transpose step is the longest part of T-route's execution and scales poorly with more catchments or timesteps.
In the unpatched version, all ranks except zero progress to MPI_Finalize and then begin polling while waiting for the routing thread to finish.
This polling maxes out the CPU on every rank while rank 0 computes the routing.
Moving Finalize before routing ensures all threads are finished before routing begins, preventing CPU maxout.
Future Work
Consider reworking the file transposition step for more efficient I/O (out of scope for this change)
Testing
Used the same setup as #843
Manual runs show consistent improvement, but automated testing may be required for more accurate results (especially for the simulation step, which shows a ~10% improvement with a similar deviation)
Performance Visualization
Perf flamegraph of the entire ngen run (unpatched), should be interactive if downloaded
Additional Notes
Moving the finalize before timing log output appears to make the ngen simulation step ~10% faster, but may affect timing accuracy.
This use case exaggerates T-route performance degradation due to using 56 MPI ranks on 56 cores.
ngen was built using mpich 3.3.2. I saw online that changing the MPI wait policy to poll less frequently should help, but I couldn't get it to work
I'm not entirely sure of the implications of calling return 0; right after finalize for mpi_rank != 0. I thought finalize would kill/return the subprocesses, but without explicitly returning them I got segfaults. Recommendations seem to be that as little as possible should be performed after calling finalize, but seeing as all computation after that point is done by one thread, I can just return the others?
Next Steps
[ ] Conduct automated benchmarks to verify performance improvements to troute
[ ] Conduct automated benchmarks to verify performance improvements to ngen simulation
I really like the idea here, but there's a fundamental challenge. As MPI is specified, we can't actually be confident that anything after a call to MPI_Finalize actually runs. I'll take a deeper look at whether there's a nicer way we can do this.
Ok, I've looked at this a little bit more, and here's my tentative suggestion:
Rather than having all of the non-0 processes finalize and exit after the catchment simulation is complete, have them all call MPI_Ibarrier, followed by a loop of MPI_Test on the Ibarrier request and sleep for a decent time slice (e.g. 100 ms) until it's complete. Rank 0 would only enter the barrier after completing routing.
Is that something you want to try to implement? If not, I can find some time to work up an alternative patch. In the latter case, do you mind if I push it to your branch for this PR?
Ultimately, we expect to more deeply integrate t-route with parallelization directly into ngen with BMI. Until that's implemented, though, it's pretty reasonable to find expedient ways to improve performance.
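The Ibarrier-plus-test-and-sleep idea above is language-agnostic. As a rough illustration of the pattern (my own sketch, not the actual ngen/MPI code: a `threading.Event` stands in for the `MPI_Ibarrier` request, and the loop body would be `MPI_Test` plus a sleep in real MPI code):

```python
import threading
import time

def wait_without_busy_polling(request_done: threading.Event, slice_s: float = 0.1) -> float:
    """Poll a completion flag, sleeping between checks so the core stays free.

    In real MPI code the loop would call MPI_Test on the Ibarrier request and
    sleep a decent time slice (e.g. 100 ms) instead of spinning at 100% CPU.
    """
    waited = 0.0
    while not request_done.is_set():  # MPI_Test analogue
        time.sleep(slice_s)           # yield the core instead of spinning
        waited += slice_s
    return waited

# Simulate rank 0 finishing routing after ~0.3 s, releasing the other ranks.
done = threading.Event()
threading.Timer(0.3, done.set).start()
print(f"waited ~{wait_without_busy_polling(done):.1f}s without pegging a core")
```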
This version works so far as the waiting mpi threads now no longer max out the CPU, but it's tricky to say what I'm actually measuring in terms of performance since this went through https://github.com/NOAA-OWP/t-route/pull/795
I need to run a test on some data with more than 25 timesteps as the difference with and without this fix is ~<1s, ~9s vs ~10s.
I also wasn't sure if manager->finalize(); could go before the barrier or after it? If the routing was using MPI, would we need to wait for all threads to reach the post routing barrier before calling finalize?
Performance Testing Results
In summary: it makes routing ~1.3x-1.4x faster
Apologies for the previously inconsistent performance testing. The routing configuration has been optimized, and the output format changed from CSV to NetCDF. Here are the updated results using the current version of t-route and an optimized routing configuration:
Test Setup
Cores: 96
Subnetwork target size: 100 (determined to be the fastest from multiple tests)
t-route output format: NetCDF
Catchments: 60,000
Time period: 240 hours (in five-minute timesteps in t-route)
Results
t-route run standalone in NGIAB
Method: Running python -m nwm_routing manually
Execution time: 115 seconds
t-route run via ngen in NGIAB using ngen master
Execution time: 161 seconds
t-route run via ngen in NGIAB using this PR
Execution time: 120 seconds
2024-09-04 23:27:24,582 - root - INFO - [__main__.py:340 - main_v04]: ************ TIMING SUMMARY ************
2024-09-04 23:27:24,583 - root - INFO - [__main__.py:341 - main_v04]: ----------------------------------------
2024-09-04 23:27:24,583 - root - INFO - [__main__.py:342 - main_v04]: Network graph construction: 20.63 secs, 17.12 %
2024-09-04 23:27:24,583 - root - INFO - [__main__.py:349 - main_v04]: Forcing array construction: 29.57 secs, 24.55 %
2024-09-04 23:27:24,583 - root - INFO - [__main__.py:356 - main_v04]: Routing computations: 59.15 secs, 49.09 %
2024-09-04 23:27:24,583 - root - INFO - [__main__.py:363 - main_v04]: Output writing: 10.99 secs, 9.12 %
2024-09-04 23:27:24,583 - root - INFO - [__main__.py:370 - main_v04]: ----------------------------------------
2024-09-04 23:27:24,583 - root - INFO - [__main__.py:371 - main_v04]: Total execution time: … secs
Finished routing
NGen top-level timings:
NGen::init: 103.262
NGen::simulation: 234.654
NGen::routing: 120.578
and that init time can be reduced to 26 seconds with this: https://github.com/NOAA-OWP/ngen/compare/master...JoshCu:ngen:open_files_first_ask_questions_later
Recording this here for reference:
I've been doing other testing, and the first MPI barrier after model execution, before the routing, also has the same issue. On a single machine, when a rank completes simulation it polls as fast as possible, pinning that core at 100%, which results in my machine lowering the clock speed of the CPU. It's only slower because of the reduction in clock speed and not as a result of other processes being unable to use the cores (like with t-route), so it's a <5% slowdown. It's a marginal difference on a desktop, and I'm guessing this wouldn't be a problem on servers with fixed clock speed CPUs. Might save some electricity though!
|
GITHUB_ARCHIVE
|
To whom are people talking when they talk to (in) themselves?
It's commonly said that one is talking to (in) her- or himself. Does that mean there are two of the person in question: the one who's talking and the one who's listening?
If they're talking to themselves, there's only one person involved.
Of course, but can't there be an imaginary person in your mind with whom you are conversing? Or are there just memories involved?
Nope, it's still just you. And it's not a memory thing. What you're doing is stimulating your brain in the same way that an external conversation would. It's an interesting feature of neural networks that this recursive stimulation can produce a different state at the end than existed at the start, but that's all that's going on.
So you imagine a person with whom you converse, like in the real world? Doesn't that follow from stimulating your brain in the same way that an external conversation would?
Again no. You're stimulating your brain. It doesn't follow that you're pretending to talk to someone else. Obviously, if you want to fantasize that you're talking to pixies then that's perfectly fine. It's your brain. But you aren't.
Then what do you mean by **stimulating your brain in the same way that an external conversation would **? In an external conversation, your brain is stimulated by another person. So by what is the brain stimulated in the case at hand?
By you. You create the inputs that propagate through the neural network in the same way external inputs would. The outputs are then used to generate new inputs rather than to drive your mouth. Still just one of you. No need for a spooky sidekick. BTW dreaming is the same basic idea but without being directed by consciousness. You can make computer neural nets dream by connecting their outputs to their inputs. It's quite cool.
@JohnForkosh Not sure I agree. The question doesn't refer to imagining a second person. So for us to both be correct we'd have to be using different definitions of 2. Plausible but unlikely. As I say in one comment, I have no problem with one imagining oneself talking to anything. Why limit it to another human? But you aren't, it's still just you.
@JohnForkosh But you'd need to use a definition of "people" that did not include, and was at best tenuously connected to, any common definition of people. If the OP had used the term "personality" then it'd still be no but with some interesting caveats. Even in a field as malleable as philosophy, you can't define people as goldfish without mentioning that's what you're doing.
There are definitions in question here. When you say that the person talking is
talking to (in) his- or himself.
I will assume the current psychological definition of a self. There are many other definitions; if you meant one of those, please let us know. An additional assumption: I am considering a healthy individual, so we will not be talking about whether multiple personalities constitute different "selves" or not. Further, there is an assumption that the self is a single entity in your question, so I will not be talking about conversations between, say, the ego and the id.
Given those constraints, I agree with @Alex there is only one person present.
As for the follow-on question, why do it then? I expect that we do this because the act of speaking changes how we think about a topic. It requires more precision and effort, and helps us to clarify our thoughts.
|
STACK_EXCHANGE
|
What does it mean by "topology" in the case of secondary and tertiary structures of proteins?
What does it mean by "topology" in the case of secondary and tertiary structures of proteins?
N.B. I am not talking about DNA/RNA/genomics. I am talking about protein folding, as it relates to protein modeling and simulation.
The word topology gets used in two different contexts.
Common case: the geometric configuration of the protein (=fold)
In common non-computational biochemistry parlance it carries the standard English meaning of topology, i.e. simply geometry; in the case of a protein it means the protein fold:
https://en.wikipedia.org/wiki/Protein_fold_class
Wiktionary (https://en.wiktionary.org/wiki/topology) say:
The branch of mathematics dealing with those properties of a geometrical object (of arbitrary dimensionality) that are unchanged by continuous deformations (such as stretching, bending, etc., without tearing or gluing)
In the case of proteins: unchanged by small changes in length or amino acid sequence identity (no proteases or meat-glue ligase required!)
Computational (Bio)chemistry
Topology has a technical jargon definition in computational biochemistry and computational chemistry.
In computational chemistry, how the connections of a molecule are configured, expressed as a graph (https://en.wikipedia.org/wiki/Graph_theory), is referred to as topology. For example, isopropanol and n-propanol have the same number of heavy atoms (nodes) and these will have the same hybridisation, but the networks will have different connectivities (edges).
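As a toy illustration of this graph view (my own sketch, not from the answer): the heavy-atom graphs of n-propanol and isopropanol have the same node and edge counts, but their different connectivity already shows up in the degree sequences:

```python
# Heavy-atom graphs as adjacency lists (hydrogens omitted, as is usual for topology).
n_propanol  = {"C1": ["C2"], "C2": ["C1", "C3"], "C3": ["C2", "O"], "O": ["C3"]}
isopropanol = {"C1": ["C2"], "C2": ["C1", "C3", "O"], "C3": ["C2"], "O": ["C2"]}

def degree_sequence(graph: dict) -> list:
    """Sorted list of node degrees; a cheap connectivity fingerprint."""
    return sorted(len(neighbors) for neighbors in graph.values())

# Same atoms (nodes) in both molecules...
assert len(n_propanol) == len(isopropanol) == 4
# ...but different connectivity: the degree sequences differ.
print(degree_sequence(n_propanol))   # [1, 1, 2, 2]
print(degree_sequence(isopropanol))  # [1, 1, 1, 3]
```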
The topology definition generally runs off atom types more so than elements; these combine the element, hybridisation and other properties specific to a certain element in a molecule (e.g. amine vs. amide nitrogen vs. tryptophan ring nitrogen are all different).
The 3D embedding of said molecule is not part of its topology. Things get a bit problematic as ideal bond lengths, angles and dihedrals in dihedral space (i.e. not in cartesian space) go into the 'configuration' file (topology file) for several forcefield systems. And in others still there may be constraints or restraints specified.
In computational biochemistry, this holds true for ligands, but with proteins there is less freedom. The connectivity within each amino acid is set in stone: were one to use the allo isomer of threonine as opposed to the threo one, one would need to define a non-canonical amino acid. So by topology of a protein is intended the connectivity of the amino acids in chains, i.e. the primary sequence devoid of cartesian coordinates.
The word topology appears often in the context of ligands: these require custom definitions (=topologies) in order to use a ligand not parameterised in the force field used, a common source of trouble.
See also
https://manual.gromacs.org/documentation/2019/reference-manual/topologies.html
https://mdtraj.org/1.9.4/api/generated/mdtraj.Topology.html
I am actually talking about protein modeling and simulation. :)
I have amended my answer, but I should warn that, by virtue of the word being used just to mean configuration/definition in compbiochem, there are a hundred and one cases where my answer is 100% wrong due to the layers of complexity.
|
STACK_EXCHANGE
|
package com.greengrowapps.ggaforms.validation.errors;
import android.content.res.Resources;
import java.util.HashMap;
import java.util.Map;
public class ValidationErrorProviderImpl implements ValidationErrorProvider {
private static ValidationErrorProviderImpl instance;
private final Resources resources;
private Map<Class<? extends ValidationError>, ErrorBuilder> validationErrorMap = new HashMap<>();
private ValidationErrorProviderImpl( Resources res ){
this.resources = res;
registerErrorForClass(NullFieldValidationError.class, new ErrorBuilder() {
@Override
public ValidationError build( Object... params ) {
return new NullFieldValidationError(resources);
}
});
registerErrorForClass(NotCheckedValidationError.class, new ErrorBuilder() {
@Override
public ValidationError build( Object... params ) {
return new NotCheckedValidationError(resources);
}
});
registerErrorForClass(ExceedsMaxLengthValidationError.class, new ErrorBuilder() {
@Override
public ValidationError build( Object... params ) {
return new ExceedsMaxLengthValidationError(resources, params);
}
});
registerErrorForClass(ExceedsMinLengthValidationError.class, new ErrorBuilder() {
@Override
public ValidationError build( Object... params ) {
return new ExceedsMinLengthValidationError(resources, params);
}
});
registerErrorForClass(TwinValidationError.class, new ErrorBuilder() {
@Override
public ValidationError build( Object... params ) {
return new TwinValidationError(resources);
}
});
registerErrorForClass(RegexValidationError.class, new ErrorBuilder() {
@Override
public ValidationError build( Object... params ) {
return RegexValidationError.buildFrom(resources,params);
}
});
}
public static void init(Resources resources){
instance = new ValidationErrorProviderImpl(resources);
}
public static boolean isInit(){
return instance!=null;
}
public static ValidationErrorProviderImpl getInstance(){
if(instance==null){
throw new RuntimeException("Class must be initialized. Call init first");
}
return instance;
}
@Override
public ValidationError getValidationError(Class<? extends ValidationError> clazz, Object ... params) {
ErrorBuilder builder = validationErrorMap.get(clazz);
if (builder == null) {
throw new IllegalArgumentException("No ErrorBuilder registered for " + clazz.getName());
}
return builder.build(params);
}
public void registerErrorForClass(Class<? extends ValidationError> clazz, ErrorBuilder error){
validationErrorMap.put(clazz,error);
}
}
|
STACK_EDU
|
using System.Collections.Generic;
using sBlog.Net.Domain.Interfaces;
using sBlog.Net.Tests.MockObjects;
using sBlog.Net.Collections;
using Microsoft.VisualStudio.TestTools.UnitTesting;
namespace sBlog.Net.Tests.Collections
{
[TestClass]
public class ArchiveCollectionTests
{
[TestMethod]
public void Can_Generate_Archive_Collection_With_Required_Month_And_Years()
{
IPost post = new MockPost();
var mockArchives = GetTestArchives();
var archiveCollection = new MockArchiveCollection(post.GetPosts());
foreach (var archive in mockArchives)
{
var archiveFromCollection = archiveCollection.Single(archive);
Assert.IsNotNull(archiveFromCollection);
}
}
private static IEnumerable<Archive> GetTestArchives()
{
var archives = new List<Archive>
{
new Archive {Year = "2012", Month = "04", MonthYear = "April 2012 (7)"},
new Archive {Year = "2012", Month = "01", MonthYear = "January 2012 (7)"}
};
return archives;
}
}
}
|
STACK_EDU
|
import java.lang.management.ManagementFactory;
import com.sun.management.OperatingSystemMXBean;
import java.rmi.RemoteException;
import java.util.*;
public class Monitor extends Thread {
/**
* This class monitors CPU, RAM and disk usage and stores the values in fields accessible
* through the corresponding getters. It uses system libraries to obtain those values.
*/
/* Parameters */
private double cpu;
private long ram;
//volatile private Long disk;
private Integer tiempoMuestreo = new Integer(0); // Sampling interval for the monitor, in ms.
private boolean threadRunFlag;
private List<Alarma> listaAlarmas;
private ServicioAlarmas srv;
/* Constructor */
Monitor(Integer tiempoMuestreo, ServicioAlarmas srv, List<Alarma> listaAlarmas) {
this.setTiempoMuestreo(tiempoMuestreo);
this.srv = srv;
this.setListaAlarmas(listaAlarmas);
this.start(); // Start the thread as soon as the object is created.
}
Monitor(Integer tiempoMuestreo, List<Alarma> listaAlarmas) {
this.setTiempoMuestreo(tiempoMuestreo);
this.srv = null;
this.setListaAlarmas(listaAlarmas);
this.start();
}
/* Methods */
synchronized public double getCPU() {
return this.cpu;
}
synchronized private void setCPU(double cpu) {
this.cpu = cpu;
}
synchronized public long getRam() {
return this.ram;
}
synchronized private void setRam(long ram) {
this.ram = ram;
}
/*public Long getDisk() {
return this.disk;
}*/
public Integer getTiempoMuestreo() {
return this.tiempoMuestreo;
}
private void setTiempoMuestreo(Integer tiempoMuestreo) {
this.tiempoMuestreo = tiempoMuestreo;
}
synchronized public void setListaAlarmas(List<Alarma> listaAlarmas) {
this.listaAlarmas = listaAlarmas;
}
@Override
public void run() {
/**
* Run method executed on another thread. It obtains the machine's CPU and RAM values
* and stores them in the class fields.
*/
try {
Double cpu;
Long ram;
this.threadRunFlag = true;
OperatingSystemMXBean bean =
(com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
while (this.threadRunFlag) {
cpu = bean.getSystemCpuLoad(); // fraction in [0, 1]; may be negative if not yet available
this.setCPU(cpu);
this.setRam(this.getPorcentajeMemoria(bean.getFreePhysicalMemorySize(),
bean.getTotalPhysicalMemorySize()));
//TODO disk
try {
if(!this.listaAlarmas.isEmpty()) {
this.compruebaAlarmas();
}
Thread.sleep(this.tiempoMuestreo);
} catch (InterruptedException e) {
e.printStackTrace();
} catch (RemoteException e) {
e.printStackTrace();
}
}
// Note: no this.join() here; joining the current thread from inside its own run() would block forever.
} catch (Exception e) {
System.err.println("MONITOR THREAD ERROR\n");
e.printStackTrace();
System.exit(1);
}
}
public void stopThread() {
/**
* Sets the threadRunFlag flag to false; the flag is what keeps the thread's loop running.
* Setting it to false makes the thread exit.
*/
this.threadRunFlag = false;
}
private long getPorcentajeMemoria(Long memoriaLibre, Long memoriaTotal) {
long ml = memoriaLibre.longValue();
long mt = memoriaTotal.longValue();
return (mt-ml)*100/mt;
}
private void compruebaAlarmas() throws RemoteException {
List<Alarma> alarmasActivadas = new LinkedList<Alarma>();
for (Alarma a: listaAlarmas) {
if(a.getParametro().equals("CPU")) {
if(a.getEsMayorQueUmbral()) {
if(a.getUmbral() < this.getCPU()) {
a.setFecha(new Date());
alarmasActivadas.add(a);
}
} else {
if(a.getUmbral() > this.getCPU()) {
a.setFecha(new Date());
alarmasActivadas.add(a);
}
}
} else if (a.getParametro().equals("RAM")) {
if(a.getEsMayorQueUmbral()) {
if(a.getUmbral() < this.getRam()) {
a.setFecha(new Date());
alarmasActivadas.add(a);
}
} else {
if(a.getUmbral() > this.getRam()) {
a.setFecha(new Date());
alarmasActivadas.add(a);
}
}
/*} else if (a.getParametro().equals("DISK")) {
if(a.getEsMayorQueUmbral()) {
if(a.getUmbral() < this.getDisk()) {
alarmasActivadas.add(a);
}
} else {
if(a.getUmbral() > this.getDisk()) {
alarmasActivadas.add(a);
}
}*/
} /*else {
//throw UnexpectedException;
}*/
}
// Send the list of triggered alarms
if((!alarmasActivadas.isEmpty()) && (srv != null)) {
srv.enviaListaAlarmas(alarmasActivadas);
}
}
}
|
STACK_EDU
|
Editing a native method class with javassist?
With Javassist, is there any way to inject code into a native method? In this case, I'm trying to make the OpenGL calls in my game print out their names and values when called, but all my attempts have hit errors at the point where, I assume, the OpenGL dll code is added.
The method would look something like:
public static native void glEnable(int paramInt);
Since the methods initially have no body, the only way I've found to actually add the code is with something like:
CtBehavior method = cl.getDeclaredBehaviors()[0];
method.setBody("System.out.println(\"Called.\");");
The injection itself works, but then the system fails once the library is loaded saying that the method already has code.
I'd rather not use any premade tools for the call tracking, because of the way I need to format and print out the list for the user. Is there any way to handle this?
If not, is there some way to find all calls to an OpenGL method within another class and append an additional call to a tracker class?
With Javassist, is there any way to inject code into a native method?
Never tried it, but I am not surprised it does not work. Native code is - native. It's a bunch of platform specific bits that bears no relation to Java byte code. And Javassist is all about Java byte code.
Have you consider using proxy based AOP? Check out http://static.springsource.org/spring/docs/current/spring-framework-reference/html/aop.html#aop-understanding-aop-proxies
I'm not recommending you actually use Spring in your program, but it might give you some ideas on how to approach the problem. The reason I think proxy-based AOP might work for you is that you leave your OpenGL based class alone and it just uses the normal native methods. You generate a proxy class which is pure Java but has the same methods as your original class. You call methods on the proxy class, which contain the desired call-tracking code plus the invocation of the corresponding method on the "plain object" with its native methods.
The documentation in Spring says they use JDK dynamic proxies or CGLIB under the covers. So ... I'm thinking that you could use one of these technologies directly as a replacement for your javassist solution.
Hope this helps.
[update]
In the text above I thought you were talking about a class written by you which had primarily instance methods. If you are talking about wrapping the entire OpenGL API, which is primarily static methods, then the AOP proxy method is less appealing. How badly do you want to do this? You could:
create a custom class - a singleton class with a factory method. Your singleton class wraps the entire OpenGL API. No logging/tracking code. Just naked calls to the API.
modify every single call in your entire app to use your wrapper, instead of calling OpenGL directly
At this point you have an application that works exactly like what you have now.
Now enhance the factory method of your singleton class to return either the bare-bones instance which does nothing except OpenGL calls, or it can return a CGLIB generated proxy which logs every method. Now your app can run in either production mode (fast) or tracking mode depending on some config setting.
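Language aside, the wrap-and-log idea can be sketched generically. Here is a minimal Python analogue of such a logging proxy (my own illustration; `FakeGL` is a hypothetical stand-in, and the real solution would use JDK dynamic proxies or CGLIB as described above):

```python
import functools

class CallTracker:
    """Wraps a target object so every method call is recorded before being forwarded."""

    def __init__(self, target):
        self._target = target
        self.calls = []  # list of (method_name, args) tuples

    def __getattr__(self, name):
        attr = getattr(self._target, name)
        if not callable(attr):
            return attr

        @functools.wraps(attr)
        def wrapper(*args, **kwargs):
            self.calls.append((name, args))  # the "advice": log the call
            return attr(*args, **kwargs)     # forward to the real method
        return wrapper

# A stand-in for the bare OpenGL wrapper (hypothetical class, for illustration only).
class FakeGL:
    def glEnable(self, cap):
        return f"enabled {cap}"

gl = CallTracker(FakeGL())
gl.glEnable(3042)
print(gl.calls)  # [('glEnable', (3042,))]
```

The production/tracking switch described above would then amount to the factory returning either the bare object or a tracker-wrapped one.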
And I totally get it if you want to give this up and move on :)
Sounds interesting. Is there any good way to dynamically generate a proxy class, or would this require editing all my code to reference a premade proxy?
Well, I just found a cglib tutorial at http://markbramnik.blogspot.com/2010/04/cglib-introduction.html which will give you an idea of what you would be in for. Based on the tutorial, I think the answer is "yes" it does dynamically generate the proxy class.
The generation of the proxy is one thing, but this looks like it'd require that I rewrite every line of code in my game to generate the proxy and call that instead of the OpenGL class. Perhaps I need a different approach. Is there some way to find all calls to an OpenGL method and append an additional call to a tracker class?
I know it is a little offtopic (2 years later), but if somebody was interested, I think it can be done with setNativeMethodPrefix() method and adequate bytecode transformation.
I know this thread is old and has an accepted answer (and a hint toward my answer as well), but I suppose I can save someone some time with more details and code. I wish someone had done that for me some time ago =)
I tried to wrap the java.lang.Thread#sleep function. I found examples with Byte Buddy, but only with method replacement, and no examples for javassist. Finally I did it. The first step is to register the transformer and set up the prefix:
instrumentation.addTransformer(transformer, true);
instrumentation.setNativeMethodPrefix(transformer, "MYPREFIX_");
Then inside transformer modify Thread class:
@Override
public byte[] transform(ClassLoader loader, String className, Class<?> classBeingRedefined, ProtectionDomain protectionDomain, byte[] classfileBuffer) throws IllegalClassFormatException {
if (!"java/lang/Thread".equals(className)) {
return null;
}
try {
// prepare class pool with classfileBuffer as main source
ClassPool cp = new ClassPool(true);
cp.childFirstLookup = true;
cp.insertClassPath(new ByteArrayClassPath(Thread.class.getName(), classfileBuffer));
CtClass cc = cp.get(Thread.class.getName());
// add native method with prefix
CtMethod nativeSleep = CtMethod.make("void MYPREFIX_sleep(long millis) throws InterruptedException;", cc);
nativeSleep.setModifiers(Modifier.PRIVATE + Modifier.STATIC + Modifier.NATIVE);
cc.addMethod(nativeSleep);
// replace old native method
CtMethod m = cc.getDeclaredMethod("sleep", new CtClass[]{CtClass.longType});
m.setModifiers(Modifier.PUBLIC + Modifier.STATIC); // remove native flag
m.setBody("{" +
"System.out.println(\"Start sleep\");" +
"MYPREFIX_sleep($1);" +
"}");
byte[] byteCode = cc.toBytecode();
cc.detach();
return byteCode;
} catch (Exception e) {
e.printStackTrace();
return null;
}
}
Then start transformation:
instrumentation.retransformClasses(Thread.class);
One of the problems with classes like java.lang.Thread is that, in general, you cannot invoke your code from the modified method unless you push something into the bootstrap classloader.
See also https://docs.oracle.com/javase/8/docs/api/java/lang/instrument/Instrumentation.html#setNativeMethodPrefix-java.lang.instrument.ClassFileTransformer-java.lang.String- for details how jvm resolves method.
ainlolcat, how did you get the java/lang/Thread class to show up? Since this class is loaded by the system boot class loader. Are you doing some trickery with the boot-class-path? I would love to see your instrumentation agent setup code.
@ril3y I don't see any missing essential parts in my setup. One important note: since Java 13 this code does not work without the option -XX:+AllowRedefinitionToAddDeleteMethods. This option is marked as deprecated and may be removed, but it is still available in Java 17.
@ril3y Maybe you mean the problem of accessing the agent from injected code? Yes, the injected code cannot access the agent's static methods because it cannot see classes from the system classloader. To solve this I inject a simple class with a static Runnable/Callable field into the bootstrap classloader via Unsafe.defineClass with a null classloader, and then call it from the new method body. Then, from the system classloader, I inject a Runnable implementation into that simple class; that implementation can access classes from the system classloader. Make sure you use a shared ClassPool or otherwise make the new class visible during the Thread compilation.
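The "bridge" class described above can be sketched roughly like this. This is illustrative only: the class name `BootstrapBridge` and the field/method names are my own inventions, and the actual step of defining the class in the bootstrap classloader via Unsafe.defineClass is omitted; here we only demonstrate the handoff between the two sides.

```java
// Hypothetical sketch of the bridge pattern: a tiny class holding a static
// Runnable. In the real setup this class would be defined with a null
// (bootstrap) classloader so that code injected into java.lang.Thread can
// reference it by name.
public class BootstrapBridge {

    // Set from the system-classloader side (the agent); read from the
    // injected method body inside java.lang.Thread.
    public static volatile Runnable callback;

    // This is what the injected body would call, e.g. "BootstrapBridge.fire();"
    public static void fire() {
        Runnable r = callback;
        if (r != null) {
            r.run();
        }
    }

    public static void main(String[] args) {
        // Demo: install a callback the way agent code would, then invoke it
        // the way the injected Thread.sleep body would.
        StringBuilder log = new StringBuilder();
        callback = () -> log.append("sleep intercepted");
        fire();
        System.out.println(log);
    }
}
```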
|
STACK_EXCHANGE
|
Not able to connect to LocalStack Sqs URL
Hi Folks,
I have been struggling to figure out why I'm not able to connect to the LocalStack SQS URL. I have set up LocalStack to run the queue locally. I was able to send and receive messages using CLI commands, and I was also able to publish messages to this queue from my other, non-NestJS application, but @ssut/nestjs-sqs throws the error below and I'm not sure why.
api/node_modules/sqs-consumer/dist/errors.js:40
const sqsError = new SQSError(message);
^
SQSError: SQS receive message failed: The address https://sqs.us-east-1.amazonaws.com/ is not valid for this endpoint.
at toSQSError (/api/node_modules/sqs-consumer/dist/errors.js:40:22)
at Consumer.receiveMessage (/api/node_modules/sqs-consumer/dist/consumer.js:173:43)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
I'm facing the same issue; the only solution I found was to instantiate my own SQS client:
const sqsClient = new SQSClient({
  region,
  endpoint: 'LOCALSTACK_URL'
});
@agnarok Could you please let me know where exactly you instantiate the client and consume it? I have own service file and I'm consuming the message like below
@Injectable()
export class QueueHandler {
  constructor(private pService: PService) {}

  @SqsMessageHandler(queueName, false)
  async handleMessage(message: SQS.Message) {
    const logger = new Logger(this.handleMessage.name);
    try {
      // Business logic
    } catch (error) {
      logger.error(error.message, error.stack);
      throw error;
    }
  }
}
SQSClient is an option for the consumer, so your module should look something like this:
import { Module } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { SqsModule } from '@ssut/nestjs-sqs';
import { SQSClient } from '@aws-sdk/client-sqs';
@Module({
imports: [
ProposalsModule,
SqsModule.registerAsync({
useFactory: (configService: ConfigService) => {
return {
consumers: [
{
name: configService.getOrThrow('AWS_SQS_QUEUE_NAME'),
queueUrl: 'http://localhost:4566/000000000000/queueName',
sqs: new SQSClient({
  region: 'REGION',
  endpoint: 'LOCALSTACK_URL',
}),
},
],
producers: [],
};
},
inject: [ConfigService],
}),
],
providers: [QueueHandler],
})
@agnarok I'm really confused now. SQSClient is not part of this library, so I'm not sure how to instantiate it. Also, are queueUrl and endpoint the same?
@VivekPNs SQSClient is used internally in the lib, so you would need to import it in your project to use it like I suggested.
Also, no, the endpoint and the queueUrl are not the same. The endpoint is used by the AWS SDK for polling data.
@agnarok But SQSClient is not part of the @ssut/nestjs-sqs library. Could you provide some snippets so I can understand better?
Finally, it works for me.
This is the full code for the ConsumerModule.
Don't forget to import ConsumerModule in the app module.
import { ConfigModule, ConfigService } from '@nestjs/config';
import { Module } from '@nestjs/common';
import { SqsModule } from '@ssut/nestjs-sqs';
import { ConsumerService } from './consumer.service';
import { SQSClient } from '@aws-sdk/client-sqs';
@Module({
imports: [
ConfigModule,
SqsModule.registerAsync({
imports: [ConfigModule], // Import the ConfigModule to use the ConfigService
useFactory: async (configService: ConfigService) => {
const accessKeyId = configService.get('sqs.accessKeyId');
const secretAccessKey = configService.get(
'sqs.secretAccessKey',
);
// Retrieve the required configuration values using ConfigService
return {
consumers: [
{
name: configService.get<string>('sqs.queue_name'), // name of the queue
queueUrl: configService.get<string>('sqs.url'), // url of the queue
region: configService.get<string>('sqs.region'), // using the same region for the producer
batchSize: 10, // number of messages to receive at once
terminateGracefully: true, // gracefully shutdown when SIGINT/SIGTERM is received
sqs:new SQSClient({
region:configService.get<string>('sqs.region'),
credentials: {
accessKeyId: accessKeyId,
secretAccessKey:secretAccessKey
}
})
},
],
producers: [],
};
},
inject: [ConfigService],
}),
],
controllers: [],
providers: [ConsumerService],
exports: [ConsumerService],
})
export class ConsumerModule {}
@VivekPNs
Yup, I did it the same way and it works. Thank you.
I apologize for cracking open an old issue, but I'm unable to find any more information on what I've found here.
This issue and one other article both have this attribute in the consumer options:
terminateGracefully: true, // gracefully shutdown when SIGINT/SIGTERM is received
Where did this come from and what did it do? I simply want to ensure that my NestJS app completes processing the message it's currently handling before terminating on a SIGINT or SIGTERM from ECS, might this help?
Cheers and thanks in advance!
Dropping a note here mostly b/c I know I'm going to be troubleshooting this same issue in ~5 years and will have forgotten WTF I did to fix it.
I had this same issue where the Nest.js app using this library couldn't communicate w/ the SQS queue I created in Localstack. The specific error I was getting was
/opt/transaction-monitoring/node_modules/.pnpm/sqs-consumer@11.1.0_@aws-sdk+client-sqs@3.664.0/node_modules/sqs-consumer/dist/cjs/errors.js:60
const sqsError = new SQSError(message);
^
My setup was different though. I was using docker compose to spin up 2 services: a localstack service and a transaction-monitoring-svc. The Nest.js/SQS consumer lived in the transaction-monitoring-svc container and the SQS queue obviously lived in the localstack container.
So when registering the SqsModule I had to use http://localstack:4566 instead of http://localhost:4566 as the endpoint url, which is a subtle difference, but very necessary so that docker can resolve the IP address for the localstack container.
Here's a snippet showing how I registered the SqsClient
import { Module } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { SQSClient } from '@aws-sdk/client-sqs';
import { SqsModule } from '@ssut/nestjs-sqs';
@Module({
imports: [
ConfigModule.forRoot({
isGlobal: true,
load: [configuration],
}),
HttpModule,
SqsModule.registerAsync({
imports: [ConfigModule],
useFactory: (cfg: ConfigService) => {
return {
consumers: [{
name:'transactions',
queueUrl: cfg.get('transactionMonitoringQueueUrl'),
sqs: cfg.get('env') === 'development' ? new SQSClient({
region: 'us-east-1',
endpoint: 'http://localstack:4566' // This was http://localhost:4566, which would have worked if I was running the app from my laptop instead of within a docker container.
}) : new SQSClient()
}]
}
},
inject: [ConfigService]
})
]
})
export class AppModule {};
|
GITHUB_ARCHIVE
|
Linux is among the most popular operating systems with developers and technical users, and it is the preferred choice for accomplishing many tasks. Some of the most popular uses of the operating system are as a server OS, for networking, and for coding.
Nonetheless, it comes in many flavors to meet different requirements, ranging from personal desktop computing to dedicated containerization.
What is Linux?
Along with Windows and macOS, Linux is one of the most popular operating systems. It is a Unix-like OS for PCs, servers, mainframes, mobile devices, and embedded devices that is open-source and maintained by the community.
The main difference between Linux and Windows is that the former is free while the latter is a paid, proprietary product. Linux is also one of the most widely supported operating systems: it runs on practically every major computer platform, including x86, ARM, and SPARC.
What is an Operating System?
An operating system is a piece of software that controls all of your computer’s hardware. Simply put, the operating system is in charge of software-hardware communication.
An OS acts as the medium between user and system hardware. It is a type of software that is responsible for providing control of the hardware to the user using system software and applications. Moreover, without the operating system, a machine cannot run the software.
Components of the Linux Operating System
Every operating system has some components, and Linux is no exception. The following list enumerates and explains the various components of the Linux operating system:
1) Kernel
This is the part of the OS that is “closest” to the computer’s hardware, because it controls the CPU, memory access, and any peripheral devices. It is your operating system’s “lowest” level of functionality.
2) Shell
The purpose of the shell is to let you tell the operating system what to do. Also known as the command line, it is the feature that allows you to give commands to the OS.
However, command-line work is unfamiliar to many people nowadays, and it used to be a deterrent to using the OS. This has changed, since a modern Linux distribution will employ a desktop shell similar to that of Windows.
3) Bootloader
Your computer must go through a startup process known as booting. This boot process requires instructions, and your operating system is the program in charge of it. The bootloader for your OS kickstarts the process when you turn on your computer.
4) Background Services
These small apps, known as “daemons,” run in the background, ensuring that critical tasks such as scheduling, printing, and multimedia work properly. They load after you have logged into your computer, or right after booting up.
5) Graphics Server
This creates a graphical subsystem that displays images and shapes on your computer screen. The “X” or “X-server” graphical server is used by Linux.
6) Desktop Environment
The graphical server cannot be directly interacted with. Rather, you’ll require server-control software. In Linux, this is referred to as a desktop environment, and there are numerous options available, such as KDE, Unity, and Cinnamon. A desktop environment typically includes a variety of programs, such as a file manager, web browsers, as well as a few games.
7) Applications
Obviously, the desktop environment that comes with your Linux OS, or the one you choose to install, will not be able to meet all of your application requirements; there are far too many.
Individual applications, on the other hand, can be installed, and there are hundreds for this operating system, just as there are thousands for Windows and macOS. Most Linux distributions, such as Ubuntu, offer app shops that can help you find and install the software.
What is a Linux Distribution?
A Linux distribution (Linux distro) is an operating system built from a collection of software based on the Linux kernel: the kernel itself, plus supporting libraries and software packages.
Owing to the popularity enjoyed by the OS for its high portability and customizability, Linux distros come in many flavors. Different Linux distributions are apt for different user needs. Some of the leading distributions of the OS are:
- Debian – It is the popular Linux distribution upon which Ubuntu is built. It emphasizes the use of free and open-source software.
- Fedora – It is a distribution of the OS developed by Red Hat and numerous sponsors. Fedora aims to be a complete and convenient OS for developers and makers.
- Linux Mint – This is a distribution that aims to be a desktop OS suitable for both home users and companies for free. Mint focuses on efficiency, elegance, and ease of use.
- Manjaro – It is an Arch-Linux-based distro that focuses on accessibility and user-friendliness.
- Solus – This Linux distribution is designed for home computing.
- Ubuntu – It is one of the most popular distributions of the operating system and usually makes it into the top 10 best Linux distributions. The modern distro comes in three variants, namely Desktop, Server, and Core; the latter is ideal for IoT and AI.
Almost any Linux distribution can be downloaded for free, burned to disc (or installed on a USB thumb drive), and used (on as many machines as you like).
On the desktop, each distribution has its own approach. Some come with more modern user interfaces while others feature a more conventional desktop environment.
Advantages of Linux
- Since the OS is open-source, it is freely available for download, and no extra cost has to be paid for updates or registration.
- The operating system is adaptable: it can be installed on almost any hardware, so a user who is unsure which OS will run on a machine can usually fall back on Linux.
- It has a Unix-based security mechanism that is extremely safe against internet threats and other assaults.
- Linux is designed to operate continuously without restarting, and many applications can be scheduled during quiet hours as a result of this capability.
- Because the OS is open-source, it may be adapted to meet specific needs, and issue fixes can be found quickly.
- Linux commands are simple to start with but can be powerful.
Disadvantages of Linux
- It is not particularly user-friendly, and it might be perplexing to newcomers.
- The GNU Public License (GPL) stipulates that anybody can modify and distribute a modified version of Linux. As a result, it’s a little difficult to find a version that meets our requirements.
Why Use Linux?
We rarely wonder why we need to change operating systems because the majority of us use desktop operating systems that come prepackaged with our computers and laptops. Only a few people are interested in learning a new operating system, and they rarely inquire about Linux since they believe their current operating system is adequate.
However, it is not always obvious how much time is wasted battling basic OS issues such as viruses and other unwanted software, as well as frequent OS crashes and the resulting expensive repairs. Remember that most operating systems also carry a license fee.
It’s possible that your current operating system isn’t up to the task. If you’re sick of paying for an operating system and dreading the expensive upkeep that comes with it, consider Linux as it might be a better, free option. There is no cost to try the OS, and many people (especially developers and technical people) consider it to be the most stable desktop operating system.
It is also a more secure operating system than most others. Linux and Unix-based operating systems have fewer security vulnerabilities since the code is regularly reviewed by a large number of developers. Moreover, the scope for customization is huge, as its source code is available to anyone.
In this blog, we have covered what the Linux operating system is and its components. We have also gone through why one should use the operating system, along with its advantages and disadvantages.
This operating system comes with multiple versions and each of them has a different approach to computing. Hence, which version is suitable for beginners and which is for seasoned developers depends completely on your needs.
|
OPCFW_CODE
|
Avant-garde, Experience Designer & Coder. Highly passionate about blending technology and arts to creatively and efficiently solve problems.
|African Women in Leadership Organisation||
AWLO International Headquarters, Lagos - Nigeria
|Chief Technology Officer||
Mar. 2017 to Current
- improved the load time of websites, the speed of email delivery and other resources by setting up cloud-based VPS and custom mail servers - a departure from the shared hosting used before.
- built the organisation's restful APIs.
- introduced and implemented Continuous Development/Integration/Deployment to the organisation's software culture by setting up the organisation's Git account, repositories and CI/CD pipelines.
- reduced communication delays by over 300% by implementing transactional emails and automated marketing emails.
- introduced and implemented electronic payments/donations/conference management.
- increased the quality of the technology workforce by introducing and managing the technology internship/mentoring program of the organisation.
My responsibilities span all levels of the organisation. I am responsible for:
- working with the President and other executives to develop a technical strategy for the company; this involves goal-setting, discussing options and analyzing risks.
- aiding recruitment and retention efforts, streamlining operations and advocating for innovative ideas and individuals on the team.
- keeping up on competitive technology trends, both in the market and among partners.
- keeping an eye out for new technological developments that can help the organisation improve efficiency and stakeholder satisfaction.
- working with the marketing team to develop strategies and plan community-related efforts.
- building confidence in the company's vision by using technology to drive transparency.
Nov. 2015 to Mar. 2017
- increased operational efficiency by 32% by implementing automated scheduling of programming content.
- started online radio live streaming.
- led the organisation's efforts to move to a Visual Effects (VFX) powered WebTV.
Apr. 2020 to Current
A robust and fully functional restful API for Hope Behind Bars Africa Initiative.
Node.js, Express.js, and TypeScript.
Mar. 2020 to Mar. 2020
A bot that reminds you to take your medication/drugs by sending friendly and customised reminder messages via SMS.
Node.js, Notifications: Nodemailer(emails), SMS API (sms), cron (reminders).
Feb. 2019 to Feb. 2019
A web app that enables you to invite your friends for a conference by sending a pre-written and customised SMS to them with your name as the sender ID and their names in the salutation.
Link here: https://awlo.org/awlc/inviteafriend/
Jan. 2019 to Feb. 2019
An app for Hope Behind Bars Africa to help indigent prison inmates find legal representation by connecting them with pro bono lawyers.
|Hope Behind Bars Africa Initiative · Volunteer Solutions Architect & CTO||
Dec. 2018 to Current
|United Nations Volunteers · Volunteer||
Jan. 2019 to Current
|Google Local Guides · Local Guides||
May 2015 to Current
Responsible for adding new and missing places and roads on the Google Map.
|United Nations Women/AWLO HeForShe · Volunteer Technical Director||
Dec. 2017 to Current
- conceptualised, designed and implemented the database for the AWLO HeForShe commitment capture.
- put together the Content Management System based website https://awlo.org/heforshe to carry forums, resources and many more features to help promote and scale the campaign in the region of Africa.
|University of the People, California||
Sept. 2018 to Current
|University of Uyo, Nigeria||
|Saint Mary's Senior Science College, Ediene-Abak||
|
OPCFW_CODE
|
package engine.concrete;
import elements.abstracts.characters.*;
import engine.helpers.*;
public class Engine {
private static Engine instance = new Engine();
private boolean gameOver;
private boolean fieldUndone;
private boolean gameWon;
private Field field;
public void setFieldUndone() {
this.fieldUndone = true;
}
private Engine() {
}
public static Engine getInstance() {
return instance;
}
public boolean gameWon() {
return this.gameWon;
}
public void gameOver(boolean gameWon) {
this.gameOver = true;
this.gameWon = gameWon;
}
public void newGame() {
this.gameOver = false;
this.gameWon = false;
}
public void start(Field field) {
this.field = field;
FieldCaretaker.getInstance().saveField(this.field);
while (!this.gameOver) {
Logger.getInstance().promptUser();
this.field.getHero().takeTurn(this.field);
this.field.removeDeadEnemies();
this.checkForWin(this.field);
if (!this.fieldUndone) {
for (Enemy enemy : this.field.getEnemies()) {
enemy.takeTurn(this.field);
}
this.checkForLoss(this.field);
FieldCaretaker.getInstance().saveField(this.field);
} else {
fieldUndone = false;
}
Logger.getInstance().printMessage(this.field.toString());
}
}
private void checkForWin(Field field) {
if (field.getEnemies().isEmpty()) {
Logger.getInstance().printMessage("You win!");
Engine.getInstance().gameOver(true);
} else if (field.getEnemies().size() == 1
&& field.getEnemies().get(0) instanceof CapturingEnemy) {
Logger.getInstance().printMessage(
String.format(
"%s got lonely and committed suicide! You win!",
field.getEnemies().get(0).getName()));
field.removeSuicidalEnemy();
Engine.getInstance().gameOver(true);
}
}
private void checkForLoss(Field field) {
if (field.getHero().getHealthPoints() == 0) {
Logger.getInstance().printMessage("You lost!");
Engine.getInstance().gameOver(false);
}
}
public FieldMemento saveField() {
return new FieldMemento(this.field);
}
public void restoreField(FieldMemento memento) {
this.field = new Field(memento.getField());
}
public static class FieldMemento {
private Field field;
private FieldMemento(Field field) {
this.field = new Field(field);
}
private Field getField() {
return this.field;
}
}
}
|
STACK_EDU
|
I have been using Vim for about 2 years on my personal C++ project on Ubuntu with great success. I build using make as my build system. In Vim, I can build the project without problems using :make. My personal project has about 200 files.
In my job, the project I'm working on has 7 million lines of C++ (over 35k files) and has been, from day one, a project built for Microsoft Windows, using Visual Studio. I have installed Vim at my job, trying to use it to code. So far, I have not been able to "integrate" it successfully into my workflow for two main reasons:
Finding files is ridiculously slow: on my personal project, using :find is instantaneous. I can use wildcards without problems, and I get nice fuzzy file search right out of the box. For finding expressions, I can use vimgrep and results come out fast. On my job's code base, finding a file or an expression can take from a couple of seconds to 30+ seconds. We have "a lot" of nested directories.
I'm unable to make :make work appropriately: I have tried using msbuild (see here for example), with no success. Building a single file (as Ctrl+F7 does within Visual Studio) would suffice, but I can't seem to make it work.
If I could make both 1. and 2. work, I would be satisfied. I could do about 90% of my editing/compiling from Vim. Is there a way to achieve this?
- I have used VsVim for some time. It is not so bad (in fact it's pretty great) but it is not enough for me. Some options are missing (the Vim help, for instance).
- We work on Windows 10/11 and use Visual Studio 2022.
- For 2., I have tried a lot of things. The last thing I tried is this:
au FileType cpp set makeprg=devenv\ MySolution.sln\ /build\ "Debug|x64"
au FileType cpp set errorformat=\ %#%f(%l)\ :\ %#%t%[A-z]%#\ %m
The command works in the terminal (it builds the solution and I see the output). From vim, when I type :make, nothing seems to happen and I get the following output in the terminal:
C:\WINDOWS\system32\cmd.exe /c (devenv MySolution.sln /build ^>C:\Users\johndoe\AppData\Local\Temp\VIe1D01.tmp 2^>^&1)
Then vim "hangs" and if I go check the specified tmpfile, I see the build output. In my personal project, I see the build output directly in Vim.
|
OPCFW_CODE
|
Since you’ve already applied the “mediumtext” workaround to c08, that leaves c20 as a possible culprit, since it’s also a very long string (around 46k characters). While 46k characters is smaller than the maximum size for a text column, which is 64k bytes, I think it’ll depend upon which character set is being used. (For example, a UTF-8 character is often two bytes long – and can be up to four bytes in more esoteric cases.)
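The bytes-versus-characters point is easy to demonstrate: a string well under 64k characters can still exceed the 65,535-byte limit of a TEXT column once encoded as UTF-8. A quick sketch (not tied to any particular database; the 46,000-character figure mirrors the c20 column mentioned above):

```java
import java.nio.charset.StandardCharsets;

public class Utf8Size {
    public static void main(String[] args) {
        // 46,000 characters, but using a two-byte UTF-8 character ("é")
        // for every position.
        String value = "é".repeat(46_000);

        int chars = value.length();                                // 46000
        int bytes = value.getBytes(StandardCharsets.UTF_8).length; // 92000

        // Under the 64k *character* intuition, but over the 65,535-*byte*
        // limit of a MySQL/MariaDB TEXT column.
        System.out.println(chars + " chars -> " + bytes
                + " bytes, fits in TEXT: " + (bytes <= 65_535));
    }
}
```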
I’m running Maria 10.4 on a Win 10 PC and testing on a Vero 4K. I set my Avengers Endgame folder scraping settings to exactly how you posted. I refreshed from the internet and got the same error message in my logs and a failed scrape. I then edited c08 and c20 to mediumtext, tried it again, and was able to scrape successfully. I think this probably lends further support to my earlier statements about the db version.
On a related note I question the need to basically check every box in the scraper settings. This makes for a very large increase in the amount of data being stored in the db.
This is now about getting the MariaDB working, so if you want me to post a new thread, let me know. Otherwise…
I set up a new, empty DB using MariaDB on my headless Ubuntu server. I verified remote access is working via DBeaver and can browse the default databases there. I changed one of my Kodi instances (the NUC running Window 10) to point to the Ubuntu server in advancedsettings.xml. I then refreshed one movie folder in my library by changing the content type back and forth, so it would ask about updating the contents. That finished, but I don’t see the myvideos116 database on the Ubuntu server, so I don’t believe it’s actually using the central DB and is instead using local info.
Any idea where I could start my troubleshooting? The Kodi log doesn’t seem to be giving me any hints, but I may not know what I should be looking for.
Since you guys are helping me out and to be fair, I instead set up the Vero to access the new DB. After pointing the Vero to the new DB, I rebooted the Vero, and I needed to set the content type on the sources again. As soon as it started scanning, the DB was created and getting populated. I’m not sure why the other Kodi device didn’t do so, but I’m not too concerned about that at the moment.
I’m going to let the DB get fully populated and then see what the status of the problematic movies are. I haven’t yet changed any of the columns to mediumtext yet, as I want to take this one step at a time.
Thanks for all the help so far. I’ll return with more updates as this progresses.
EDIT: I realized I didn’t answer your question. Yes, I did.
Old versions of the DBs don’t handle mixed charsets very well (sizing by “bytes” instead of “characters”, for example, which is often the cause of oversized DB saves: the client app sends 40 characters, which it knows is less than 64 chars, but the DB then sees it as 80 bytes and vomits; nowadays DBs pretty much always assume UTF-8 instead of ASCII like in the old days). Upgrading the DB is probably the easiest solution. Usually, you can just upgrade in place and it will convert your schemas for you (including mysql->mariadb, if I recall correctly).
|
OPCFW_CODE
|
Repackaging the OpenPAYGO-Token as a python module
This can be repackaged into a Python module for easy inclusion in other projects.
Hi all! Thanks a lot for your work. Sadly I do not have the ability to approve PRs on this repo anymore but I integrated some of your changes into our Fork and published it on PyPi here: https://pypi.org/project/openpaygo-token/
You can just do pip install openpaygo-token to install it.
Hi @benmod, sorry, there was a problem, now fixed. You should now be able to approve PRs.
Which hardware and version is suitable for the OpenPAYGO integration? I have the Arduino UNO R3; is it suitable, or is there a smaller board for the integration? A picture of the setup would be appreciated.
There is an example implementation for Arduino shown here:
https://github.com/EnAccess/OpenPAYGO-HW
Thanks for your response... I will proceed with it and revert back to you. In addition, how can I register on the application, and how can I generate tokens for my customers after programming weekly and monthly subscriptions?
Resolved via https://github.com/EnAccess/OpenPAYGO-Token/pull/18
|
GITHUB_ARCHIVE
|
[English article] Eloqua forms: latest news
Latest update 13:13 EST, March 23rd
Oracle has released the following statement: “Please be advised that we are re-aligning our timing for the changes to Form Field lengths within Eloqua. No changes will take place with form field lengths (new or existing) as part of our upcoming 487 Release (April 4-23, 2017), and no action is required at this time. Stay tuned as we will provide additional information shortly.”
Original article follows…
The upcoming Eloqua release impacts form behavior in another big way. As you may have heard, forms will only be able to collect a maximum of 35 characters per field, regardless of the form type or use case.
This field length validation is being enforced at the server level. The implication is that if a user enters more than 35 characters in a single form field, Eloqua will discard the entire submission and the form submitter will not be redirected to the intended destination (gated asset, thank you or confirmation page, etc.) but to an error page.
Besides negatively affecting the user experience, this will have an impact on metrics and reporting downstream, as the discarded data is not captured within the form data or preserved elsewhere. As a result, it will not even be possible to see how many users have encountered this failure.
The latest update following community feedback
Oracle has now re-assessed the situation and is altering the way in which this 35 character limit will be rolled out – this is good news!
Previously, all existing form fields would have had this validation enabled if 90% of form submissions contained fewer than 35 characters in that field. The new criteria for imposing the 35-character limit are as follows:
1. Changes limited to the following field types: Single Text Fields, Hidden Fields and Campaign ID Hidden Fields
2. 100% of form submission data for the field is less than 35 characters and the field has over 1,000 form submissions. This includes scenarios where the submitted value is blank
3. Fields where the customer has already configured a maximum character length will not be altered
4. The maximum length will not be set below a field's configured minimum length
5. The 35-character validation is skipped when the field's HTML name is set to elqCustomerGUID
6. Email address fields will be omitted from any changes
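Interpreted as pseudocode, the criteria above could be sketched like this. This is a hypothetical helper for reasoning about the rules, not Eloqua's actual API; the field-type names and parameters are illustrative assumptions:

```python
# Hypothetical sketch of Oracle's stated criteria for applying the
# 35-character limit to an existing field. Field-type names and the
# helper itself are illustrative assumptions, not Eloqua's actual API.

LIMITED_TYPES = {"SingleTextField", "HiddenField", "CampaignIdHiddenField"}

def limit_applies(field_type, submissions, existing_max_length,
                  min_length, html_name, is_email):
    """Return True if the 35-char validation would be enforced on this field."""
    if field_type not in LIMITED_TYPES:            # criterion 1
        return False
    if len(submissions) <= 1000:                   # criterion 2: over 1,000 submissions
        return False
    if any(len(v) >= 35 for v in submissions):     # criterion 2: 100% under 35 chars
        return False
    if existing_max_length is not None:            # criterion 3: customer limit kept
        return False
    if min_length is not None and min_length > 35: # criterion 4
        return False
    if html_name == "elqCustomerGUID":             # criterion 5
        return False
    if is_email:                                   # criterion 6
        return False
    return True
```

Note that blank submissions count as "under 35 characters", so a field that is usually left empty can still qualify for the limit.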
At the time of writing, the primary article on Eloqua Insiders describing this change has not been updated.
Overall, this change will mean that not all forms are immediately affected, and what’s more, not all fields are affected. However, there will still be edge cases where a field has collected less than 35 characters consistently to date, but may collect more than 35 characters tomorrow.
What you need to do right now
Oracle will shortly be sending each account a report identifying the forms and specific fields impacted by this change. Your marketing operations team will need to carefully scrutinize form structure, data collection use cases and fields. New guidance will need to be issued for creating new forms and modifying existing ones, especially where a form's primary use case is modified.
If you require any further information or assistance, please do not hesitate to contact your account team or technical contact at MarketOne.
Windows and Office Genuine ISO Verifier 188.8.131.52 Portable Free Download allows verification of Windows and Office x32/x64 images (ISO, EXE). It computes the ISO's hash and compares it against the official hash from MSDN and VLSC. The application is free.
Windows and Office Genuine ISO Verifier 184.108.40.206 Portable Free Download Overview
Windows and Office Genuine ISO Verifier Portable is a lightweight piece of software that lets you determine, with minimal effort, whether you have a genuine copy of Windows or Office.
Straightforward setup and intuitive interface:
Since the utility comes in a portable package, setup is a quick matter of decompressing the archive into the desired location on your hard disk. However, if you have a hard time accessing the file, you should consider unblocking it from Properties in the context menu.
Even though it isn't exactly eye candy, the interface is user-friendly and unlikely to cause you any real issues while navigating. In fact, the UI consists of a single, medium-sized window with very intuitive fields. Consequently, functionality-wise, using the tool comes down to specifying the input file, and the app does the rest of the job automatically.
The Genuine ISO Verifier supports several versions and languages of Office and Windows:
Furthermore, the strong point of this program stems from the fact that it is designed to recognize a sizable array of hashes associated with Microsoft's operating systems and the Office suite. To be more precise, the software can compare a file's SHA-1 against even fairly old Microsoft releases, including Office 95 and Windows XP.
On a side note, if you have already run a check on the file using other third-party software solutions, you can enter that SHA-1 in the dedicated field and determine the authenticity of the file on the spot. Then again, verifying an ISO file is not a lengthy process and should not take too much of your time.
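Under the hood, this kind of check boils down to hashing the file and comparing it against a trusted reference value. A minimal sketch in Python follows; the file path and reference hash here are placeholders, not real MSDN/VLSC values:

```python
# Minimal sketch of what such a verifier does: hash a file with SHA-1
# and compare against a known-good reference hash. Reference values and
# file names are placeholders, not real MSDN/VLSC hashes.
import hashlib

def sha1_of_file(path, chunk_size=1 << 20):
    """Stream the file in chunks so large ISOs don't have to fit in memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def is_genuine(path, reference_sha1):
    """Compare the computed digest with a trusted reference, case-insensitively."""
    return sha1_of_file(path) == reference_sha1.lower()
```

Note that SHA-1 is used here only because Microsoft published SHA-1 digests for these older releases; it is not considered collision-resistant for new designs.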
A useful app that could spare you legal complications:
In the event that you need to reinstall Windows or Office at work or home, but you do not know much about the file you stumbled across, then perhaps Windows and Office Genuine ISO Verifier can help you determine whether you have genuine copies of the software.
Features of Windows and Office Genuine ISO Verifier Download
Below are a few notable features you can enjoy after installing Windows and Office Genuine ISO Verifier Download. Please keep in mind that features may vary and depend entirely on whether your system supports them.
- Easy to use and completely hassle-free.
- Ability to detect different versions of Windows 10.
- Ability to detect different versions of Windows 7, 8 and 8.1.
- Ability to recognize different versions of Office 2003 and 2007.
- Ability to identify different versions of Office 2013, 2016 and 2019.
- Ability to detect different versions of Windows Server 2003 to 2019.
Technical Details for Windows and Office Genuine ISO Verifier Portable
Software Name: Windows and Office Genuine ISO Verifier 220.127.116.11 Portable Free Download
- Software File Name: Win.Off.Genuine.ISO.Verifier.18.104.22.168.rar
File Size: 32-bit and 64-bit (3.1 MB)
Developer: NebojÅ¡a VuÄinić
How to Install Windows and Office Genuine ISO Verifier
- First of all, check your operating system: press (Windows Key + R), type DXDIAG, and hit OK to review your full system details.
- Extract the (Zip, RAR, or ISO) file using WinRAR or by default official Windows command.
- There is no password; if one is ever required, the password is always www.portablebull.com
- Open the installer file by using (Run as Administrator) and accept the terms and then simply install the program.
- Finally, enjoy your program on your PC/Computer.
System Requirements of Windows and Office Genuine ISO Verifier Free Download
Before you install Windows and Office Genuine ISO Verifier Free Download, you need to know whether your machine meets the recommended or minimum system requirements.
- Supported OS’s and Office (MSDN, VLSC…) – Includes x32, x64: (All languages)
- Windows 10 – Version 1607 RS1 – MSDN (Updated Jan 19. 2017)
- Windows Server 2016 (x64) – Release Date: 12/01/2017
- Windows Server 2016 (x64)
- Windows 10 – Version 1607 RS1 – LTSB (Updated Jul 2016):
- Windows 10 – Version 1607 RS1 – VLSC (Updated Jul 2016):
- Windows 10 – Version 1607 RS1 – MSDN (Updated Jul 2016):
- Windows 10 – Version 1511 – VLSC (Updated Apr 2016):
- Windows 10 – Version 1511 – MSDN (Updated Apr 2016):
- Windows 10 – Version 1511 – MSDN (Updated Feb 2016):
- Windows 10 – Version 1511 – VLSC (Updated Feb 2016):
- Windows 10 – Version 1511
- Windows 10 – Version 1511 VL (VLSC)
- Windows 10
- Office 2016 RTM (MSDN) and VL (VLSC)
- Windows XP
- Windows Vista
- Windows 7
- Windows 8
- Windows 8.1
- Windows 8.1 with Update
- Office 2003
- Office 2007
- Office 2010 (MSDN) and VL (VLSC)
- Office 2013 (MSDN) and VL (VLSC)
- Office 2016
- Windows Advanced Server
- Windows Essential Business Server 2008
- Windows Home Server
- Windows Home Server 2011
- Windows Server 2003
- Windows Server 2003 R2
- Windows Server 2008
- Windows Server 2008 R2
- Windows Server 2012
- Windows Server 2012 Essentials
- Windows Server 2012 R2
- Windows Server 2012 R2 Essentials
- Windows Server 2012 R2 Essentials with Update
- Windows Server 2012 R2 with Update
- Windows Server Technical Preview
- Windows Small Business Server 2008
- Windows Small Business Server 2011
- Windows Storage Server 2008
- Windows Storage Server 2008 R2
- Windows Technical Preview
- Windows Thin PC
- Windows Server Technical Preview 2
- Free Hard Disk Space: 1 GB of disk space.
- Installed Memory RAM: 1 GB required.
- Processor: Intel®.
Windows and Office Genuine ISO Verifier 22.214.171.124 Portable Free Download
Click on the blue link below to download the latest offline setup of Windows and Office Genuine ISO Verifier 126.96.36.199 Portable and enjoy the software. You can also download Windows 11 Pro 22000.194 TPM / Non-TPM.
Password for file is: 123
why doesn't quadratic residue attack work with Elgamal encryption with decisional diffie Hellman assumption?
I was reading this notes http://www.cs.umd.edu/~jkatz/gradcrypto2/NOTES/lecture4.pdf
It's stated that the discrete log assumption is not enough for semantic security. I'm assuming there may be a chance of getting a quadratic residue even if we choose x and r so that both are not even.
But what about the decisional Diffie-Hellman assumption? Isn't it just the same? Can't the adversary similarly select one message which is a quadratic residue and perform the same attack?
The difference is to which groups we think the assumption applies.
The discrete logarithm assumption is thought to apply to groups of the form $\mathbb{Z}_p^*$ with $p$ prime, amongst others. As explained in the linked lecture notes, there exist polynomial time algorithms that determine whether $m_b$ is a quadratic residue. This is sufficient to rule out half of the possible plaintexts, because in groups modulo a prime, exactly half of the elements are quadratic residues. Given this counterexample, the discrete logarithm assumption is consequently not sufficient to achieve semantic security.
The Decisional Diffie-Hellman assumption does not apply to groups $\mathbb{Z}_p^*$ with $p$ prime, but it is thought to apply to other groups -- the ones typically used for ElGamal. The DDH assumption is a stronger assumption that applies to some of the groups where the discrete logarithm assumption holds.
In particular, if the DDH assumption holds, then there cannot exist a polynomial time algorithm that determines whether an element is a quadratic residue. Since such an algorithm does exist for $\mathbb{Z}_p^*$ with $p$ prime, this necessarily disproves the DDH assumption for those groups.
So long story short, if the DDH assumption holds, then the attacker cannot perform the attack using quadratic residues in polynomial time, or in fact even distinguish chosen plaintexts in polynomial time. However, the DDH assumption does not hold for all groups where the discrete logarithm assumption holds, and for some of those groups where the DDH assumption does not hold, an attacker can perform the attack in polynomial time.
Thank you very much for your answer. I've understood that because of the DDH assumption we cannot distinguish random tuples from DH tuples, therefore we can't distinguish them with quadratic residues either. But under the discrete log assumption, isn't it simple to just make sure $g^{xr}$ is not a quadratic residue? Then we can't distinguish the messages even if one of them is a quadratic residue.
@user41965 Modulo a prime, the product of two non-residues is a residue, and the product of a residue and a non-residue is a non-residue. So if we make sure that $g^{xy}$ is a non-residue, we can simply determine whether $g^{xy}m_b$ is a residue, and from that immediately conclude whether $m_b$ is a residue or not.
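To make this concrete, here is a toy sketch of the attack in $\mathbb{Z}_p^*$ using Euler's criterion ($a$ is a quadratic residue mod $p$ iff $a^{(p-1)/2} \equiv 1 \bmod p$). The parameters are tiny illustrative values; real ElGamal uses much larger primes:

```python
# Toy demonstration of the quadratic-residue distinguishing attack on
# ElGamal in Z_p^* with p prime. Parameters are for illustration only.

def is_qr(a, p):
    """Euler's criterion; assumes p is an odd prime and p does not divide a."""
    return pow(a, (p - 1) // 2, p) == 1

p, g = 23, 5              # 5 generates Z_23^*
x = 7                     # secret key
h = pow(g, x, p)          # public key h = g^x

r = 9                     # sender's randomness
c1 = pow(g, r, p)         # first ciphertext component g^r
shared = pow(h, r, p)     # g^(xr)
m = 10                    # the message
c2 = (shared * m) % p     # second ciphertext component

# Attacker sees only (h, c1, c2). g^(xr) is a residue iff xr is even,
# i.e. unless BOTH x and r are odd, which the attacker reads off h and c1:
shared_is_qr = is_qr(h, p) or is_qr(c1, p)

# The Legendre symbol is multiplicative, so QR(c2) = QR(g^(xr)) * QR(m):
m_is_qr = is_qr(c2, p) == shared_is_qr
assert m_is_qr == is_qr(m, p)   # residuosity of m leaked without knowing x
```

The attacker thus learns one bit about the plaintext from public values alone, which already breaks semantic security in these groups.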
Thank you very much. I'm assuming the product of two non-residues is a residue because if $p$ is prime, half of the elements are quadratic residues.
@user41965 The proof is a little different. If $p$ is prime, there exists a generator $g$. An element $g^a$ is a quadratic residue if and only if there exists a $b$ such that $g^a=(g^b)^2=g^{2b} \mod p$, if and only if $a$ is even. Now if $g^a$ and $g^b$ are both non-residues, then $a$ and $b$ are odd, but $a+b$ is even, so $g^ag^b=g^{a+b}$ is a quadratic residue.
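A quick numeric check of this argument, in a small toy group purely for illustration:

```python
# Check in Z_11^*: with generator g, g^a is a residue iff a is even, so the
# product of two non-residues (odd exponents) has an even exponent and is
# therefore a residue. Toy group chosen for illustration only.
p, g = 11, 2   # 2 generates Z_11^*

def is_qr(a, p):
    """Euler's criterion for an odd prime p."""
    return pow(a, (p - 1) // 2, p) == 1

# g^a is a quadratic residue exactly when the exponent a is even:
for a in range(1, p):
    assert is_qr(pow(g, a, p), p) == (a % 2 == 0)

non_residues = [x for x in range(1, p) if not is_qr(x, p)]
assert len(non_residues) == (p - 1) // 2      # exactly half are non-residues

for a in non_residues:
    for b in non_residues:
        assert is_qr(a * b % p, p)            # non-residue * non-residue = residue
```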
Today, let’s have a look on one of the most non-friendly predators, snakes. They’re everywhere, on deserts, forests, oceans, streams, and even in the lakes except Antarctica. Some are poisionous, some are without poision, but, with deadly body features.
Irrespective of any situation they still survive. They’re the essential part of the universal food-chain.
1. How many species of snake are there in the world?
The most interesting question is: did you know how many snake species there are in the world? The answer is more than 2,900 species, and you can find these snakes almost everywhere.
2. Did you know what the longest snake ever in captivity is?
The longest snake ever in captivity is Medusa, a reticulated python (Python reticulatus), owned by Full Moon Productions Inc. in Kansas City, Missouri, USA. Reticulated pythons are so named because of the grid-like pattern of their skin. Image Credits: Guinness World Records
3. Did you know what the rarest snake in the world is?
Snakes are found almost everywhere, but some are extremely rare; you might never have heard of them. The rarest snake is the Antiguan racer (Alsophis antiguae), of which fewer than 150 are now believed to exist.
According to Wikipedia, The Antiguan racer (Alsophis antiguae) is a harmless rear-fanged (opisthoglyphous) grey-brown snake that was until recently found only on Great Bird Island off the coast of Antigua, in the eastern Caribbean. It is among the rarest snakes in the world.
You may also like: 82+ random fun facts that will blow your mind
4. There is an island full of snakes where no one's allowed to visit.
The deadliest place on earth is located in Brazil, a country known for its natural resources and jungles. There is an island named Snake Island where you will find a huge number of snakes.
The place is full of venomous pit vipers, which are among the deadliest predators. The island is situated off the coast of Brazil in the Atlantic Ocean. Rising sea levels covered the land that connected it to the mainland, and because of this the snakes became trapped there.
The island is extremely small, just 43 hectares (106 acres) of forest, and access is restricted by the Brazilian authorities. The snakes are extremely venomous, so to protect people the island is uninhabited and closed to tourists.
You may also like Top 7 Banned Places Around The World You Should Know About
5. Snakes are carnivorous and they live by eating other animals.
6. Did you know snakes are not actually deaf?
As a normal person you might have many assumptions about snakes: are snakes deaf, do snakes have ears, can snakes hear a voice, and so on. There is a myth that snakes are deaf, which is not true. Of course, they don't have visible ears, but they do have fully formed inner ear structures.
They don't have eardrums. Instead, their inner ear is connected directly to their jawbone, which rests on the ground as they slither. The footsteps of predators or prey cause vibrations in the snake's jawbone, relaying a signal to the brain via that inner ear.
This is how snakes "listen" for the footsteps of predators and prey and act accordingly.
7. Fastest land snake - Black Mamba Dendroaspis polylepis
The Black Mamba (Dendroaspis polylepis) is the fastest snake on the planet, with average speeds of 16–19 km/h (10–12 mph). Impressive! It could easily beat a casual bicycle rider 😉.
So, these were some of the interesting facts about snakes.
If you find them interesting, kindly, consider to share with your friends. If you’ve something to add to the post or want to suggest anything, get in touch with us in the comments section.
Pivot tables are a way of slicing and dicing data in a spreadsheet. However, they can be complicated and time-consuming, not to mention…well, just a little dull and tired-looking.
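For reference, the core of what a pivot table does (group rows by two keys and aggregate a value) can be sketched in a few lines of plain Python, using illustrative data:

```python
# A spreadsheet pivot table boils down to grouping rows by a row key and a
# column key and aggregating a value. Sample data is illustrative only.
from collections import defaultdict

sales = [
    {"region": "East", "product": "A", "amount": 100},
    {"region": "East", "product": "B", "amount": 50},
    {"region": "West", "product": "A", "amount": 75},
    {"region": "East", "product": "A", "amount": 25},
]

def pivot_sum(rows, row_key, col_key, value):
    """Sum `value` for every (row_key, col_key) combination."""
    table = defaultdict(lambda: defaultdict(int))
    for r in rows:
        table[r[row_key]][r[col_key]] += r[value]
    return {k: dict(v) for k, v in table.items()}

print(pivot_sum(sales, "region", "product", "amount"))
# {'East': {'A': 125, 'B': 50}, 'West': {'A': 75}}
```

Tools like Watson Analytics automate exactly this grouping and aggregation behind an interactive interface.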
An alternative to pivot tables?
With just a few clicks, Watson Analytics lets you explore, dig deep, and create striking data visualizations that tell the same story as pivot tables. But, it does it much more engagingly. It’s like going from black-and-white to color. Upload data from a connected database or spreadsheet and off you go into a brilliant alternative to pivot tables!
1. Helps non-technical users do data exploration
Even someone with no experience with pivot tables can, by default, get all the same benefits—data exploration, data filtering and segmentation and automated calculations—by using Watson Analytics. Click the screen to interact with the data. Basically, it’s a friendlier, more up-to-date way to slice and dice.
A normal pivot table compared to a discovery created in Watson Analytics. Which do you find more appealing? In addition to being eye-catching, Watson Analytics visualizations are interactive. Click, dig deeper, and explore.
2. Provides automated insights
Watson Analytics has an advanced, automated data discovery feature. Therefore, when you upload data, you get cognitive, unbiased insights. In other words, Watson Analytics determines the most important elements and relationships in the data and delivers them to you. And if you're looking for something in particular (as you might be with pivot tables), you can find it by manually playing with the data.
What unbiased data insight grabs your attention? Let Watson Analytics be your guide.
3. Feeds curiosity, encourages deep-dive data analysis
Watson Analytics can be a nagging partner—but in a good way. When you generate one data visualization, it’s not content to leave it there. Instead, it suggests further discoveries (the Watson Analytics name for data visualization) you can explore. As a result, these discoveries can lead you on an adventure that can ultimately yield data analysis gold: insights that fundamentally impact your business.
For more insights into your data, follow the Discoveries tab.
4. Makes it easy to create and share dashboards in jig time
Part of the challenge of presenting actionable data is engaging and exciting others with your discoveries. You can construct any number of pivot tables in worksheets. However, for the uninitiated, in this format, the data can be hard to decipher. Watson Analytics has a display function that lets you create dashboards and infographics that tell a visual, easy-to-understand story. Moreover, you can email them to colleagues or post them on a website for maximum impact, interaction or feedback.
A dashboard output from Watson Analytics, which can be shared by email, inserted in a deck or on a website.
5. It costs nothing to use Watson Analytics
If you’re interested in exploring Watson Analytics as an alternative to pivot tables, sign up for a 30-day, no-obligation trial of the full-functionality Watson Analytics Professional edition.
We have a very small team developing a software like Autocad.
We will be starting a "UX department" (1 guy), and we are going to do UX research with users, but we also need a platform that will give us simple reporting and insights.
I only have experience with web analytics (such as Google Analytics, Mouseflow and Crazyegg).
We use a dongle USB system for authentication and soon we will be moving away from that and do online authentication just like Adobe does. I know I'm asking for too much but it can't be that expensive too.
In an ideal world, that application would have some sort of API that could be connected to a business intelligence application - we also don't have a dashboard with metrics for accounting, finance, tech support, marketing. Just Excel files.
All we need is a simple system, or multiple systems, that we can easily track what is really important. We don't have a data team. The company is full of smart people.
Also, Is there anything new in terms of predictive analytics or cognitive analytics that can help us out? So we don't need a full-time data scientist?
Sorry I'm a designer with basic web analytics skills
Have you looked in to Crystal Reports at all? It has numerous advantages.
It will connect to all of the data sources you have including spreadsheets, and can integrate data from more than one source at a time. i.e. you may have most of your data in a spreadsheet, with some additional info in Access and a little more in ...
The Crystal Engine itself is available via API within certain SDK's. I know it's available in MS Visual Studio. It is also available in Java.
If you build your own solution using the API you can custom connect to each of your data sources and distribute the results however you need them - via email, ftp or file save - as pdf, excel, HTML, and so forth. You can even use the reporting tool as a Data Transformation Service by writing the results back to an ODBC database.
This platform is more than capable of reporting on data you've collected via your user interface such as clicks, time between clicks, undo, etc. Anything you want to track - as long as it's stored somehow.
I hope that helps. If you need further examples or details feel free to arrange a call, I'd be happy to help.
I developed many analytics systems with capability of processing large datasets and can help you achieve the same. Give me a call or message and we can discuss it further.
Depending on your preference and skill set: if there is a problem, there is probably a package that has already been developed to do exactly what you are trying to do. For the most popular languages (for example, Python) there are hundreds of packages you can use out there.
For your business needs there is no one-size-fits-all BI solution, but I would choose a tool that matches the skill set you have in house.
Otherwise, I would look at a tool like Power BI, which allows for reporting and data visualizations, and you can also utilize programming packages for analysis.
It may have made the news here and there already, but since this place never got its own System Shock series subforum, I thought it wouldn't hurt to write about it in the General RPG forum.
Something weird but fairly cool has been happening in the SS2 community over the past six months. In late 2012, over a decade since the demise of TTLG, engine patches from an anonymous source started to spread around the internet, introducing 'advanced' DirectX support in the form of shaders, mipmaps, light mapping, multisampling, bloom and many other things for all Dark Engine games (mainly the Thief series and System Shock 2). Nobody is exactly sure who made it, but evidence points to it being a former TTLG employee with extensive knowledge of, and access to, the normally inaccessible engine source code.
The patch can be obtained through installing [url='http://www.systemshock.org/index.php?board=15.0']SS2Tool[/url] (only works if you don't install the game in /Program Files/, so use custom installation options in Steam), which auto updates a fresh installation with all official and unofficial patches. Combined with [url='http://www.systemshock.org/index.php?topic=4447.0']existing[/url] HD texture mods and monster/item/weapon replacers, it turns SS2 into a proper, modern game.
Note that the advanced visual effects can be enhanced (and sometimes enabled) through the editing of the new cam_ext.cfg in the program's directory. This file provides its own documentation of each setting, so feel free to experiment with it.
The following screenshots may include additional mods.
(ignore the red square in that last one, I just like the bloom in it - that's what I get for swiping these from the SS2 forums instead of making my own screenshots)
EDIT: I just realized none of these screenshots include any of the new monster models - my apologies.
Understand that since this engine patch was released, the entire SS2 modding community has jumped to warp speed trying to incorporate all the new features, so it's worth checking out additional mods. It's quite amazing something like that can still happen for a game after a decade. Very soon now, the SS2 community will start releasing a unified 'System Shock Community Patch' (SCP), which promises to actively employ a lot of the new engine features along with one of the major replacer mods. If you're not too hasty, it might be worth waiting for that to be released first, so bookmark those forums. You can always play [url='http://www.systemshock.org/index.php?topic=211.0']System Shock Portable[/url] (with a mouselook mod!) in the meantime.
Oh for anyone who needs to know, all this works just fine under Linux/Wine.
/**
* Copyright (C) 2014 - present by OpenGamma Inc. and the OpenGamma group of companies
*
* Please see distribution for license.
*/
package com.opengamma.sesame.function;
import java.lang.reflect.GenericArrayType;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.regex.Pattern;
import org.apache.commons.lang.StringEscapeUtils;
import com.google.common.reflect.TypeToken;
/**
* Helper methods for use when creating {@link ParameterType} instances and converting between arguments and strings.
*/
public final class ParameterUtils {
private static final Pattern WHITESPACE = Pattern.compile("\\s");
private ParameterUtils() {
}
/**
* @param type a type
* @return true if the type is a parameterized {@link Collection}, {@link List} or {@link Set}
*/
public static boolean isCollection(Type type) {
// we're only interested in parameterized types because we need to know the element type
if (!(type instanceof ParameterizedType)) {
return false;
}
Class<?> rawType = TypeToken.of(type).getRawType();
return rawType == Collection.class || rawType == List.class || rawType == Set.class;
}
/**
* @param type a type
* @return true if the type is an array
*/
public static boolean isArray(Type type) {
return (type instanceof GenericArrayType) || ((type instanceof Class<?>) && ((Class<?>) type).isArray());
}
/**
* @param type a type
* @return true if the type is a parameterized map
*/
public static boolean isMap(Type type) {
// we're only interested in parameterized types because we need to know the key and value types
if (!(type instanceof ParameterizedType)) {
return false;
}
Class<?> rawType = TypeToken.of(type).getRawType();
return rawType == Map.class;
}
/**
* Returns the element type of an array or parameterized collection ({@link Collection}, {@link List} or {@link Set}).
*
* @param type the array or collection type
* @return the type of the element in the collection or array
* @throws IllegalArgumentException if the type isn't a collection or array or if the element can't be found (e.g.
* if the type doesn't have a type parameter)
*/
public static Class<?> getElementType(Type type) {
if (!isCollection(type) && !isArray(type)) {
throw new IllegalArgumentException("Type must be a collection or array");
}
if (type instanceof GenericArrayType) {
Type genericComponentType = ((GenericArrayType) type).getGenericComponentType();
return TypeToken.of(genericComponentType).getRawType();
}
if ((type instanceof Class<?>) && ((Class<?>) type).isArray()) {
return ((Class<?>) type).getComponentType();
}
if (type instanceof ParameterizedType) {
ParameterizedType parameterizedType = (ParameterizedType) type;
Type[] typeArguments = parameterizedType.getActualTypeArguments();
// this shouldn't ever happen
if (typeArguments.length != 1) {
throw new IllegalArgumentException("Container must have one type argument " + type);
}
return TypeToken.of(typeArguments[0]).getRawType();
}
throw new IllegalArgumentException("Can't get element type for " + type);
}
/**
* Returns the type of the keys in a generic map.
*
* @param type a map type
* @return the type of the map's key
* @throws IllegalArgumentException if the type isn't a map or the key type can't be found
*/
public static Class<?> getKeyType(Type type) {
return getMapTypeParameter(type, true);
}
/**
* Returns the type of the values in a generic map.
*
* @param type a map type
* @return the type of the map's values
* @throws IllegalArgumentException if the type isn't a map or the value type can't be found
*/
public static Class<?> getValueType(Type type) {
return getMapTypeParameter(type, false);
}
/**
* Returns a type parameter from a parameterized map type.
*
* @param type the type, must be a parameterized {@code java.util.Map}.
* @param keyType true if the key type is required, false if the value type is required
* @return the key or value type parameter
* @throws IllegalArgumentException if the type isn't a map with two type parameters
*/
private static Class<?> getMapTypeParameter(Type type, boolean keyType) {
if (!isMap(type)) {
throw new IllegalArgumentException("Type isn't a map: " + type);
}
ParameterizedType parameterizedType = (ParameterizedType) type;
Type[] typeArguments = parameterizedType.getActualTypeArguments();
if (typeArguments.length != 2) {
throw new IllegalArgumentException("Expected 2 type arguments for " + type);
}
int argIndex = keyType ? 0 : 1;
Type valueType = typeArguments[argIndex];
return TypeToken.of(valueType).getRawType();
}
/**
* Escapes the string and wraps it in quotes if it contains any whitespace.
*
* @param str a string
* @return the escaped string, surrounded with quotes if it contains any whitespace
*/
public static String escapeString(String str) {
if (WHITESPACE.matcher(str).find()) {
return "\"" + StringEscapeUtils.escapeJava(str) + "\"";
} else {
return StringEscapeUtils.escapeJava(str);
}
}
}
UsePagination / UseFiltering causes dropped data for 1:N relationships
Is there an existing issue for this?
[X] I have searched the existing issues
Describe the bug
I am using pagination/filtering/sorting with an IEnumerable output from my MySQL database (using Pomelo).
When I issue a GraphQL query to filter upon some query input, the filter is matched and the correct items are returned, but the nested arrays are cleared (empty), despite the fact that the where clause required that they not be empty. In fact, it appears as though the Lists are Clear()ed some time between the application of the filter and the returning of the data.
Steps to reproduce
GraphQL endpoint:
[UseFurDataContext] [UsePaging] [UseFiltering] [UseSorting]
public IEnumerable<Furball> SearchFurballs([Service] FurContext context) => context.Data.Furballs.IncludeDefaultFurball();
Where Data is a DbContext. The IncludeDefaultFurball performs .Include(fb => fb.Inventory).ThenInclude(c => c.Items). Thus, I am able to successfully filter the output with the query:
query {
searchFurballs(where: { inventory: { items: { some: { itemId: { eq: 66561 } } } } }) {
The top level items returned are correct, but the nested inventory.items are missing from the output:
{
"data": {
"searchFurballs": {
"nodes": [
{
"id": "0x010201070300070a07081800",
"inventory": {
"id": "0x010201070300070a07081800",
"items": []
}
},
{
"id": "0x01040713040001060a010400",
"inventory": {
"id": "0x01040713040001060a010400",
"items": []
}
},
I have verified that there should be items present by also double checking the database, as well as performing other queries on the same objects.
The same bug also happens on any other nested List style field that I attempt to filter upon.
Curiously, the bug can be worked around by adding a id: { neq: "" }, to the where clause. Suddenly, the correct data is returned when I run this query:
query {
searchFurballs(where: { id: { neq: "" }, inventory: { items: { some: { itemId: { eq: 66561 } } } } }) {
Relevant log output
No response
Additional Context?
No response
Product
Hot Chocolate
Version
12.9.0
@zaneclaes i guess you need to use projections to project related entities
this is the way we do includes
@PascalSenn I've never been able to use Projections in any of my HotChocolate projects due to your comment here: https://github.com/ChilliCream/hotchocolate/issues/1658#issuecomment-716068068
I notice now that it was expected to be fixed last year with https://github.com/ChilliCream/hotchocolate/pull/3650 , but I'm still getting the same error with version 12.9.0:
C# code:
[UseFurDataContext] [UsePaging] [UseProjection] [UseFiltering] [UseSorting]
public IQueryable<Furball> SearchFurballs([Service] FurContext context) => context.Data.Furballs;
GraphQL result:
{
"errors": [
{
"message": "Can't compile a NewExpression with a constructor declared on an abstract class",
"locations": [
{
"line": 2,
"column": 3
}
],
"path": [
"searchFurballs"
],
"extensions": {
"code": "NotFound"
}
}
],
"data": {
"searchFurballs": null
}
}
The issue is resolved in the sense that you can now project abstract types. The limitation you run into is still there and is not really fixable with how projections work at the moment. I suppose you have an abstract Furball somewhere in your abstractions and a concrete implementation somewhere else; this makes it not possible to project like new Furball() { id = x.id } as the class is abstract. We also do not know the concrete types, so we can also not project into new ConcreteFurball() { Id = x.Id }.
In v13 we provide a context for projections, so that you have easier access to the selected fields (and can include manually).
what's weird is indeed that the includes are ignored. How does the expression look like?
You can filter in the resolver to have a look at the expression:
[UseFurDataContext] [UsePaging] [UseFiltering] [UseSorting]
public IEnumerable<Furball> SearchFurballs([Service] FurContext context, IResolverContext rc) => context.Data.Furballs.IncludeDefaultFurball().Filter(rc).Sort(rc); // <-- what is this expression?
I suppose you have a abstract Furball somewhere in your abstractions and a concrete implementation somwhere else
No, I do not. The base abstract class does not share a name with the concrete implementation.
what's weird is indeed that the includes are ignored.
I don't think that's true. Notice that the where clause is filtering on the included fields!! There is no way the query would have returned any data at all if the Includes were ignored. HotChocolate must be clearing the included data at some point between the where clause and returning the data.
You can filter in the resolver to have a look at the expression:
Where should I see the expression? I added that but don't see any logs or anything.
It seems the problem with Projections is nested abstract relationships.
In the query I gave, the items are an array of abstract GameItem objects. If I remove inventory.items from the GraphQL selection set, the query works:
And including the items breaks it:
Where items are:
public List<GameItem> Items { get; set; } = new List<GameItem>();
And abstract GameItem is a unique class name (not shared by any other class).
@zaneclaes is GameItem a graphql interface/union?
hm... i see your problem.
So projecting gamestats would resolve the whole list of gamestats; even if that worked, we would load unnecessary columns.
In v13 we add support for expressions in filtering and sorting. We also want to add expression support to projections, so that you could actually express your resolver as an expression (GameStats.Last). This would let you project these things.
The problem we have there is that currently we project directly into the model, essentially .Select(x => new Furball { Id = x.Id }). This works great for models, but as soon as you have fancy resolvers on them, we have no backing field to project.
What we plan to do is to rewrite how projections work internally so that we can project everything we want.
But to implement this we need a bunch of changes to the execution engine.
|
GITHUB_ARCHIVE
|
A little over a month ago I installed PSE 2020 on my Windows 10 Pro PC. I imported my meager 8,000 photos into the Organizer and all has gone well since.
This morning I looked at the PLACES tab and zoomed in to the area where I live and travel. I noticed that several locations that I'd taken photos at did not show on the map. It occurred to me that those photos were taken with my Canon DSLR camera that does NOT have GPS.
So, I used a program called GeoSetter to add the geolocation data to the photo files. It has a map where I can pinpoint the location and add the location coordinates to the photo file. I added GPS data to several hundred photos.
After I did that I opened the Organizer PLACES tab but the photos I added the GPS data to still did not show on the map! I had to delete those photos from the catalog and re-import them before they would show up on the map.
Is there a better way to have the Organizer "refresh" or "re-read" its image data?
Elements Organizer reads media data during import; that is why the GPS data you added externally was not visible on media already present in the catalog.
However, you can use Elements Organizer itself for adding GPS data to all your media.
If the photos that you import don't have GPS information, you can associate the imported photos with places on the map by a simple drag-and-drop or by searching for the places.
You can also associate videos with places on the map by drag-and-drop or by searching for the place.
Please refer to the below link for further details:
Thanks for your reply Priyanshi! I've tried using the Search on the Places tab - it does not find many of the locations I searched for. One of the places I searched for is pinpointed on the map, but a search does NOT find it.
That's a very good article that you provide the link to. I'll give it a thorough read later today. Maybe I can make it work.
There are a number of ways in which media file information can be changed (IMHO) more easily and more specifically than through PSE. There are some tools that are better/faster/more function rich than PSE. I don't mean to open a debate here.
But the original question was: is there a way to force the Organizer to 'refresh' the media file information that it stores in its database?
OK- I hate to bring up such an old thread, but was searching for exactly this same answer. Finally found it in a totally unrelated post. Select the photo(s) that you added GPS data to, right click, then choose update thumbnails. Pics now show up on Places. Probably too late for you, but hopefully will help someone else in the future...
|
OPCFW_CODE
|
mdadm -A /dev/md0 /dev/sda1 /dev/sdb1
Only /dev/sdb1 was loaded into the array, though. I have a few more arrays on the same two drives too; each time, the partition on sda was left out. dmesg told me that sda was out of sync. Since this was from a rescue CD, I've disconnected sda (hardware-wise) for the time being, since it was preventing me from booting.
How should I proceed? Is this likely the sign of a borked drive? I had some weird fs issues the other day I couldn't track down (maybe a precursor): missing files that later magically re-appeared. Maybe a missing cable?
The main question is: how do I try to re-sync the drive?
cat /proc/mdstat
Personalities : [raid10]
md3 : active raid10 sda4
      955683840 blocks super 1.2 512K chunks 2 far-copies [2/1] [_U]
md2 : active raid10 sda3
      10483712 blocks super 1.2 512K chunks 2 far-copies [2/1] [_U]
md1 : active raid10 sda2
      10484736 blocks 512K chunks 2 far-copies [2/1] [_U]
md0 : active raid10 sda1
      101376 blocks 512K chunks 2 far-copies [2/1] [_U]
unused devices: <none>
I ran badblocks on the whole of the other drive, and a long smartctl test; neither found any problems.
As requested, the output of mdadm -D /dev/md0 (I have md0-3 if others are needed):
/dev/md0:
        Version : 0.90
  Creation Time : Mon May 31 20:24:14 2010
     Raid Level : raid10
     Array Size : 101376 (99.02 MiB 103.81 MB)
  Used Dev Size : 101376 (99.02 MiB 103.81 MB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Mon Oct 25 07:58:25 2010
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0
         Layout : far=2
     Chunk Size : 512K
           UUID : 30ffe1d2:f5759995:820bb796:b5530bd2 (local to host slave-iv)
         Events : 0.212

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8        1        1      active sync   /dev/sda1
Since I've found no actual issues with the drive, but obviously something went wrong, I'm wondering what I should do next. (As of today, a full backup of the important data is in place.)
Update 2: Whenever I try to add what was sda back in (at least without wiping it), it screws up my boot process with a kernel magic-number error; I'm guessing because the kernel version got out of sync. Currently this drive is in an external enclosure as sdd. Should I re-add (re-sync) this drive while it's connected via USB? Will that cause problems?
df
Filesystem            Size  Used Avail Use% Mounted on
udev                   10M  284K  9.8M   3% /dev
/dev/md1              9.9G  7.0G  2.4G  75% /
shm                   3.0G  1.5M  3.0G   1% /dev/shm
/dev/md0               96M   15M   77M  16% /boot
/dev/md2              9.9G  6.5G  3.0G  69% /var
/dev/md3              898G  451G  402G  53% /home
none                  1.0G   45M  980M   5% /tmp
/dev/sdb1             992M   36M  956M   4% /media/D4A4-B7C1
Each md device has a corresponding sda/sdb partition; it was the sda partition (device 0) in each array that I had to pull.
|
OPCFW_CODE
|
The Bright Data approach to high-quality data
Bright Data’s proactive approach to validated data ensures that any deviation from predefined standards is caught early, reducing the risk of data corruption or misuse.
By defining clear validation rules, we are able to maintain a strong foundation for data quality that supports accurate analytics, confident decision-making, and compliance with industry standards.
What is data validation?
Data validation refers to the process of ensuring the accuracy and quality of data. Validating data confirms that the values entered into data objects conform to the constraints within the dataset schema. The validation process also ensures that these values follow the rules established for your application. Validating data before updating your application's database is a good practice, as it reduces errors and the number of round trips between an application and the database.
Why is it crucial to validate the data?
Data providers must maintain rigorous quality control measures and offer ongoing support for data-related issues so businesses can trust their data validation processes and expertise.
- Accuracy: Businesses must ensure the data they purchase is accurate and error-free, as inaccurate data can negatively impact decision-making, analysis, and overall performance.
- Completeness: The dataset should be comprehensive and contain all the relevant information to address the business's specific requirements.
- Consistency: To facilitate efficient integration and analysis, all data sources and records must follow uniform formats, naming conventions, and measurement units.
- Timeliness: Up-to-date and relevant data is essential, as outdated or stale data may not provide the desired insights and lead to wrong decisions.
How do we ensure high-quality data?
Our validation process consists of several stages, each focusing on a different data collection aspect.
Stage #1 Accuracy: Schema Validation
The first step is to define each field's schema and expected output. Each collected record goes through schema validation. Is it the right data type? Is this field mandatory or empty?
During setup, we define the field schema and expected output
- Data type (e.g., string, numeric, bool, date)
- Mandatory fields (e.g., ID)
- Common fields (e.g., price, currency, star rating)
- Custom field validation
The dataset is created after the records are validated based on the defined schema and field output.
Example: For a field like is_active, which is expected to be boolean, the validation will check whether the value is True or False. The validation will fail if a value is 'Yes,' 'No,' or any other value.
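The Stage 1 checks above can be sketched in a few lines of Python. This is an illustrative sketch only: the `SCHEMA` table, field names, and `validate_record` helper are hypothetical, not Bright Data's actual implementation.

```python
# Hypothetical schema table: field name -> expected type and whether it is
# mandatory. Mirrors the "data type" and "mandatory fields" rules above.
SCHEMA = {
    "id":        {"type": str,   "mandatory": True},
    "price":     {"type": float, "mandatory": False},
    "is_active": {"type": bool,  "mandatory": True},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the record passes."""
    errors = []
    for field, rule in SCHEMA.items():
        value = record.get(field)
        if value is None:
            # Missing value is only an error for mandatory fields.
            if rule["mandatory"]:
                errors.append(f"{field}: mandatory field is missing")
            continue
        if not isinstance(value, rule["type"]):
            # e.g. is_active="Yes" fails because a bool was expected.
            errors.append(f"{field}: expected {rule['type'].__name__}, "
                          f"got {type(value).__name__}")
    return errors
```

With this sketch, a record whose `is_active` value is the string 'Yes' fails type validation exactly as described in the example above, while a missing optional field such as `price` passes.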
Stage #2 Completeness: Dataset Statistics
This stage evaluates the dataset's key statistical attributes to ensure data quality, completeness, and consistency.
- Filling rate (%): Assesses the dataset's overall filling rate against expected (based on sample statistics) values for each field. Filling values must meet a minimum percentage.
- Unique values (#): Ensures that any field, and the unique ID values in particular, meet the required validation criteria, i.e., the number of unique values is checked against the expected count. The dataset must contain a minimum percentage of unique values.
- Dataset size / minimum records threshold (#): Reflects the expected number of records. A minimum of X records is required for the initial dataset, and fluctuation within +/- 10% is checked.
- Persistence Validation: Once a field is populated, it becomes mandatory and cannot be left empty in subsequent entries. This ensures data consistency and completeness. If an attempt is made to leave the field empty after initial data entry, an error is triggered, prompting the user to provide the necessary information or justify the omission.
- Type Verification: Rigorously checks the data type of each entry against the designated field type, be it string, number, date, etc. This ensures data integrity and prevents potential mismatches or errors during data processing. When a mismatch is detected, the system flags it for correction before further processing.
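The filling-rate, uniqueness, and size checks above could look something like the following Python sketch. Function names and thresholds are invented for illustration and do not reflect an actual pipeline.

```python
def filling_rate(records, field):
    """Fraction of records in which `field` is present and non-empty."""
    filled = sum(1 for r in records if r.get(field) not in (None, ""))
    return filled / len(records)

def unique_ratio(records, field):
    """Fraction of distinct values among the filled values of `field`."""
    values = [r[field] for r in records if r.get(field) not in (None, "")]
    return len(set(values)) / len(values)

def size_within_tolerance(count, expected, tolerance=0.10):
    """True if the record count fluctuates within +/- tolerance of the
    expected dataset size (the +/- 10% check described above)."""
    return abs(count - expected) <= expected * tolerance
```

A dataset would then pass Stage 2 when, for each field, the filling rate and unique-value ratio meet their minimum percentages and the total record count stays within tolerance.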
As we transition from assessing the dataset's statistical properties in Stage 2, we move on to implementing a process for updating and maintaining the dataset in Stage 3, which ensures its continued relevance and accuracy over time.
Stage #3 Continuous Monitoring
- The final data validation stage maintains the dataset as website structures change and records are updated or added. This stage ensures the relevancy and accuracy of the dataset over time.
- Identify errors and outliers by comparing newly collected data with previously collected data.
Any validation failure will be reported to us via an alert mechanism.
Data is great only if it is reliable
With Bright Data, rest assured that your datasets are of the highest quality and integrity, resulting in improved insights and better informed decisions.
|
OPCFW_CODE
|
Le Tue, Aug 05, 2003, à 09:24:25AM -0500, Lars Clausen a écrit:
> On 5 Aug 2003, Sven Vermeulen wrote:
> > Hi,
> > I have some features that I'd like to see in DIA and love to help with,
> > but I would like some feedback on things.
> > The first feature is a double-line for the normal line. Currently, dia
> > has LINESTYLE_SOLID, LINESTYLE_DASHED, LINESTYLE_DASH_DOT,
> > LINESTYLE_DASH_DOT_DOT, LINESTYLE_DOTTED in the LineStyle. I'd like to
> > have a LINESTYLE_DOUBLE or similar in it. However, such a
> > LINESTYLE_DOUBLE would probably require lots of tweaks (for instance the
> > ending points of the line (ARROW_*)). Is it possible in the current code
> > to "easily" add such a double-line? Or would it be preferable to create a
> > special shape for it?
> > If a seperate shape is advised, how can I make sure that this shape has
> > the same possibilities as the current line wrt the ending points
> > (ARROW_*) without creating a shape for each possible combination?
> I don't think the arrows are so much of a problem, they can already handle
> wide lines (which essentially this is), but rendering can be tricky. It'd
> be easy to make a wide black line with a narrower white line on top, that'd
> even work with most output formats and with all the line types. However,
> if you want to be able to see through the middle, it gets tricky for the
> non-straight lines.
Well, except for the non-straight part, I did something like that for the
GRAFCET - Vergent object (when in "And" mode). What I did was offset the
main line by half a width on either side, and draw two lines instead (so the
total width is 3 * the basic line width).
I don't think it's that complicated to do polylines either; all you need is
to offset the points along the bisector (you need to offset by
normal_offset/cos(angle_between_segments) and that's it).
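A minimal Python sketch of the offset-along-the-bisector idea just described (a hypothetical helper, not dia's actual C implementation): interior vertices move along the sum of the two adjacent segments' unit normals, and the 1/cos factor appears as the miter scaling of that summed normal.

```python
import math

def _unit_normal(p, q):
    """Unit normal (left-hand side) of the segment p -> q."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    length = math.hypot(dx, dy)
    return (-dy / length, dx / length)

def offset_polyline(points, d):
    """Offset a polyline by distance d. Interior vertices are moved along
    the bisector of their two segments, scaled by 1/cos(half the turn
    angle); endpoints are offset along their single segment's normal."""
    out = []
    for i, p in enumerate(points):
        if i == 0:
            nx, ny = _unit_normal(points[0], points[1])
        elif i == len(points) - 1:
            nx, ny = _unit_normal(points[-2], points[-1])
        else:
            n1 = _unit_normal(points[i - 1], p)
            n2 = _unit_normal(p, points[i + 1])
            sx, sy = n1[0] + n2[0], n1[1] + n2[1]
            norm2 = sx * sx + sy * sy      # |n1+n2|^2 = 4*cos^2(phi/2)
            # 2*(n1+n2)/|n1+n2|^2 is the unit bisector divided by cos(phi/2)
            nx, ny = 2 * sx / norm2, 2 * sy / norm2
        out.append((p[0] + d * nx, p[1] + d * ny))
    return out
```

For a straight run the offset is exactly d; at a right-angle corner the vertex moves d*sqrt(2) along the diagonal, which is the 1/cos behaviour mentioned above.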
For Beziers, it gets a little more interesting, but can be handled
relatively easily too as we handle only a specific case of Beziers: you want
to offset the end points the same way as we'd offset points in a
line/polyline, and offset the tangency control points so that the new
tangent is parallel to the unoffset tangent (so you compute the endpoint
offset vectors, and apply them to both the endpoints and their nearest control points).
> > The second feature is regarding the middlemousebutton menu: currently, if
> > you press the middle mouse button on a line, it gives you
> > Line
> > -----------------------
> > Add connection point
> > Delete connection point
> > I'd like to have extra options that add a bridge, meaning that two lines
> > that cross each other cannot confuse the users about whereto what line
> > goes. As an image says more than a thousand words, please view
> > http://studwww.ugent.be/~sjvermeu/dia-0001.png. Sorry for the freehand
> > draw :)
> Yes, I see it. That is a very useful thing for clarifying diagrams.
> > That bridge should be moveable though, so it's like "Add corner" in the
> > polyline, only it's not a corner but some small arc.
> While it could be moveable by hand as easily, it'd probably make more sense
> to have it be associated with the crossing line, so that if either line
> moves, the bridge is moved accordingly. In fact, that style would be
> easier internally, as it doesn't require adding a new handle.
Sort of a "draw a bridge if crosses another line" flag on the line, coupled
with a "bridge radius" property (or perhaps ratio-to-line-width).
I'm not totally sure whether this would be better done in the standard line
& polyline, or whether it would be better to create a specialised "electric
engineering polyline" (and ortholine) object.
This new object would have a connpoint_line on each of its segments (so that
you could add connection points where convenient), would have this "bridge"
flag and radius/ratio, and the next logical step is to make it draw a dot on
connected connection points.
Whoever implements this, please keep the normal line attributes, so that it
can be re-used in connex fields (pneumatic/hydraulic spring to my mind).
> It's not going to happen in this upcoming release, but it's worth keeping
> in mind. For electronics diagrams, it's an essential feature.
seconded and more (don't think only in terms of electrons, there are other
interesting particles to move)
|
OPCFW_CODE
|
An Esoteric Guide to Spencer Brown’s Laws of Form #2
As GSB indicates,
LoF p. 3
- What is not allowed is forbidden.
This is to say that we can only use distinctions that we make; therefore we must complexify our understanding of the first distinction: we must distinguish anew the first distinction. Glanville, with Francisco Varela, recognized this problem. They saw that it led to an infinite regress (the bane of logic, which we are supposed to avoid), and their solution was to say that “distinction cannot cleave a space, and its value must not be distinct from its mark, that is, a distinction distinguishes (is) itself.” (Glanville, The Self and the Other: The Purpose of Distinction p. 1).
Yet he notes that the distinction, which only distinguishes itself, also implies a manifold. Actually, we see that distinction must imply a manifold (it cannot ONLY be itself distinguishing itself, simply) or existence would never have any content other than existence itself — that is, no distinction of difference beyond that of the fact that existence exists.
Thus the first distinction must be a complex distinction; it must, we could say, keep the Chaos alive, it must maintain potential in more than one way. This is to say that it must be a complex unity, a single diverse manifold.
Now we have noted that Glanville wants to say that in order to avoid an infinite regress of distinctions that never reach any value, the distinction must distinguish only itself. But we can point out that this does not actually solve the problem, it rather only is a viewing of the problem from a different perspective. He thus ends up saying that:
“If, in drawing a distinction, we do not distinguish between the mark and the value but take the value as being the mark, we have no regress. The distinction is then a self-distinction, and the value (which is the mark) is the value to the self. It is not accessible to the outsider. It remains private. We have, instead, Objects that distinguish themselves and do not cleave a space: they do not even need a space within or without. Thus, the Object (its self itself) maintains itself, but is alone. The Object is the Object is the Object” (The Cybernetics of Value… p. 5).
This is a view which is in direct contradiction to esoteric experience… but such is not the basis upon which we need to address Glanville’s idea. We can rather point out that his attempt to avoid a regress actually fails, in the sense that the regress is not successfully eliminated. It is, rather, simply transformed into a recursion, which still has a ‘regressive’ nature. That nature, however, instead of being played out extensively, is played out intensively, as in a point rather than a line.
And now we can bring back in the geometric imagination from earlier, and make use of it. Geometrically, the analogy for a normal regression is that of an infinite line. But every line is ALSO a circle whose center is at infinity. One can contract that circle (the line) from infinity (extensive space / no space) into finite (intermediate / actual) space, and then contract it further until the circle becomes a point (intensive space / no space). In other words, Glanville’s choice to avoid regress is, in some sense, unnecessary; the point-like logic is a transformation of the line-like logic, through the logic of the circle. Thus we are not restricted to our interpretation of the mark, of distinction, as that which EITHER does or does not “cleave a space”: it BOTH cleaves a space and DOES NOT cleave a space, depending upon the distinction of the distinctions. That is to say, by distinguishing the original logic of the mark of GSB from the modified “möbius logic”, Glanville is led to a lower order content (the value of the distinction), whereby “the Object is the Object is the Object” is considered as a mark marking itself in a way that is inaccessible to any outsider. But this is only a part of the story that is given by the distinction between the logic of the line and the logic of the möbius, which is the logic of the intensive point. By exploring the logic of this intensive point, we can change the way we distinguish distinction.
We have seen that the infinite regress seemed to be required by the failure of the mark to distinguish itself from its value. But to try to solve this by saying that the mark ONLY marks itself, and is thus its own value, is an error. If a mark can only mark itself, it is equivalent to being valueless, even to itself, because in order for the mark to have the value OF itself FOR itself, the mark must again distinguish itself from its value, otherwise we can only say “mark” or “value”, but not both. And here is where we get, as promised, to the need to introduce the N and N+1 business referred to before, as it provides a way to think about this admittedly bizarre and obtuse problem in a way that may continue in our quest to illuminate illumination.
|
OPCFW_CODE
|
Convert System.Linq.IOrderedEnumerable<T> to List<T>
.NET compiler will not implicitly convert System.Linq.IOrderedEnumerable<T> to System.Collections.Generic.List<T>
An explicit cast:
using System.Collections.Generic;
var items = new List<MyType>();
var selectedItems =
from item in items
where item.Active
select item;
return (List<MyType>)selectedItems;
gives this warning:
Suspicious cast: there is no type in the solution which inherits from
both System.Linq.IOrderedEnumerable and
System.Collections.Generic.List
What is best practice here?
Are you sure you need a List? Will IEnumerable, IOrderedEnumerable, ICollection, or IList not do? Generally you should use the least-specific type you can.
that code doesn't return an IOrderedEnumerable<MyType>, show your actual code
Was this a CS0266 error? If so, maybe you could consider adding it in the title or the body (your edit queue was full)? Anyway, your question helped. Take care & good luck.
Simply use the ToList extension:
return selectedItems.ToList();
You should be aware though: best practice (since you asked) would actually want you to return an IEnumerable<MyType> in most cases. Therefore, you may want to change your signature in this way:
public IEnumerable<MyType> MyFunction()
{
// your code here
}
And THEN, if you need to, have the function's result in a list:
var myList = MyFunction().ToList();
Unless you have a very precise reason for returning a List<> type, I strongly suggest that you don't.
Hope that helps.
He didn't show the method's signature, which may very well return an IEnumerable<T>. In which case, calling ToList is the standard for avoiding delayed evaluation/multiple evaluations.
The specific reasons between returning a composable IEnumerable<T> and a concrete ICollection<T> come down to whether or not a specific snapshot of those items are needed at invocation time, or if the collection will be iterated many times and does not need to reflect changes in the enumerated source.
@dcastro I am well aware of that, but since the question specifically asked for advice about best practices AND wasn't showing the method's signature, I thought it was worth mentioning.
@codekaizen The IEnumerable<> type can also be used as a way to abstract the nature of the returned sequence. It's up to the method to determine whether or not it should return a snapshot; in both cases though, returning IEnumerable<> should be the preferred practice IMHO. Unless, as I said, the method is meant to return a modifiable sequence.
Yea, that is the point - you don't know anything about the sequence when it is returned as IEnumerable<T> - so if you know you need a snapshot with no uncertainty about it, then use ICollection<T> or IReadOnlyCollection<T>.
@codekaizen that goes backwards from what a properly abstracted function should be. It's up to the caller to decide whether or not he cares about the enumerable possibly not being a snapshot. If it does want to guarantee working with a snapshot then he gets to convert it. No way this is the function's responsibility.
@codekaizen allow me to add a precision: the function may want to ensure that the returned result won't be executed again and again, but it still should return an IEnumerable<> type regardless. The return type should not serve any guarantees purposes other than the return value being a sequence. Just look at the Framework and popular libraries, you'll see for yourself that it's the preferred way.
That is true for a library, but that's not the point I'm arguing. This is his function and this is a decision he has to make. There are valid arguments for either pattern, and like any pattern, trade-offs which must be managed. For a library, the abstraction of IEnumerable<T> is almost always correct, but for internal classes it is not so clear, and other factors bear more weight in the consideration.
let us continue this discussion in chat
Use the System.Linq.Enumerable.ToList<T>() extension:
selectedItems.ToList();
|
STACK_EXCHANGE
|
using System;
using PendleCodeMonkey.MOS6502EmulatorLib;
using PendleCodeMonkey.MOS6502EmulatorLib.Enumerations;
using Xunit;
namespace PendleCodeMonkey.MOS6502Emulator.Tests
{
public class MachineTests
{
[Fact]
public void NewMachine_ShouldNotBeNull()
{
Machine machine = new Machine();
Assert.NotNull(machine);
}
[Fact]
public void NewMachine_ShouldHaveCPU()
{
Machine machine = new Machine();
Assert.NotNull(machine.CPU);
}
[Fact]
public void NewMachine_ShouldHaveMemory()
{
Machine machine = new Machine();
Assert.NotNull(machine.Memory);
}
[Fact]
public void NewMachine_ShouldHaveAStack()
{
Machine machine = new Machine();
Assert.NotNull(machine.Stack);
}
[Fact]
public void NewMachine_ShouldHaveExecutionHandler()
{
Machine machine = new Machine();
Assert.NotNull(machine.ExecutionHandler);
}
[Fact]
public void LoadData_ShouldSucceedWhenDataFitsInMemory()
{
Machine machine = new Machine();
var success = machine.LoadData(new byte[] { 1, 2, 3, 4, 5, 6 }, 0x2000);
Assert.True(success);
}
[Fact]
public void LoadData_ShouldFailWhenDataExceedsMemoryLimit()
{
Machine machine = new Machine();
var success = machine.LoadData(new byte[] { 1, 2, 3, 4, 5, 6 }, 0xFFFC);
Assert.False(success);
}
[Fact]
public void LoadExecutableData_ShouldSetPCToStartOfLoadedData()
{
Machine machine = new Machine();
var _ = machine.LoadExecutableData(new byte[] { 1, 2, 3, 4, 5, 6 }, 0x2000);
Assert.Equal(0x2000, machine.CPU.PC);
}
[Fact]
public void IsEndOfData_ShouldBeFalseWhenPCIsWithinLoadedData()
{
Machine machine = new Machine();
var _ = machine.LoadExecutableData(new byte[] { 1, 2, 3, 4, 5, 6 }, 0x2000);
Assert.False(machine.IsEndOfData);
}
[Fact]
public void IsEndOfData_ShouldBeTrueWhenPCIsPassedEndOfLoadedData()
{
Machine machine = new Machine();
var _ = machine.LoadExecutableData(new byte[] { 1, 2, 3, 4, 5, 6 }, 0x2000);
machine.CPU.PC = 0x2008;
Assert.True(machine.IsEndOfData);
}
[Fact]
public void ReadNextPCByte_ShouldReturnValueWhenPCIsWithinLoadedData()
{
Machine machine = new Machine();
var _ = machine.LoadExecutableData(new byte[] { 1, 2, 3, 4, 5, 6 }, 0x2000);
var value = machine.ReadNextPCByte();
Assert.Equal(1, value);
}
[Fact]
public void ReadNextPCByte_ShouldThrowExceptionWhenPCIsPassedEndOfLoadedData()
{
Machine machine = new Machine();
var _ = machine.LoadExecutableData(new byte[] { 1, 2, 3, 4, 5, 6 }, 0x2000);
machine.CPU.PC = 0x2008;
Assert.Throws<InvalidOperationException>(() => machine.ReadNextPCByte());
}
[Fact]
public void SetState_ShouldInitializeState()
{
Machine machine = new Machine();
machine.SetState(A: 123, X: 234, Y: 210, PC: 0x2000, flags: ProcessorFlags.Negative);
Assert.Equal(123, machine.CPU.A);
Assert.Equal(234, machine.CPU.X);
Assert.Equal(210, machine.CPU.Y);
Assert.Equal(0x2000, machine.CPU.PC);
Assert.Equal(ProcessorFlags.Negative, machine.CPU.SR.Flags);
}
[Fact]
public void GetState_ShouldGetState()
{
Machine machine = new Machine();
machine.CPU.A = 123;
machine.CPU.X = 234;
machine.CPU.Y = 210;
machine.CPU.PC = 0x0200;
machine.Stack.S = 0x7F;
machine.CPU.SR.Flags = ProcessorFlags.Carry | ProcessorFlags.Zero;
var (A, X, Y, PC, S, Flags) = machine.GetState();
Assert.Equal(123, A);
Assert.Equal(234, X);
Assert.Equal(210, Y);
Assert.Equal(0x0200, PC);
Assert.Equal(0x7F, S);
Assert.Equal(ProcessorFlags.Carry | ProcessorFlags.Zero, Flags);
}
[Fact]
public void DumpMemory_ShouldReturnCorrectMemoryBlock()
{
Machine machine = new Machine();
var _ = machine.LoadData(new byte[] { 1, 2, 3, 4, 5, 6 }, 0x2000);
var dump = machine.DumpMemory(0x2002, 0x0010);
Assert.Equal(0x0010, dump.Length);
Assert.Equal(0x05, dump[2]);
}
[Fact]
public void GetZeroPageAddress_ShouldReturnCorrectAddress()
{
Machine machine = new Machine();
var _ = machine.LoadExecutableData(new byte[] { 0x80, 0x40, 0x20, 0x30 }, 0x0000);
byte addr = machine.GetZeroPageAddress();
Assert.Equal(0x80, addr);
}
[Fact]
public void GetZeroPageXAddress_ShouldReturnCorrectAddress()
{
Machine machine = new Machine();
var _ = machine.LoadExecutableData(new byte[] { 0x80, 0x40, 0x20, 0x30 }, 0x0000);
machine.CPU.X = 6;
byte addr = machine.GetZeroPageXAddress();
Assert.Equal(0x86, addr);
}
[Fact]
public void GetZeroPageYAddress_ShouldReturnCorrectAddress()
{
Machine machine = new Machine();
var _ = machine.LoadExecutableData(new byte[] { 0x80, 0x40, 0x20, 0x30 }, 0x0000);
machine.CPU.Y = 0x1F;
byte addr = machine.GetZeroPageYAddress();
Assert.Equal(0x9F, addr);
}
[Fact]
public void GetAbsoluteAddress_ShouldReturnCorrectAddress()
{
Machine machine = new Machine();
var _ = machine.LoadExecutableData(new byte[] { 0x80, 0x40, 0x20, 0x30 }, 0x0000);
ushort addr = machine.GetAbsoluteAddress();
Assert.Equal(0x4080, addr);
}
[Fact]
public void GetAbsoluteXAddress_ShouldReturnCorrectAddress()
{
Machine machine = new Machine();
var _ = machine.LoadExecutableData(new byte[] { 0x80, 0x40, 0x20, 0x30 }, 0x0000);
machine.CPU.X = 6;
ushort addr = machine.GetAbsoluteXAddress();
Assert.Equal(0x4086, addr);
}
[Fact]
public void GetAbsoluteYAddress_ShouldReturnCorrectAddress()
{
Machine machine = new Machine();
var _ = machine.LoadExecutableData(new byte[] { 0x80, 0x40, 0x20, 0x30 }, 0x0000);
machine.CPU.Y = 0x1F;
ushort addr = machine.GetAbsoluteYAddress();
Assert.Equal(0x409F, addr);
}
[Fact]
public void GetIndirectAddress_ShouldReturnCorrectAddress()
{
Machine machine = new Machine();
var _ = machine.LoadExecutableData(new byte[] { 0x80, 0x40 }, 0x0000);
_ = machine.LoadData(new byte[] { 0x20, 0x60, 0x20, 0x30 }, 0x4080, false);
ushort addr = machine.GetIndirectAddress();
Assert.Equal(0x6020, addr);
}
[Fact]
public void GetIndexedIndirectAddress_ShouldReturnCorrectAddress()
{
Machine machine = new Machine();
var _ = machine.LoadExecutableData(new byte[] { 0x40, 0x00 }, 0x0000);
_ = machine.LoadData(new byte[] { 0x20, 0x60, 0x20, 0x30, 0x40, 0x30, 0x60, 0xA0 }, 0x0040, false);
machine.CPU.X = 4;
ushort addr = machine.GetIndexedIndirectAddress();
Assert.Equal(0x3040, addr);
}
[Fact]
public void GetIndirectIndexedAddress_ShouldReturnCorrectAddress()
{
Machine machine = new Machine();
var _ = machine.LoadExecutableData(new byte[] { 0x42, 0x00 }, 0x0000);
_ = machine.LoadData(new byte[] { 0x20, 0x60, 0x20, 0x30, 0x40, 0x30, 0x60, 0xA0 }, 0x0040, false);
machine.CPU.Y = 0x08;
ushort addr = machine.GetIndirectIndexedAddress();
Assert.Equal(0x3028, addr);
}
}
}
|
STACK_EDU
|
M: I fear, therefore I learn - 0x0aff374668
For years, I have spent countless hours in and out of work learning new computer and electronics skills. From embedded control systems and building op-amp PID filters, from signal processing circuits to learning every new JS framework and boutique language like Go, Swift and Rust; frontend, backend, devops, AWS, Digital Ocean, Azure... to re-reading new versions of books on the Linux kernel and crypto and networking... just... everything.

I do it partially because it is interesting, but in the past few years I've come to realize I do it out of fear. I read articles on online forums like this one and I'm terrified of being turned out to pasture without enough skills, so I focus dedicated effort on learning as much as I can because I'm scared of being destitute in the future. Despite no evidence.

Am I alone? Has anyone overcome this career-related hyper-FOMO? Is this a common anxiety?
R: cjfd
I am not sure. I am a c++ programmer. We don't have a sexy new framework every
week or something like that. I am not really worried. I suspect that if I
wanted to I could write c++ until I retire. I changed jobs some time ago to
keep writing c++ instead of being forced into c#. I kind of hate microsoft,
that is mainly why. Rust sounds kind of interesting but I just skimmed the
manual a bit. In my spare time I program coq instead. Just because it is
interesting. I don't have much anxiety.
|
HACKER_NEWS
|
package com.telran.example.manager;
import com.telran.example.model.ContactData;
import org.openqa.selenium.By;
import org.openqa.selenium.NoAlertPresentException;
import org.openqa.selenium.WebDriver;
public class ContactHelper extends HelperBase {
public ContactHelper(WebDriver driver) {
super(driver);
}
public boolean isContactPresent(){
return isElementPresent(By.name("selected[]"));
}
public boolean isAlertPresent() {
try {
driver.switchTo().alert();
return true;
} catch (NoAlertPresentException e) {
return false;
}
}
public void confirmContactCreation() {
driver.findElement(By.name("submit")).click();
}
public void fillContactForm(ContactData contactData, By locator) {
type(By.name("firstname"), contactData.getFirstName());
type(By.name("lastname"), contactData.getLastName());
type(By.name("address"), contactData.getAddress());
type(By.name("home"), contactData.getHomePhone() );
type(By.name("email"), contactData.getEmail());
}
public void initContactCreation() {
driver.findElement(By.linkText("add new")).click();
}
protected void returnToHomePage() {
driver.findElement(By.linkText("home")).click();
}
public void confirmAlert() {
// String alertText = driver.switchTo().alert().getText();
// System.out.println(alertText);
driver.switchTo().alert().accept();
}
public void initContactDeletion() {
driver.findElement(By.xpath("//*[@value='Delete']")).click();
}
public void selectContact() {
driver.findElement(By.xpath("//*[@name='selected[]']")).click();
}
public void openHomePage() {
driver.findElement(By.xpath("//*[@href='./']")).click();
}
protected int getContactsCount() {
return driver.findElements(By.cssSelector("[name='selected[]']")).size();
}
public void selectContactByIndex(int index) {
driver.findElements(By.xpath("//*[@name='selected[]']")).get(index).click();
}
public void clickOnUpdateButton() {
driver.findElement(By.xpath("//input[@value='Update']")).click();
}
public void initContactModification() {
driver.findElement(By.xpath("//img[@title='Edit']")).click();
}
public void createContact() {
initContactCreation();
fillContactForm(new ContactData()
.withLastName("Vasily")
.withFirstName("Ivanov")
.withAddress("Tel-Aviv" )
.withHomePhone("123456789")
.withEmail("aa@dddd.com"), By.name("address"));
confirmContactCreation();
returnToHomePage();
}
}
|
STACK_EDU
|
A Vectorscope is a special type of oscilloscope used in both audio and video applications. Whereas an oscilloscope or waveform monitor normally displays a plot of signal vs. time, a vectorscope displays an X-Y plot of two signals, which can reveal details about the relationship between these two signals. Vectorscopes are highly similar in operation to oscilloscopes operated in X-Y mode; however those used in video applications have specialized graticules, and accept standard television or video signals as input (demodulating and demultiplexing the two components to be analyzed internally).
In video applications, a vectorscope supplements a waveform monitor for the purpose of measuring and testing television signals, regardless of format (NTSC, PAL, SECAM or any number of digital television standards). While a waveform monitor allows a broadcast technician to measure the overall characteristics of a video signal, a vectorscope is used to visualize chrominance, which is encoded into the video signal as a subcarrier of specific frequency. The vectorscope locks exclusively to the chrominance subcarrier in the video signal (at 3.58 MHz for NTSC, or at 4.43 MHz for PAL) to drive its display. In digital applications, a vectorscope instead plots the Cb and Cr channels against each other (these are the two channels in digital formats which contain chroma information).
A vectorscope uses an overlaid circular reference display, or graticule, for visualizing chrominance signals, which is the best method of referring to the QAM scheme used to encode color into a video signal. The actual visual pattern that the incoming chrominance signal draws on the vectorscope is called the trace. Chrominance is measured using two methods—color saturation, encoded as the amplitude, or gain, of the subcarrier signal, and hue, encoded as the subcarrier's phase. The vectorscope's graticule roughly represents saturation as distance from the center of the circle, and hue as the angle, in standard position, around it. The graticule is also embellished with several elements corresponding to the various components of the standard color bars video test signal, including boxes around the circles for the colors in the main bars, and perpendicular lines corresponding to the U and V components of the chrominance signal (and additionally on an NTSC vectorscope, the I and Q components). NTSC vectorscopes have one set of boxes for the color bars, while their PAL counterparts have two sets of boxes, because the R-Y chrominance component in PAL reverses in phase on alternating lines. Another element in the graticule is a fine grid at the nine-o'clock, or -U position, used for measuring differential gain and phase.
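The relationship above can be sketched numerically: the saturation and hue that a vectorscope plots correspond to the magnitude and angle of the vector formed by a chroma sample pair. A simplified illustration below maps 8-bit Cb/Cr samples to that polar form (the function name is illustrative, and real instruments apply gamut scaling and graticule calibration that this sketch ignores):

```python
import math

def chroma_vector(cb, cr):
    """Map an 8-bit Cb/Cr sample pair to (saturation, hue) as a
    vectorscope would plot it: distance from center and angle."""
    u = cb - 128          # center the unsigned 8-bit samples around zero
    v = cr - 128
    saturation = math.hypot(u, v)               # distance from graticule center
    hue = math.degrees(math.atan2(v, u)) % 360  # angle in standard position
    return saturation, hue

# A neutral gray (Cb = Cr = 128) sits at the exact center of the display.
print(chroma_vector(128, 128))  # -> (0.0, 0.0)
```

More saturated colors land further from the center, and shifting only the hue rotates the point around the graticule without changing its radius.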
Often two sets of bar targets are provided: one for color bars at 75% amplitude and one for color bars at 100% amplitude. The 100% bars represent the maximum amplitude (of the composite signal) that composite encoding allows for. 100% bars are not suitable for broadcast and are not broadcast-safe. 75% bars have reduced amplitude and are broadcast-safe.
Some vectorscope models have only one set of bar targets. The vectorscope can be set up for 75% or 100% bars by adjusting the gain so that the color burst vector extends to the "75%" or "100%" marking on the graticule.
The reference signal used for the vectorscope's display is the color burst that is transmitted before each line of video, which for NTSC is defined to have a phase of 180°, corresponding to the nine-o'clock position on the graticule. The actual color burst signal shows up on the vectorscope as a straight line pointing to the left from the center of the graticule. In the case of PAL, the color burst phase alternates between 135° and 225°, resulting in two vectors pointing in the half-past-ten and half-past-seven positions on the graticule, respectively. In digital (and component analog) vectorscopes, colorburst doesn't exist; hence the phase relationship between the colorburst signal and the chroma subcarrier is simply not an issue. A vectorscope for SECAM uses a demodulator similar to the one found in a SECAM receiver to retrieve the U and V colour signals since they are transmitted one at a time (Thomson 8300 Vecamscope).
On older vectorscopes that use cathode ray tubes (CRTs), the graticule was often a silk-screened overlay superimposed over the front surface of the screen. One notable exception was the Tektronix WFM601 series of instruments, which are combined waveform monitors and vectorscopes used to measure CCIR 601 television signals. The waveform-mode graticule of these instruments is implemented with a silkscreen, whereas the vectorscope graticule (consisting only of bar targets, as this family did not support composite video) was drawn on the CRT by the electron beam. Modern instruments have graticules drawn using computer graphics, and both graticule and trace are rendered on an external VGA monitor or an internal VGA-compatible LCD display.
Most modern waveform monitors include vectorscope functionality built in; and many allow the two modes to be displayed side-by-side. The combined device is typically referred to as a waveform monitor, and standalone vectorscopes are rapidly becoming obsolete.
In audio applications, a vectorscope is used to measure the difference between channels of stereo audio signals. One stereo channel drives the horizontal deflection of the display, and the other drives the vertical deflection. A monaural signal, consisting of identical left and right signals, results in a straight line with a gradient of +1. Any stereo separation is visible as a deviation from this line, creating a Lissajous figure. If a straight line appears with a gradient of −1, this indicates that the left and right channels are 180° out of phase.
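The audio behaviour described above can be checked with synthetic samples: fitting a slope to the left/right sample pairs recovers the +1 line for a mono signal and the −1 line when one channel is inverted. This is a toy sketch with made-up sine data, not a description of any real instrument's internals:

```python
import math

# Synthetic stereo test signals (hypothetical data).
n = 100
left = [math.sin(2 * math.pi * i / n) for i in range(n)]
mono_right = list(left)              # identical channels -> mono
inverted_right = [-s for s in left]  # 180 degrees out of phase

def gradient(x, y):
    # Least-squares slope (through the origin) of the Lissajous trace y vs. x
    sxx = sum(a * a for a in x)
    sxy = sum(a * b for a, b in zip(x, y))
    return sxy / sxx

print(gradient(left, mono_right))      # close to +1
print(gradient(left, inverted_right))  # close to -1
```

Any genuine stereo content scatters the points away from these lines into a wider Lissajous figure, which is exactly the deviation the display makes visible.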
|
OPCFW_CODE
|
#include "UIDBTree.h"
#include <iostream>
UIDBTree::UIDBTree(bool duplicatesAllowed)
{
this->duplicatesAllowed = duplicatesAllowed;
Result = new OperationResult();
treeNodeCount = 0;
}
UIDBTree::~UIDBTree()
{
//Recursively delete all nodes via RAII destruction.
if (rootNode.get() != nullptr)
{
rootNode.reset();
}
if (Result != nullptr)
{
delete Result;
Result = nullptr;
}
}
bool UIDBTree::IsEmpty()
{
return (rootNode.get() == nullptr);
}
UIDBNode* UIDBTree::GetRootNode()
{
return rootNode.get();
}
UIDBTree::OperationResult* UIDBTree::GetMaxDepth()
{
unsigned char maxDepth = 0;
UIDBNode* currentNode = rootNode.get();
while (currentNode != nullptr)
{
++maxDepth;
if (currentNode->subtreeMaxDepthBalance <= 0)
{
currentNode = currentNode->leftChildNode.get();
}
else
{
currentNode = currentNode->rightChildNode.get();
}
}
Result->HadError = false;
Result->Depth = maxDepth;
return Result;
}
UIDBTree::OperationResult* UIDBTree::GetLowestNodeByKey(UIDBNode* topNode)
{
Result->HadError = false;
if (topNode == nullptr)
{
Result->Node = nullptr;
return Result;
}
UIDBNode* nextNode = topNode->leftChildNode.get();
while (nextNode != nullptr)
{
topNode = nextNode;
nextNode = nextNode->leftChildNode.get();
}
Result->Node = topNode;
return Result;
}
UIDBTree::OperationResult* UIDBTree::GetHighestNodeByKey(UIDBNode* topNode)
{
Result->HadError = false;
if (topNode == nullptr)
{
Result->Node = nullptr;
return Result;
}
UIDBNode* nextNode = topNode->rightChildNode.get();
while (nextNode != nullptr)
{
topNode = nextNode;
nextNode = topNode->rightChildNode.get();
}
Result->Node = topNode;
return Result;
}
UIDBTree::OperationResult* UIDBTree::FindNodeOrNearbyByKey(ByteVector key)
{
Result->HadError = false;
Result->Node = nullptr;
char comparisonResult;
UIDBNode* currentNode = rootNode.get();
while (currentNode != nullptr)
{
Result->Node = currentNode;
//Compare the current node's key with the given key, to decide what to do next.
comparisonResult = UIDBTree::compareKeys(currentNode->key, key);
if (comparisonResult > 0)
{
//Search key > current key; navigate to the right child.
currentNode = currentNode->rightChildNode.get();
}
else if (comparisonResult < 0)
{
//Search key < current key; navigate to the left child.
currentNode = currentNode->leftChildNode.get();
}
else
{
//Search key == current key; found it!
Result->FoundExactNode = true;
return Result;
}
}
Result->FoundExactNode = false;
return Result;
}
//Returns the sign of the comparison of the two keys: second > first: 1, second < first: -1, second == first: 0.
char UIDBTree::compareKeys(ByteVector firstKey, ByteVector secondKey)
{
return (secondKey > firstKey ? 1 : (secondKey < firstKey ? -1 : 0));
}
|
STACK_EDU
|
#include "table.h"
static HeadNode * create_node(const mpz_t, const mpz_t, const mpz_t);
Table *
table_create(void)
{
Table *table = (Table *) malloc(sizeof(Table));
if (table != NULL) {
table->head = NULL;
table->number_of_rows = 0;
}
return table;
}
Table *
table_copy(Table * const src)
{
Table *dst = table_create();
for (HeadNode *current_row = src->head; current_row != NULL; current_row = current_row->next_row) {
for (Node *current_col = &current_row->node; current_col != NULL; current_col = current_col->next_col) {
table_insert_and_merge(dst, current_col->row, current_col->col, current_col->val, &mpz_add);
}
}
return dst;
}
void
table_destroy(Table *table)
{
HeadNode *current_row = table->head;
while(current_row != NULL) {
HeadNode *node_ptr = current_row;
current_row = current_row->next_row;
while (node_ptr != NULL) {
HeadNode *tmp;
tmp = (HeadNode *) node_ptr->node.next_col;
mpz_clears(node_ptr->node.row, node_ptr->node.col, node_ptr->node.val, NULL);
free(node_ptr);
node_ptr = tmp;
}
}
free(table);
}
void
table_print(const char *name, Table * const table)
{
FILE *fp = fopen(name, "w+");
if (fp == NULL) {
exit(1);
}
for (HeadNode *current_row = table->head; current_row != NULL; current_row = current_row->next_row) {
for (Node *current_col = &current_row->node; current_col != NULL; current_col = current_col->next_col) {
gmp_fprintf(fp, "%Zd", current_col->row);
gmp_fprintf(fp, ",%Zd", current_col->col);
gmp_fprintf(fp, ",%Zd", current_col->val);
fprintf(fp, "\n");
}
}
fclose(fp);
}
size_t
table_size(Table * const table)
{
size_t count = 0;
for (HeadNode *current_row = table->head; current_row != NULL; current_row = current_row->next_row) {
for (Node *current_col = &current_row->node; current_col != NULL; current_col = current_col->next_col) {
++count;
}
}
return count;
}
Node *
table_search(Table * const table, const mpz_t row, const mpz_t col)
{
Node *result = NULL;
for (HeadNode *current_row = table->head; current_row != NULL; current_row = current_row->next_row) {
if (mpz_cmp(current_row->node.row, row) == 0) {
for (Node *current_col = &current_row->node; current_col != NULL; current_col = current_col->next_col) {
if (mpz_cmp(current_col->col, col) == 0) {
result = current_col;
break;
}
}
}
}
return result;
}
static HeadNode *
create_node(const mpz_t row, const mpz_t col, const mpz_t val)
{
HeadNode *new_node = (HeadNode *) malloc(sizeof(HeadNode));
if (new_node != NULL) {
mpz_init_set(new_node->node.row, row);
mpz_init_set(new_node->node.col, col);
mpz_init_set(new_node->node.val, val);
new_node->node.next_col = NULL;
new_node->next_row = NULL;
}
return new_node;
}
void
table_insert_and_merge(Table * const table, const mpz_t row, const mpz_t col, const mpz_t val, void (*f)(mpz_t, const mpz_t, const mpz_t))
{
HeadNode *previous_row = NULL;
HeadNode *current_row = table->head;
HeadNode *new_node = NULL;
int cmp = 0;
int cmp_is_valid = 0;
while (current_row != NULL) {
cmp = mpz_cmp(current_row->node.row, row);
cmp_is_valid = 1;
if (cmp >= 0) {
break;
}
previous_row = current_row;
current_row = current_row->next_row;
}
// current_row == NULL or the row value associated with current is bigger than or equal to row.
if (current_row == table->head && (cmp > 0 || cmp_is_valid == 0)) {
new_node = create_node(row, col, val);
if (new_node == NULL) goto fail;
table->head = new_node;
++table->number_of_rows;
new_node->next_row = current_row;
} else if (current_row == NULL || cmp > 0) {
// current_row is either at the end of the list or current_row is larger than row
new_node = create_node(row, col, val);
if (new_node == NULL) goto fail;
previous_row->next_row = new_node;
new_node->next_row = current_row;
++table->number_of_rows;
} else {
// rows are equal, so we scan the columns
// current_row != NULL, so we can safely dereference
Node *previous_col = NULL;
Node *current_col = &current_row->node;
while (current_col != NULL) {
cmp = mpz_cmp(current_col->col, col);
if (cmp >= 0) {
break;
}
previous_col = current_col;
current_col = current_col->next_col;
}
if (current_col == &current_row->node && cmp > 0) {
new_node = create_node(row, col, val);
if (new_node == NULL) goto fail;
if (current_row == table->head) {
table->head = new_node;
} else {
previous_row->next_row = new_node;
}
new_node->next_row = current_row->next_row;
new_node->node.next_col = current_col;
} else if (current_col == NULL || cmp > 0) {
// current_col is either at the end of the list or current_col is larger than col
new_node = create_node(row, col, val);
if (new_node == NULL) goto fail;
previous_col->next_col = (Node *) new_node;
new_node->node.next_col = current_col;
} else {
// (row, col) pairs are equal, so we merge the two nodes, combining their values with f
(*f)(current_col->val, current_col->val, val);
}
}
fail:
return;
}
|
STACK_EDU
|
What is DevJam?
A developer and packager meeting around coordinating and improving the state of packaging of large scale applications written in the java programming language using the GNU Classpath, gcj and other free java-like tool chains for the various GNU/Linux distributions.
NOTE: this page is really for historical purposes...
See the full history at /History
Java/DevJam/2005/Oldenburg: 23rd to 25th September 2005, Oldenburg, Germany
Java/DevJam/2007/Fosdem: 24th and 25th February 2007 in Brussels, Belgium
Java/DevJam/2008/Fosdem: 23rd and 24th February 2008 in Brussels, Belgium
Java/DevJam/2009/Fosdem: 7th and 8th February 2009 in Brussels, Belgium
The next DevJam meeting
Four Java/DevJam developer and packager meetings have been organized so far. On this page we will try to coordinate a future meeting, but on a longer timeframe for planning, to ensure that those traveling longer distances can adequately make arrangements. There is also a DevJam mailing list that can be used for coordination.
classpath: how it will evolve further, and how we are going to deal with the big merge with OpenJDK (who does it, what, when, how, SCA vs. FSF CA, etc.)
SUN <-> community: sorting out packaging, branding, certification
Java/DevJam meetings try to attract the main packagers of the various GNU/Linux distributions, JPackage representatives, traditional java build and packaging experts (ant/maven) and hackers from the various projects around GNU Classpath, gcj and the various Free Software groups that are working together to Escape the Java Trap. To make the hackfest as successful as possible we are aiming for a group of 20 to 30 people, each of whom will be asked to give a short presentation of their project and packaging efforts, and who are also interested in doing actual hacking during the event to show how the various packaging proposals can/should work out.
- As long as only "local" people come, the costs will be almost negligible. One could expect perhaps 10-50 Euro per local participant who is able to come by car or train (since many won't need any support at all). Participants who have to come from a neighbouring country or state can cost more like 200-400 Euro, participants needing to travel between the Americas and Europe around 700 Euro, and participants needing to travel between Japan or Australia and Europe around 1300 Euro, if they book their tickets in time. With those estimates the current list of people would perhaps end up in the XXX/YYY Euro/Dollar area.
- The cost of this meeting will depend mostly on the travel costs of the people from abroad. It is estimated to be below XXX EURO/Dollar. The budget will include
- Travel for the involved/contributing people who need support
- Reasonable lodging
- Meeting venue
- Instructions for participants seeking sponsorship
- Add an item before this one
|
OPCFW_CODE
|
A mutation that increases fertility among the predators should quickly spread in such a population
probably isn't true if there are tradeoffs involved. It also wouldn't be true if the increased death rates due to overpopulation were concentrated more heavily on the over-breeders and their descendants than on the rest of the population. (This could happen due to spatial structure; for example, if each predator hunted and raised their offspring in their own patch of territory.)
Tradeoffs are the bread and butter of Life History Evolution. ("Life History Traits" are the traits which are most directly related to fitness: age-specific mortality rates, age-specific fecundity rates, age of maturity, etc. Size is an honorary life history trait, as is offspring size, since these often have strong effects on mortality and fecundity.)
One of the most famous examples from the early days of Life History Theory is the Lack Clutch. The ornithologist David Lack noticed that birds never laid as many eggs as they were capable of producing, and wondered why. He proposed that there was a tradeoff between the number of offspring, and the survival of each of the offspring. He suggested that clutch size was designed to maximize the number of offspring who would survive to the age of fledging, and that this would be an intermediate optimum.$^1$
Lack was right that the most fit clutch size would be intermediate. In detail, clutches are usually a little smaller than he predicted, because there are additional tradeoffs he didn't consider.
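Lack's argument can be sketched with toy numbers: if each additional egg lowers every chick's survival, then the expected number of fledglings, n · s(n), peaks at an intermediate clutch size rather than at the maximum the parent could lay. The figures below are purely illustrative, not empirical estimates:

```python
def fledglings(n, s0=0.9, cost=0.08):
    # Toy model: per-chick survival declines linearly with clutch size n
    survival = max(0.0, s0 - cost * n)
    return n * survival

# The most fit clutch size is intermediate, not the largest possible.
best = max(range(1, 13), key=fledglings)
print(best)  # an interior optimum under these toy parameters
```

With steeper survival costs the optimum shifts to smaller clutches; adding the further tradeoffs listed below (parental survival, future clutches) shifts it smaller still, which is why real clutches tend to come in under Lack's prediction.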
Here's a partial list of tradeoffs that are often thought to be important.
- The more offspring there are in this clutch, the less fit each of them will be. For example, maybe the parent has a certain amount of resources to invest in creating offspring; the more of them there are, the smaller each of them has to be (larger animals are often more fecund).$^2$ If the offspring fitness cost is in terms of survival, then this is Lack's hypothesis.
- The more offspring there are in this clutch, the less likely the parent is to survive to future breeding seasons. (This is often called the Cost of Reproduction.) For example, maybe the parent has some resources which it has to allocate between reproduction, and its own maintenance. The optimum will not favor perfect maintenance (this is the basis of the Disposable Soma theory of aging);$^3$ but it also favors intermediate clutches (which are a little smaller than the Lack Clutch).$^4$
- The more offspring there are in this clutch, the less offspring there will be in future clutches. For example, maybe the parent has some resources which it has to allocate between reproduction now and its own continued growth (which will make it more fecund in the future). This tradeoff also has to do with the age at maturity: at what ages should an organism not reproduce at all (devoting all its resources to growth), and at what age should it start?$^5$
(Tradeoffs are often modeled as limiting-resource allocation problems, but this isn't the only sort of tradeoff. A tradeoff occurs whenever getting more of one good thing requires getting less of a different good thing [or more of a bad thing], for whatever reason.)
So much for life history tradeoffs in general. But, in the face of these tradeoffs (and whatever other constraints), which can have different strengths and different shapes etc., different species hit on different strategies. Why? What leads to the large predator life history being optimal for large predators? I don't know. I don't even know if it is known! Life History Evolution is complicated, with many different traits coevolving together. It's hard to model more than a few of these (holding the others constant) at a time. I don't know if anyone's actually explained the conditions that would lead to an entire suite of life history traits, from scratch. (Once upon a time, attempts would have been made to try and explain the large predator life history as one end of the r-selection vs. K-selection spectrum. But that paradigm has fallen out of favour among researchers.$^6$)
Here's another factor, elaborating on the first tradeoff, which I think is probably relevant.
If there are too many predators, then there won't be enough prey, and the average death rate will soon catch up. If this food-shortage cost were applied to every member of the predator population equally, then a mutation for increased fecundity would indeed be favoured: although all members shared the cost (increased mortality), only the mutant would get the benefit (increased fertility); on average the mutant would be fitter and the mutation would spread. (This is ignoring the other costs, such as slower parental growth and faster parental aging.)
But now suppose that you are a predator, and you own a patch of territory; and whether you over- or under-hunt mostly affects your territory. Suppose also that you care for your offspring, or at least let them share the hunting in your territory, rather than sending them elsewhere. Then over-breeding will lead to over-hunting in your territory (or, to hunting the same amount but having lower-quality offspring). But the other territories'll be fine! The brunt of the mortality cost of over-breeding and over-hunting, as well as the fertility benefit, will fall on you and your offspring.
tl;dr: if conditions (eg. due to the population's being spatially structured) are such that the mortality cost of higher fertility applies more strongly to the over-breeders than to the rest of the population, then there will be selection against over-breeding. In addition to costs of overpopulation, higher fertility can also negatively affect the parent's fitness in other ways; for example, by using up resources that they would have used for self-maintenance, or resources that they would have used for growth.
- Lack, "The Significance of Clutch Size". Ibis (1947).
- Smith and Fretwell, "The Optimal Balance Between Offspring Size and Number". American Naturalist (1974).
- Kirkwood and Rose, "Evolution of Senescence: Late Survival Sacrificed for Reproduction". Philosophical Transactions B (1991).
- Charnov and Krebs, "On Clutch Size and Fitness". Ibis (1974).
- Kozlowski, "Optimal Allocation of Resources to Growth and Reproduction: Implications for Age and Size at Maturity". Trends in Ecology and Evolution (1992).
- Reznick, "r- and K-Selection Revisited: The Role of Population Regulation in Life-History Evolution". Ecology (2002).
The classic textbook for Life History Evolution is Stearns' The Evolution of Life Histories (1992). A more recent one (I haven't read it) is Roff's Life History Evolution (2002).
|
OPCFW_CODE
|
Cucumber.io doesn't work properly on VS Code v1.78.2
Does this issue occur when all extensions are disabled?: No
VS Code Version:
OS Version:
Steps to Reproduce:
Update VS code to 1.78.2
My Cucumber extension doesn't work properly when I update VS Code to v1.78.2
Show an error message like this
[Error - 12:47:10 PM] Server initialization failed.
Message: Request initialize failed with message: abort(Assertion failed: bad export type for tree_sitter_php_external_scanner_create: undefined). Build with -s ASSERTIONS=1 for more info.
Code: -32603
[Error - 12:47:10 PM] Cucumber Language Server client: couldn't create connection to server.
Message: Request initialize failed with message: abort(Assertion failed: bad export type for tree_sitter_php_external_scanner_create: undefined). Build with -s ASSERTIONS=1 for more info.
Code: -32603
And my .feature file is like this (the color is black and white), I can't jump to step files, when I click cmd+click on the step feature
same issue as https://github.com/cucumber/vscode/issues/155
Same issue...
Same issue...
same here.. stopped working with the language server crashing..
same issue, is there any update for the fix guys?
uninstall the extension and subscribe to this topic - what else?
To whom it may concern,
The extension is completely broken for me right now. Please provide:
Acknowledgment of the issue.
Status of the analysis and correction effort.
Thanks!
This has already been reported in Issue #155. @xeger has created a patched (hacked) version here: https://github.com/xeger/cucumber-vscode/releases/tag/v1.7.1 that fixes the problem, but it won't be released; you must download it and install it locally if you want to test it.
He considers this to be a hack because the real problem is in the tree-sitter code; it doesn't play well with Electron 22 and the latest version of V8. Once a new tree-sitter is released that fixes the problem we won't need the hack. The tree-sitter problem has been discussed extensively and there is an open PR that appears to fix it:
https://github.com/tree-sitter/node-tree-sitter/issues/126
https://github.com/tree-sitter/node-tree-sitter/pull/127
Thank you for the patch. It fixed the problem, and it connects to the server now. Also, I added the Cucumber (Gherkin) full support. For assertions, I am using TestNG.
Wondering how it shows "Build: Passing" when installing Extension
Any updates on this? We need this for some critical delivery!
Any update on this team? We're depending on this with critical delivery.
@logesh-jarvis @gopi2603 The best we can offer at the moment is the temporary fix that Xeger has provided.
The best place to post questions like this is in the tree-sitter project; a proper fix is entirely dependent on a new release of tree-sitter (or maybe Web Tree-sitter).
This has already been reported in Issue #155. @xeger has created a patched (hacked) version here: https://github.com/xeger/cucumber-vscode/releases/tag/v1.7.1 that fixes the problem, but it won't be released; you must download it and install it locally if you want to test it.
He considers this to be a hack because the real problem is in the tree-sitter code; it doesn't play well with Electron 22 and the latest version of V8. Once a new tree-sitter is released that fixes the problem we won't need the hack. The tree-sitter problem has been discussed extensively and there is an open PR that appears to fix it: tree-sitter/node-tree-sitter#126 tree-sitter/node-tree-sitter#127
both seem closed, what is missing?
@fa-gb
The node-tree-sitter#127 issue was split up into 4 issues/PRs, 2 are already merged and 2 are still open (see: Last statement of #127 on split up).
We (at cucumber/vscode) are blocked until the TreeSitter team can make a new release of node-tree-sitter that works smoothly with Electron 22.
My hack is essentially to disregard missing WASM exports, which could cause crashes or undefined behavior if exports are genuinely missing, as opposed to the exports merely appearing to be missing because of the Electron 22 + TreeSitter + WASM integration bug.
I am very hesitant to release any official cucumber/vscode build that uses the hack, because:
I don't understand the nature of the TreeSitter bug (why do the exports seem to be missing?).
The hack literally monkey patches a TreeSitter distributable in order to ignore the Electron 22 issue.
My apologies on the delay, but the ball is in TreeSitter's court just now. In the meantime, my unofficial/hack build of the VS Code extension should help developers stay productive.
Thank you for keeping us up to date on this.
I am truly sorry for the additional noise this comment might cause for some people. But I really wanted to say thank you for developing this extension in the first place and everybody else for their patience on this.
FTR: I assume @xeger means this: https://github.com/tree-sitter/tree-sitter/issues/2338
Good news everyone: Visual Studio Code has fixed the underlying bug with Electron and WASM, which means our extension will soon work again. I will publish 1.8.0 shortly.
This is fixed in v1.8.0 which should be published to the Visual Studio Code Marketplace soon, or you can install from the .vsix file in the linked release.
Note that you need to be using the Insider Release of Visual Studio Code until they release their September update.
Hi @xeger, when will this version be released to the marketplace? I saw that build failed: https://github.com/cucumber/vscode/actions/runs/6251011331
The Marketplace credentials are expired, so we can't publish the .vsix file to the official distribution channel. I'm working with other contributors to get them replaced; sorry I can't have a specific time frame, but "ASAP" is my goal.
In the meantime, you can visit the release's page in GitHub, download the .vsix file yourself, and install it using "Install from VSIX" in the Code UI:
I celebrate that this issue is resolved and that we expect things to get back to a working state. Still, installing the release file in VS Code Insiders gives an error: Unable to install extension 'cucumberopen.cucumber-official' as it is not compatible with VS Code '1.81.0-insider'. On regular VS Code, it does install.
@selfrefactor the underlying bug (which broke the extension) is an incompatibility between Electron and WASM. VS Code builds from 1.78 - 1.81 will never work with any version of this extension. I recommend upgrading to the latest Insider build, 1.83.
|
GITHUB_ARCHIVE
|
[ntp:questions] Can't get Windows/2000 Client to synchronize with my NTP server on CentOS 4.2
meh at NOSPAMwinfirst.com
Sat Jan 14 01:33:52 UTC 2006
I'm running CentOS 4.2 on a machine which acts as my 'network host', in that it
connects my internal home network to the Internet, provides a firewall (via
iptables), and other basic services, like NTP. I'm running ntp version 4.2.0a.
I was running RedHat 7.3 before this installation, and all was working fine.
I have tested this without the firewall, to make sure the firewall was not
the problem. The firewall should not restrict access to/from the network host
machine and the in-house network.
I have NTP configured to synchronize against three servers: a Stratum 2 server
owned by a friend of mine, and two other public NTP servers.
When I run ntpq -p, I can see that my server is properly synchronizing against
these other servers (also, I can see by looking at syslog that things appear to be working).
However, when synchronization software on my Windows/2000 workstation attempts
to synchronize with my NTP server, it doesn't get a response.
I ran tcpdump on the CentOS machine and watched the NTP UDP port, and saw the
request come in, but nothing was sent back. On the Windows/2000 machine, I've
run w32tm -test -v -once, and get back the following (this is just a snippet,
but I think it's the pertinent part):
> BEGIN:NTPTry -- try
> END Line 2479
> Sending to server 48 bytes...
> NTP: didn't receive datagram
> Logging event 0x8000000B. 15 min until this event is allowed again.
> 0x8000000B reported to System Log in Event Viewer
> END Line 1951
> Time source failed to produce usable timestamp.
When w32tm is making its attempts, I see the following messages
in the syslog (when running ntpd with debugging turned on):
> ntpd: input_handler: if=3 fd=7 length 48 from 0a0a9714 10.10.151.20
> ntpd: receive: at 16 10.10.151.1<-10.10.151.20 restrict 184
> ntpd: receive: at 16 10.10.151.1<-10.10.151.20 mode 3 code 2
> ntpd: addto_syslog: select(): nfound=-1, error: Interrupted system call
(10.10.151.1 is my CentOS server machine, running ntpd, and 10.10.151.20
is my Windows/2000 client machine).
In looking at the source for ntpd, I see that one of the bits represented
by the "184" is Authentication Required, so I tried running ntpd with the
-A flag, but this didn't help.
I've tried two additional clients on the Windows/2000 machine, one of
which allows me to select the protocol version to use, and I tried 4, 3 and
2 - with the same result.
I went through some debugging tips found after googling, but these all were
focused on getting the server to synchronize with its peers, and not about
problems connecting to the server from clients.
I'll include my configuration file below, and would be grateful if someone
could either spot the problem, or give me a pointer to how I can debug client connections.
server my-friends-ntp-server prefer
restrict my-friends-ntp-server-IP mask 255.255.255.255 nomodify
restrict ntp1-IP mask 255.255.255.255 nomodify
restrict ntp2-IP mask 255.255.255.255 nomodify
# This is my internal network: 10.10.151.0/24
restrict 10.10.151.0 mask 255.255.255.0 nomodify notrust notrap
# The local addresses are unrestricted:
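For what it's worth, one thing to check (this is a guess based on the "Authentication Required" restrict bit noted above, not a confirmed diagnosis): in ntp 4.2.x the meaning of the notrust keyword changed from "don't use this host as a sync source" to "require cryptographic authentication from this host". Since the config above applies notrust to the 10.10.151.0/24 network, unauthenticated client requests from the Windows machine would be silently dropped. If the intent is only to stop clients from modifying or querying the server, the line would become:

```
# nomodify/notrap still prevent clients from reconfiguring or trapping the server
restrict 10.10.151.0 mask 255.255.255.0 nomodify notrap
```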
More information about the questions
|
OPCFW_CODE
|
how to read integers (two per line) into separate arrays from a text file in c++
Apologies for any grammatical errors; English isn't my first language.
My project is to read from a file whose lines are laid out as "integer"-"integer" (e.g. 2-6).
The first integer represents the number of times a die needs to be rolled and the second integer represents the number of faces on the die (from the above example, a 6-sided die was rolled 2 times). I am struggling to use arrays to take the two integers from every line so I can manipulate them.
Any ideas?
Many thanks!
related/dupe: https://stackoverflow.com/questions/33106519/ignoring-commas-in-c-cin
It is almost always better to accompany your problem description with your best attempt at solving the problem. This gives potential answerers a baseline from which they can construct answers. Without this baseline, answerers have to make assumptions about what you know and don't know, or start at the very basics like, "Is the computer plugged in?" No one wants that, so do your best to narrow down what does and does not need to be covered with some code.
Does this do what you want? https://onlinegdb.com/Skn7XNILu
First, construct a struct (or class, if you prefer) to bind your data, i.e. the rolls required and the possible faces, into one data structure, something like:
struct Die {
int rolls;
int faces;
};
In C++, if you need a dynamic array, prefer std::vector, since a lot of the internal memory management (like new/delete) is abstracted away. So, what we need is an array of Die elements, i.e. std::vector<Die>. Now what remains is just reading in the data. First, some error handling:
std::ifstream inp("test.txt");
//if error opening file display Error
if(!inp){
std::cout << "Error opening file";
return 1;
}
That gets rid of file-opening errors. Next, we create an empty array of Die elements.
std::vector<Die> arr;
Now, it's simple to read in the elements of the Die, one by one:
Die die;
while(inp>>std::ws>>die.rolls) {
inp.ignore(std::numeric_limits<std::streamsize>::max(), '-');
inp>>std::ws>>die.faces;
arr.push_back(die);
}
The std::ws is just to ignore all whitespaces from the input file line. The inp.ignore() part basically reads and ignores all the characters up to - as specified in the code and then the die.faces is read after having ignored the - character. That's it, that reads in one line of numbers like 2-6. Now that just needs to be repeated till the file has no more data to be read, which is taken care of by the while condition while(inp>>std::ws>>die.rolls).
Thank you so much! For this course we have to use `using namespace std;`, which I think means not writing things in the std::xyz format, but I'll work around it. Thanks again for the help!
@MxwlBltzmn If it helps, you can accept the answer :) Also, regarding std::, you can read about why explicitly qualifying names is the recommended way to use STL constructs, and why using namespace std; is generally advised against.
|
STACK_EXCHANGE
|
Is a visa required for Qeshm island in Iran?
I would like to know if the Qeshm freezone area requires a visa for a 14 day visit.
I didn't try to get any type of visa to Qeshm island; and I hold an Indian passport.
I don't know why you keep asking about "immigration from terminal 2 or 3". You said in your previous question you were arriving on terminal 1, and departing from terminal 3.
Sir, I want to go to Qeshm island next week. My father and all my family are Iranian citizens. This is the first time I am trying to travel to Qeshm from Saudi Arabia. I want to travel from Jeddah airport to Qeshm, but there is no direct or connecting flight. What should I do? Please help me.
Are you an Iranian citizen as well? If so, you do not need a visa to go to Qeshm island. And no matter what your nationality, you do not need a visa to transit through Dubai Airport, since you are not leaving the terminal building.
According to Timatic, the database used by airlines:
Visa exemptions:
Passengers arriving at Kish (KIH) and Qeshm (GSM) islands
for a maximum stay of 14 days.
This does not apply to nationals of Afghanistan, Bangladesh, Canada, Colombia, India, Iraq, Jordan, Nepal,
Pakistan, Somalia, Sri Lanka and USA.
While many sources indicate that the 14-day visa exemptions for Qeshm applies to all nationalities, airlines go by Timatic and would probably deny you boarding without a visa.
Furthermore, according to the travel agency 1stQuest:
The problem is Iran MFA rules are changing all the time, one day they ask for visa in Qeshm and one day they don't. That's why we do not recommend any Indian citizen to travel Qeshm and Kish without visa and guide until they're confident about visa rules
Edit: I trust the answer by @Crazydre more than mine, suggesting you get a visa!
Old answer of mine:
I can find many different sources saying that Qeshm is visa-free for visits of up to 14 days.
All of those sources by themselves look kind of fishy and I would not base a visa decision on any one of them, but seen together, and given that such reputable sources as the airport of Qeshm and the Iranian News Agency are among them, I am almost convinced.
Apparently you have to enter Qeshm directly (i.e. not via Iran mainland), e.g. via flights from Dubai or a soon-to-be-operating ferry from Oman and a deposit is needed according to the first link.
Also check if you need a visa for Dubai!
Most nationalities are eligible to travel to Kish and Qeshm islands without a visa for 14 days. Exceptions are nationals of Afghanistan, Bangladesh, Canada, Colombia, India, Iraq, Jordan, Nepal, Pakistan, Somalia, Sri Lanka and the USA.
You need to take a direct flight from overseas to these islands (for example, from Dubai to Kish or Qeshm), which means the visa-free entry is available only if you enter these islands through their international airports.
While this answer provides useful information (such as need for direct flight), please be sure to add links to relevant sources. Also it is a good idea to disclose your affiliations to any linked sites so that your answer does not get marked as spam. I have edited the answer to add that affiliation (by looking at the site and your account name).
|
STACK_EXCHANGE
|
Recently my team started wondering if there are better ways to control the workflow in our applications. We would like a better way to determine what “state” the application is in and what step it should take next.
XState is an implementation of finite state machine. A Finite State Machine (FSM) is a computational model used to represent and manage the behavior of a system. It consists of a finite number of states, transitions between those states, and actions that are triggered during the transitions.
States
Not to be confused with state in React: a state describes the machine’s status, and a state machine can only be in one state at a time.
Transitions and Events
A machine moves from one state to another through transitions, which are triggered by events. Transitions are deterministic: a transition can only occur between designated states, so given the current state and an event, you always know what the next state will be.
Guards
A guard is a condition that the machine checks when it processes an event. The transition completes only if the guard condition returns true. This helps ensure your machine can only operate under certain conditions.
Actions
While the machine is running, we can run side effects called actions. These are fire-and-forget functions that we can use for things like logging or updating data.
Context
While states hold your finite states, context holds your infinite state. This can include counters, text inputs, and data you get from an API.
Services
These are async functions that your machine can invoke in any state. You can use them to send API calls that later update the context and send an event.
At the end, you might have a state machine that combines all of these pieces.
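To make the vocabulary concrete, here is a tiny hand-rolled sketch of these ideas in plain JavaScript. It is not XState’s actual API (the `transition` helper, `guard`, and `action` names here are illustrative); it only shows how states, events, guards, actions, and context fit together:

```javascript
// A fetch machine: finite states plus "infinite" state in context.
const fetchMachine = {
  initial: 'idle',
  context: { retries: 0 },
  states: {
    idle:    { on: { FETCH: 'loading' } },
    loading: { on: { RESOLVE: 'success', REJECT: 'failure' } },
    failure: {
      on: {
        // Guard: only allow a retry while fewer than 3 attempts were made.
        RETRY: {
          target: 'loading',
          guard: (ctx) => ctx.retries < 3,
          action: (ctx) => { ctx.retries += 1; },  // fire-and-forget side effect
        },
      },
    },
    success: { on: {} },
  },
};

// Tiny interpreter: returns the next state, running guards and actions.
function transition(machine, state, context, event) {
  const edge = machine.states[state].on[event];
  if (!edge) return state;                              // no transition defined
  if (typeof edge === 'string') return edge;            // unconditional transition
  if (edge.guard && !edge.guard(context)) return state; // guard failed, stay put
  if (edge.action) edge.action(context);                // run the action
  return edge.target;
}

// Walk the machine: idle -> loading -> failure -> loading (one retry).
let ctx = { ...fetchMachine.context };
let state = fetchMachine.initial;
state = transition(fetchMachine, state, ctx, 'FETCH');
state = transition(fetchMachine, state, ctx, 'REJECT');
state = transition(fetchMachine, state, ctx, 'RETRY');
console.log(state, ctx.retries); // -> loading 1
```

Note how determinism falls out of the structure: the next state is a pure function of the current state and the event, and the guard keeps the machine from retrying forever.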
Pros and Cons
Pros:
- The state machine runs in a very deterministic and predictable flow
- Logic that involves the machine lives in one place
- The Visualizer really helps to see things clearly and lay out how the machine should work
Cons:
- It is complicated enough that people wonder whether it is needed
- Steep learning curve, as there are many new terms to learn, especially the concept of finite state machines
- Beginners spend most of their time determining what should be a state and what should be stored in context
- The documentation, as of this writing, mixes the current and alpha versions of XState, and the Visualizer sometimes doesn’t understand the newer syntax
- The pro of being explicit comes with the con of being very verbose (which may not be a con if you value clarity)
At the time of writing, we don’t have a clear workflow for our application, which makes drafting a statechart a bit difficult. Before XState, we were using a combination of Zustand and custom hooks for similar use cases, and for now this isn’t a problem. While XState can definitely make everything deterministic and keep everything together, given its steep learning curve it’s more of a replacement than an upgrade for our current implementation. The tool is fine; it just probably doesn’t suit us in our current state.
If you find this useful, please share it and let me know on Twitter.
|
OPCFW_CODE
|
What is Hadoop?
Hadoop is an open-source framework for processing data and storing it for big data applications that run on clustered systems. It is a major part of big data technologies and is used mainly for analytics tasks like data mining, predictive analysis, and machine learning. Hadoop can handle both structured and unstructured data, giving the user more flexibility than relational databases.
Big Data Hadoop in market
Big Data Hadoop can be seen everywhere in today’s IT industry. Almost all the major brands have already started working with Hadoop, and the future looks even better. The job market for Hadoop is hot, and salary packages for certified Hadoop developers in India are very high. Over the next 4 to 5 years, demand for skilled, Hadoop-certified employees is expected to grow massively.
It is no longer a question of whether the IT industry needs Hadoop strategies; it clearly does. This is why professionals around the globe are looking to get trained in Hadoop and certified in this in-demand skill.
Institutes offering Hadoop training and certification
Many institutes provide Hadoop training in Mumbai, Bangalore, Delhi, Chennai, and other major cities in India. The training can be taken either online or in a classroom, depending on what suits you. These institutes have well-designed course structures and train individuals so that they can stand out in the Hadoop industry. The trainers are highly experienced IT professionals who have worked on real-world projects and know the complications that come with Big Data Hadoop. These institutes also offer Hadoop certification, which is very important for getting placed at a reputed firm.
Why go for Hadoop certification?
The IT industry today is struggling to find skilled Hadoop professionals. Companies want assurance that the person they are hiring is capable of handling all forms of their data. Hadoop certification is proof that a candidate is ready to work with all types of data and is reliable for the job.
Some significant benefits of Hadoop certification are:
- Hadoop-certified professionals have an advantage over non-certified candidates, as many job postings are open to certified people only.
- Certified Hadoop professionals also get an edge over non-certified people in terms of salary packages.
- Certification confirms that you have hands-on experience dealing with Hadoop and Big Data.
- Certification gives you the confidence to deal with all types of work that Hadoop can involve.
So, it is high time to think about starting your career in Big Data Hadoop and getting certified to give your career a boost.
|
OPCFW_CODE
|
#!/bin/bash
# This script uses the GitHub Labels REST API
# https://developer.github.com/v3/issues/labels/
# Provide a personal access token that can
# access the source and target repositories.
# This is how you authorize with the GitHub API.
# https://help.github.com/en/articles/creating-a-personal-access-token-for-the-command-line
GH_TOKEN="<your_personal_access_token>"  # never commit a real token to version control
# If you use GitHub Enterprise, change this to "https://<your_domain>/api/v3"
GH_DOMAIN="https://api.github.com"
# The source repository whose labels to copy.
SRC_GH_USER="EnvironmentalSystems"
SRC_GH_REPO="ClearWater"
# The target repository to add or update labels.
TGT_GH_USER="EnvironmentalSystems"
TGT_GH_REPO="ProjectManagement"
# repos=('ACTIONS' 'ACT-ACF' 'CE-QUAL-W2' 'Contracts' 'Distribution' 'EcoFutures' 'General-Environmental-Water-Model' 'Geospatial' 'GitHub' 'HEC-HMS-WQ' 'HEC-RAS-WQ' 'HEC-ResSim-WQ' 'HEC-WAT-CE-QUAL-W2' 'Ideas-and-Communication' 'IDF' 'MiddleEastModeling' 'Papers' 'ProjectManagement' 'Proposals' 'Ras2D_to_TecPlot' 'SatelliteTools' 'Satellite-HAB-Research' 'ScreamingPlants' 'Training' 'WQ-Prototypes-and-Scripts')
# NOTE: only the last uncommented TGT_GH_REPO assignment below takes effect;
# to process every repo in one run, use the array above with the commented for-loop below.
TGT_GH_REPO='ACTIONS'
TGT_GH_REPO='ACT-ACF'
TGT_GH_REPO='CE-QUAL-W2'
TGT_GH_REPO='Contracts'
TGT_GH_REPO='Distribution'
TGT_GH_REPO='EcoFutures'
TGT_GH_REPO='General-Environmental-Water-Model'
TGT_GH_REPO='Geospatial'
TGT_GH_REPO='GitHub'
TGT_GH_REPO='HEC-HMS-WQ'
TGT_GH_REPO='HEC-RAS-WQ'
TGT_GH_REPO='HEC-ResSim-WQ'
TGT_GH_REPO='HEC-WAT-CE-QUAL-W2'
TGT_GH_REPO='Ideas-and-Communication'
TGT_GH_REPO='IDF'
TGT_GH_REPO='MiddleEastModeling'
TGT_GH_REPO='Papers'
TGT_GH_REPO='ProjectManagement'
TGT_GH_REPO='Proposals'
TGT_GH_REPO='Ras2D_to_TecPlot'
TGT_GH_REPO='SatelliteTools'
TGT_GH_REPO='Satellite-HAB-Research'
# TGT_GH_REPO='ScreamingPlants'
# TGT_GH_REPO='Training'
# TGT_GH_REPO='WQ-Prototypes-and-Scripts'
# ---------------------------------------------------------
# Headers used in curl commands
GH_ACCEPT_HEADER="Accept: application/vnd.github.symmetra-preview+json"
GH_AUTH_HEADER="Authorization: Bearer $GH_TOKEN"
# Bash for-loop over JSON array with jq
# https://starkandwayne.com/blog/bash-for-loop-over-json-array-using-jq/
sourceLabelsJson64=$(curl --silent -H "$GH_ACCEPT_HEADER" -H "$GH_AUTH_HEADER" "${GH_DOMAIN}/repos/${SRC_GH_USER}/${SRC_GH_REPO}/labels?per_page=100" | jq '[ .[] | { "name": .name, "color": .color, "description": .description } ]' | jq -r '.[] | @base64' )
# for each label from source repo,
# invoke github api to create or update
# the label in the target repo
for sourceLabelJson64 in $sourceLabelsJson64; do
# base64 decode the json
sourceLabelJson=$(echo ${sourceLabelJson64} | base64 --decode | jq -r '.')
# for TGT_GH_REPO in $repos; do
# try to create the label
# POST /repos/:owner/:repo/labels { name, color, description }
# https://developer.github.com/v3/issues/labels/#create-a-label
createLabelResponse=$(echo $sourceLabelJson | curl --silent -X POST -d @- -H "$GH_ACCEPT_HEADER" -H "$GH_AUTH_HEADER" "${GH_DOMAIN}/repos/${TGT_GH_USER}/${TGT_GH_REPO}/labels")
# if creation failed then the response doesn't include an id and jq returns 'null'
createdLabelId=$(echo $createLabelResponse | jq -r '.id')
# if label wasn't created maybe it's because it already exists, try to update it
if [ "$createdLabelId" == "null" ]
then
updateLabelResponse=$(echo $sourceLabelJson | curl --silent -X PATCH -d @- -H "$GH_ACCEPT_HEADER" -H "$GH_AUTH_HEADER" ${GH_DOMAIN}/repos/${TGT_GH_USER}/${TGT_GH_REPO}/labels/$(echo $sourceLabelJson | jq -r '.name | @uri'))
echo -e "Update label response:\n${updateLabelResponse}\n"
else
echo -e "Create label response:\n${createLabelResponse}\n"
fi
# done
done
|
STACK_EDU
|
Copying contents of a MySQL table to a table in another (local) database
I have two MySQL databases for my site - one is for a production environment and the other, much smaller, is for a testing/development environment. Both have identical schemas (except when I am testing something I intend to change, of course). A small number of the tables are for internationalisation purposes:
TransLanguage - non-English languages
TransModule - modules (bundles of phrases for translation, that can be loaded individually by PHP scripts)
TransPhrase - individual phrases, in English, for potential translation
TranslatedPhrase - translations of phrases that are submitted by volunteers
ChosenTranslatedPhrase - screened translations of phrases.
The volunteers who do translation are all working on the production site, as they are regular users.
I wanted to create a stored procedure that could be used to synchronise the contents of four of these tables - TransLanguage, TransModule, TransPhrase and ChosenTranslatedPhrase - from the production database to the testing database, so as to keep the test environment up-to-date and prevent "unknown phrase" errors from being in the way while testing. My first effort was to create the following procedure in the test database:
CREATE PROCEDURE `SynchroniseTranslations` ()
LANGUAGE SQL
NOT DETERMINISTIC
MODIFIES SQL DATA
SQL SECURITY DEFINER
BEGIN
DELETE FROM `ChosenTranslatedPhrase`;
DELETE FROM `TransPhrase`;
DELETE FROM `TransLanguage`;
DELETE FROM `TransModule`;
INSERT INTO `TransLanguage` SELECT * FROM `PRODUCTION_DB`.`TransLanguage`;
INSERT INTO `TransModule` SELECT * FROM `PRODUCTION_DB`.`TransModule`;
INSERT INTO `TransPhrase` SELECT * FROM `PRODUCTION_DB`.`TransPhrase`;
INSERT INTO `ChosenTranslatedPhrase` SELECT * FROM `PRODUCTION_DB`.`ChosenTranslatedPhrase`;
END
When I try to run this, I get an error message: "SELECT command denied to user 'username'@'localhost' for table 'TransLanguage'". I also tried to create the procedure to work the other way around (that is, to exist as part of the data dictionary for the production database rather than the test database). If I do it that way, I get an identical message, except it tells me I'm denied the DELETE command rather than SELECT.
I have made sure that my user has INSERT, DELETE, SELECT, UPDATE and CREATE ROUTINE privileges on both databases. However, it seems as though MySQL is reluctant to let this user exercise its privileges on both databases at the same time. How come, and is there a way around this?
The answer to this question is extremely simple, disproportionately so given the amount of time I spent typing it up. My problem was merely a case-sensitivity issue: I capitalised letters in the database names where I should not have. MySQL's error messages misled me as to the nature of the problem, because they stated that I was denied permission to carry out the commands I had issued rather than informing me that the databases I was trying to access did not exist. I leave the question up in case it is somehow instructive to someone, somewhere.
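Since the root cause was a case-sensitivity mismatch in the database names, a quick sanity check before blaming privileges is to list the exact names the server knows. These are standard MySQL statements, shown here as a generic illustration rather than anything specific to this schema:

```sql
-- List every database name with its exact capitalisation
SHOW DATABASES;

-- Or query the information schema directly
SELECT SCHEMA_NAME FROM information_schema.SCHEMATA;
```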
|
STACK_EXCHANGE
|
There’s a whole host of software that we use in order to keep Bit Zesty running smoothly. Below is a list of the most important tools we use as a company, and why we think they’re so great.
Slack is a private chat service designed for businesses that need their employees to be able to communicate easily with each other. It’s our central method of communication because employees can leave it running in the background while they work. We create a new Slack channel for each of our projects, making it great for seeking help and having discussions about work. It looks good, is very easy to start using, and can be accessed online or through desktop and mobile apps.
Google Drive and Google Docs
Google Drive provides you with 15GB of free cloud storage as standard, which is great (we can sleep sound with the knowledge that all our important work files are in the cloud) but we’re more interested in the collaborative capability it offers when combined with Google Docs. Not only can all our members of staff share files instantaneously with each other, but we can collaborate on documents, sharing notes and edits, and even work on the same document simultaneously from wherever we may be.
FreeAgent is a piece of accounting software that lets you deal with invoices, bills, banking and taxes while bypassing much of the complexity of accounting. It has timelines to give you an overview of how your business is performing, and a banking feed so you can stay updated on the state of your accounts. You can use it to send invoices, track money owed, and calculate various taxes.
Incredibly easy to use, Trello is a sort of digital Kanban board for teams looking to find a good way to collaborate and manage workflows. What this means is that you can create a series of lists filled with cards, attach files, comments, labels and various other media to those cards, and use them across teams to track the progress of your projects. It’s also free for an unlimited number of users and boards, with the option to add on extra features for a fee.
Buffer is one of the most popular social media managing apps around – it is great for queueing up posts to be shared over a pre-set schedule. As we have a Twitter account, Facebook page and LinkedIn page, Buffer is the best way for us to keep them all updated with minimum effort – letting all of the team contribute to the content that we share with our followers.
GitHub is the largest open source code repository in the world, with over ten million repositories currently being hosted on the site. GitHub stores all of the code we write for our applications in the cloud, and allows us to work in branches, practising revision control so that any changes we make can be made with confidence. This means our team can work safely at an extremely high level of complexity, and gives us access to a world of open source code.
Bootstrap was created by Mark Otto and Jacob Thornton at Twitter, and is the most popular project available on GitHub. It’s a responsive front-end framework originally designed for use internally at Twitter, but then released onto the web as open source at the end of 2011. Bootstrap contains design templates for forms, typography, buttons, navigation and other interface components, and is an indispensable tool for speeding up our development process. It can be used as-is for admin interfaces, or we can add layers of HTML and CSS to customise its appearance, and it is also great for back-end developers, giving them a skeleton design on which they can easily test their work.
There’s a whole range of To-Do list apps available, but Todoist is one of the best. The basic version is free, and available on a ridiculous number of platforms (including web, Android, iOS etc.) meaning you can stay focused on your day’s tasks no matter where you are. For companies and power users it also offers a premium version, giving you a whole range of features like email and location notifications and task search. Although we don’t officially use it in the company as a whole, many of our staff think it’s a great way to organise themselves.
Photoshop really doesn’t need too much of an introduction. It’s still the best photo editing software available today, and we use it for all our editing needs. We also use it for designing some of our web page mock-ups.
Our UX designers like to use Balsamiq to create the majority of their mockups. It’s great for very early prototyping because it’s fast and simple to use, and you can make improvements and edits without much fuss.
The one thing Slack doesn’t provide is group calls, so we use Skype when our team members need to sit down and have a proper chat and getting to the office isn’t an option. For example, we use it in our Scrum stand-ups (team meetings where we update each other on work completed) on a daily basis, so that those team members working from home are as involved as those working from the office.
VirtualBox is a complicated, open-source piece of virtualisation kit which lets you run various operating systems within a window on your computer. We use it mainly to test our products on Internet Explorer.
We use iOS Simulator to test the responsive designs of all our software.
We love Duolingo! Obviously this isn’t one for use during work hours… But if you’re looking to learn a new language then it’s a good way to begin. You can learn a whole range of them by playing fun games on your phone – especially handy if you want to make something of your otherwise dull morning commute.
Peak is another fun/productive app. It creates daily brain workouts for you, which take the form of mini-games based on skills like memory, language and problem-solving. You can check out your total performance result and then try to do a little better each time. Can be addictive.
|
OPCFW_CODE
|
How to manage automated test cases using TestRail and Autify
Jan 06, 2021
Test automation is the key to modern software evolution. Our industry can no longer rely on manual testing to keep up with stakeholder demands for faster release cycles while keeping human-error bugs out of production. As QA teams seek better tools to manage their workflow, many are familiar with TestRail for test case management. Testers use the tool predominantly to manage manual test cases. However, there is a way to manage automated test cases using TestRail and Autify’s artificial intelligence-powered no-code testing platform. Let’s discuss how…
“In today’s software-driven climate, the best tech companies (Facebook, Amazon, Netflix, Google) are releasing software updates thousands of times a day. QA teams are indeed making the investment in automation infrastructure. Test automation factors a large portion of the modern QA team’s budget. Based on a study, companies with more than 500 employees are 2.5x more likely to spend over 75% of their entire QA budget on test automation” -Source: State of DevOps Testing Statistics
What is TestRail?
TestRail is a web-based test case management tool for QA and development teams. It allows teams to manage, track, and organize their testing efforts. Using the tool, QA teams can track the status of tests, milestones, and projects inside a dashboard. There are real-time analytics insights, activity reports, even boost productivity with personalized to-do lists, filters, and email notifications. TestRail integrates with many other tools including Jira, GitHub, Bugzilla, Ranorex Studio, and more.
Here are some benefits of the test case management tool:
- Centralized test management to collaborate with stakeholders
- Easily execute tests and track results
- Get reports, metrics, and real-time insights
- Works with Agile and waterfall methodologies
- Integrate with other tools such as bug trackers and automated testing platforms (such as with Autify)
Pricing starts at $34/month/user for the Professional Cloud and $351/year/user for the Professional Server. There is a lower cost per user discount available.
Why is it necessary?
Regarding manual tests, many teams still manage tests in Excel spreadsheets; according to one survey, as many as 47% of companies do. Managing tests in spreadsheets can become very cumbersome. For example, you have to add columns within the spreadsheet after each test, and the file size grows as you add more tests, increasing load times. This is neither scalable nor productive.
It is necessary to use a comprehensive tool like TestRail to manage all of your test cases in one place. Again, it can be quite unproductive to view manual tests in one portal then visit another platform to see all of your automated test cases. With TestRail, you can view both manual tests and automated tests with the integration of Autify.
How it works
As you can see in the diagram above, a one-direction connector syncs data from Autify to TestRail: test scenarios and test results flow from our test automation platform into TestRail's test management tool.
How to manage automated test cases in TestRail
While you can see all of your manual test cases in TestRail, it is also possible to manage your automated test cases using Autify's integration. Here is a complete guide.
Of course, you would need an Autify account. We offer a free 14-day trial, so grab an account today. Then you will need to gain your Personal Access Token from Autify and enable TestRail’s API.
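As a sketch of what the TestRail side of such an integration does under the hood, the snippet below builds a request for TestRail's `add_result_for_case` endpoint (API v2), which Autify's connector effectively calls on your behalf. The instance URL, run and case IDs, and credentials here are placeholders, not real values:

```python
import base64
import json

TESTRAIL_URL = "https://example.testrail.io"  # hypothetical instance URL
STATUS_PASSED, STATUS_FAILED = 1, 5           # TestRail's built-in status IDs

def build_result_request(run_id, case_id, passed, comment, user, api_key):
    """Build the URL, headers, and JSON body for TestRail's
    add_result_for_case endpoint (API v2, HTTP basic auth)."""
    url = f"{TESTRAIL_URL}/index.php?/api/v2/add_result_for_case/{run_id}/{case_id}"
    token = base64.b64encode(f"{user}:{api_key}".encode()).decode()
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Basic {token}",
    }
    body = json.dumps({
        "status_id": STATUS_PASSED if passed else STATUS_FAILED,
        "comment": comment,
    })
    return url, headers, body

# Example: report a passing automated scenario against run 12, case 345.
url, headers, body = build_result_request(
    12, 345, passed=True, comment="Synced from Autify",
    user="qa@example.com", api_key="token")
```

You would POST that body to the URL with any HTTP client; with the Autify integration in place, this bookkeeping happens automatically.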
In order to keep up with modern release cycles, your tools cannot stifle progress. That is why software like TestRail is a necessity for any quality assurance team: it is great for tracking manual and automated test cases, offers insights and reporting capabilities, integrates with other DevOps tools, and pairs seamlessly with Autify to manage automated tests alongside manual cases.
P.S. if you want to keep your team productive, have them try these top testing tools every QA department should have!
Bittrex trading bot
Bittrex Global is one of the well-known exchanges for digital assets. It serves both institutional traders and individual or novice traders, offering a seamless experience for investing across cryptocurrencies. The company is headquartered near the financial centre of Zurich, in the Principality of Liechtenstein.
Bittrex is built on top of a custom trading engine designed for scalability and to ensure that orders are executed fully and in real time. Bittrex also supports third-party trading platforms and algorithmic trading via its extensive APIs.
Bittrex provides an automated monitoring platform that allows it to give users fast transaction availability, including updates on balance, trade, and holding information.
In regard to fees, Bittrex Global offers its customers a fee schedule with lower rates as users trade more: simply put, the more they trade, the more they save.
Bittrex provides a comprehensive and powerful API consisting of REST endpoints for transactional operations and a complementary Websocket service providing streaming market and user data updates.
The Bittrex API enforces call limits on all third-party endpoints to ensure the efficiency and availability of the platform for integrated users. API users may make a maximum of 60 API calls per minute; calls beyond the limit fail, and the limit resets at the start of the next minute after the initial call.
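A client can respect that published 60-calls-per-minute limit with a small sliding-window throttle placed in front of every request. This is an illustrative sketch, not part of Bittrex's API; the clock and sleep functions are injectable only so the behavior can be tested without real waiting:

```python
import time
from collections import deque

class RateLimiter:
    """Client-side guard for an API limit of `max_calls` requests per
    rolling `period`-second window. acquire() sleeps just long enough
    that the next call stays inside the limit."""

    def __init__(self, max_calls=60, period=60.0,
                 clock=time.monotonic, sleep=time.sleep):
        self.max_calls = max_calls
        self.period = period
        self.clock = clock        # injectable for testing
        self.sleep = sleep
        self.calls = deque()      # timestamps of recent calls

    def _evict(self, now):
        # Drop timestamps that have aged out of the rolling window.
        while self.calls and now - self.calls[0] >= self.period:
            self.calls.popleft()

    def acquire(self):
        now = self.clock()
        self._evict(now)
        if len(self.calls) >= self.max_calls:
            # Wait until the oldest call falls out of the window.
            self.sleep(self.period - (now - self.calls[0]))
            now = self.clock()
            self._evict(now)
        self.calls.append(now)
```

Call `limiter.acquire()` immediately before each REST request; the 61st call in a minute then blocks instead of failing.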
Bittrex enables corporate and high-volume accounts to contact customer support for additional information to ensure that they may continue operating at an optimal level.
As mentioned previously, most order management at Bittrex is done through the REST API. Operations via REST include market, limit, ceiling, good-til-cancelled, immediate-or-cancel, fill-or-kill, post-only, and conditional orders.
The v3 WebSocket at Bittrex is developed to allow a client to subscribe to a live stream of updates about things that are changing in the system instead of needing to poll the REST API looking for updates. It is designed to complement and be used in conjunction with the v3 REST API. As such the messages sent from the socket include payloads that are formatted to match the corresponding data models from the v3 REST API.
At Empirica, we have integrated our trading bots with the Bittrex API, so that our customers can use them out of the box. Here are some trading bots that can be applied through API integration on Bittrex:
- Market Making bot: quotes continuous passive two-sided prices to provide liquidity, while also being able to make a profit in the process.
- Arbitrage bot: takes advantage of small differences between markets. It is a trading activity that makes profits by exploiting the price differences of identical or similar financial instruments on different markets.
- Price mirroring bot: uses liquidity and hedging possibilities from other markets to make markets profitably.
- Triangular Arbitrage bot: using this bot, a trader can exploit arbitrage opportunities across three different FX or cryptocurrency pairs.
- Basket Orders bot: with this bot, it is possible to execute trades on multiple coins at the same time with the possibility to hedge against other coins.
- VWAP bot: using this bot, a trader can achieve the best price for a large order by splitting it into multiple smaller ones throughout the trading day.
- Smart Order Routing bot: with this bot, the trader can find the best price for an order across all crypto exchanges and execute it.
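To make the VWAP idea above concrete, here is a minimal sketch of just the slicing step: allocating a parent order across time buckets in proportion to a historical intraday volume profile, so the fills track the volume-weighted average price. A production VWAP bot does far more (live volume forecasting, limit-price logic, venue selection); the profile numbers here are hypothetical:

```python
def vwap_slices(total_qty, volume_profile):
    """Split total_qty (integer units) across time buckets in proportion
    to volume_profile; child quantities always sum to total_qty."""
    total_vol = sum(volume_profile)
    raw = [total_qty * v / total_vol for v in volume_profile]
    slices = [int(x) for x in raw]
    # Hand rounding leftovers to the buckets with the largest remainders.
    leftover = total_qty - sum(slices)
    by_remainder = sorted(range(len(raw)),
                          key=lambda i: raw[i] - slices[i], reverse=True)
    for i in by_remainder[:leftover]:
        slices[i] += 1
    return slices

# 100 units over four buckets whose historical volumes were 1:2:3:4
print(vwap_slices(100, [1, 2, 3, 4]))  # [10, 20, 30, 40]
```

Each slice would then be sent as a separate child order over the corresponding time bucket.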
In case you need help from professional software developers to build proprietary trading bots and integrate them with the API of Bittrex or other crypto exchanges, you can consult with our quant team.
Have you implemented other trading bots?
We have implemented the following bots and algorithms:
Problems installing and running Frostwire on Acer One running Linpus Linux
Hi guys, total and utter noob :newbie: to Linux here. I've tried a few different things to install and run Frostwire, but really only have moderate knowledge of Windows.
I've downloaded the latest "stable" version of Frostwire, have done the install as far as I can tell, but can't get it to start, either by clicking it directly, or by right-clicking the torrent file and selecting Frostwire as the program I want to open it with.
I am using an Acer One running Linpus, I have done the basic hack to allow right click on the desktop to bring up a better option menu so programs can be installed, but now that it is on the computer I am struggling to get it to open. I have re-installed it to make sure it is a Linux version (downloaded the option for a Fedora version - I believe I did the right thing as Linpus is Fedora based?).
Please bear in mind I am a total noob with this, and things will have to be explained fully - I know nothing about "code" and lots of other things mentioned in other threads while trying to find an answer to my problem.
This Linpus system seemed very easy to use until I wanted to do more than just what was pre-installed - will it all be so complicated? Or is there a Windows clone that will be easier - sorry for so many questions but it's left me rather confuzzled!
Not too many of us are familiar with Linpus, which I believe is based on Fedora. Most of the people here who have netbooks have installed different versions of Linux than what came on them. We all have our favorite flavor of Linux, don't ya know.
I hate to say it but if you don't find the help you need here (which is not normally the case, support here is excellent most of the time) you might want to try the Acer Aspire One forums. http://www.aspireoneuser.com/forum/
If you weren't so new I would suggest you change the version of Linux on your netbook, but it is still a little tricky getting everything to work since the netbooks are so new - it might be a step or two above your current Linux skill level. Once you have a little more experience with Linux, though, I encourage you to try some other versions. I loaded Debian Linux (Lenny) onto my Acer Aspire One and I love it, but then that's the flavor of Linux that I like best.
Best of luck in your search and Welcome to LinuxQuestions.org
|
Copyright Michael Karbo and ELI Aps., Denmark, Europe.
Chapter 21. Advice on RAM
RAM can be a tricky thing to work out. In this chapter I will give a couple of tips to anyone having to choose between the various RAM products.
Of course you want to choose the best and fastest RAM. It’s just not that easy to work out what type of RAM is the fastest in any given situation.
We can start by looking at the theoretical maximum bandwidth for the various systems. This is easy to calculate by multiplying the clock frequency by the bus width. This gives:
Fig. 140. The highest possible bandwidth (peak bandwidth) for the various types of RAM.
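As a worked example of the clock-frequency-times-bus-width rule, take a DDR400 module: it performs 400 million transfers per second on a 64-bit (8-byte) bus, giving 400 × 64 / 8 = 3,200 MB/s, which is exactly why such modules are sold under the name "PC3200". The little calculation below just mechanizes that multiplication:

```python
def peak_bandwidth_mb_s(transfers_per_sec_millions, bus_width_bits):
    """Peak (theoretical) bandwidth in MB/s: transfer rate in millions
    of transfers per second, times the bus width in bytes."""
    return transfers_per_sec_millions * bus_width_bits // 8

# DDR400 ("PC3200"): 400 million transfers/s on a 64-bit bus.
print(peak_bandwidth_mb_s(400, 64))  # 3200

# Ordinary PC133 SDRAM: 133 million transfers/s on a 64-bit bus.
print(peak_bandwidth_mb_s(133, 64))  # 1064
```

Remember that these are peak figures; real-world throughput is always lower because of latencies and refresh cycles.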
However, RAM also has to match the motherboard, chipset and the CPU system bus. You can try experimenting with overclocking, where you intentionally increase the system bus clock frequency. That will mean you need faster RAM than what is normally used in a given motherboard. However, normally, we simply have to stick to the type of RAM currently recommended for the chosen motherboard and CPU.
The type of RAM is one thing; the RAM quality is something else. There are enormous differences in RAM prices, and there are also differences in quality. And since it is important to have a lot of RAM, and it is generally expensive, you have to shop around.
One of the advantages of buying a clone PC (whether you build it yourself or buy it complete) is that you can use standard RAM. The brand name suppliers (like IBM and Compaq) use their own RAM, which can be several times more expensive than the standard product. The reason for this is that the RAM modules have to meet very specific specifications. That means that out of a particular production run, only 20% may be “good enough”, and that makes them expensive.
Over the years I have experimented with many types of RAM in many combinations. In my experience, for desktop PC’s (not servers), you can use standard RAM without problems. But follow these precautions:
How much RAM?
RAM has a very big impact on a PC’s capacity. So if you have to choose between the fastest CPU, or more RAM, I would definitely recommend that you go for the RAM. Some will choose the fastest CPU, with the expectation of buying extra RAM later, “when the price falls again”. You can also go that way, but ideally, you should get enough RAM from the beginning. But how much is that?
If you still use Windows 98, then 256 MB is enough. The system can’t normally make use of any more, so more would be a waste. For the much better Windows 2000 operating system, you should ideally have at least 512 MB RAM; it runs fine with this, but of course 1024 MB or more is better. The same goes for Windows XP:
Fig. 141. Recommended amount of PC RAM, which has to be matched to the operating system.
The advantage of having enough RAM is that you avoid swapping. When Windows doesn’t have any more free RAM, it begins to artificially increase the amount of RAM using a swap file. The swap file is stored on the hard disk, and leads to much slower performance than if there were sufficient RAM in the PC.
Over the years there have been many myths, such as ”Windows 98 can’t use more than 128 MB of RAM”, etc. The issue is RAM addressing.
Below are the three components, each of which has an upper limit to how much RAM it can address (access):
Windows 95/98 has always been able to access lots of RAM, at least in theory. That its memory management is so poor that it is often pointless to use more than 256 MB is another matter. Windows NT/2000 and XP can manage gigabytes of RAM, so there are no limits at the moment.
In Windows XP, you press Control+Alt+Delete to open the Task Manager. A dialog box is then displayed whose tabs include Processes and Performance, which provide information on RAM usage:
Under the Processes tab, you can see how much RAM each program is using at the moment. In my case, the image browser, FotoAlbum is using 73 MB, Photoshop, 51 MB, etc., as shown in Fig. 143.
Modern motherboards for desktop use can normally address in the region of 1½-3 GB RAM, and that is more than adequate for most people. Server motherboards with special chipsets can address much more.

Fig. 143. This window shows how much RAM each program is using (Windows XP).
Standard motherboards normally have a limited number of RAM sockets. If, for example, there are only three, you cannot use any more than three RAM modules (e.g. 3 x 256 MB or 3 x 512 MB).
CPU’s have also always had an upper limit to how much RAM they can address:
Fig. 144. The width of the CPU’s address bus determines the maximum amount of RAM that can be used.
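The address-bus rule is easy to check for yourself: each extra address line doubles the addressable space, so an n-bit address bus can reach 2^n bytes. A 32-bit bus therefore tops out at 4 GB, and the 36-bit bus found on Pentium Pro-class and later CPUs reaches 64 GB:

```python
def max_addressable_bytes(address_bus_bits):
    """Each address line doubles the addressable space,
    so an n-bit bus can address 2**n bytes."""
    return 2 ** address_bus_bits

GB = 1024 ** 3  # using binary gigabytes, as RAM sizes are quoted

print(max_addressable_bytes(32) // GB)  # 4
print(max_addressable_bytes(36) // GB)  # 64
```

Whether the operating system and chipset actually let you install that much is, as described above, a separate question.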
Let me conclude this examination with a quote about RAM quantity:

“640K ought to be enough for anybody.”
- Bill Gates, 1981.