Document
stringlengths 395
24.5k
| Source
stringclasses 6
values |
|---|---|
Get feedback with pull requests
Pull requests combine the review and merge of code into a single collaborative process. Once a feature is added or a bug is fixed, the developer creates a pull request to begin the process of merging the changes into the upstream branch. Other team members are then given the opportunity to review and approve the code before it is accepted. Use pull requests to review works in progress and get early feedback on changes. There's no commitment to merge the changes as the owner can abandon the pull request at any time.
Get code reviewed
The code review done in a pull request isn't just to find obvious bugs; that's what tests are for. A good code review catches less obvious problems that could lead to costly issues later. Code reviews help protect the team from bad merges and broken builds that sap the team's productivity. Reviews catch these problems before the merge, protecting important branches from unwanted changes.
Cross-pollinate expertise and spread problem solving strategies by using a wide range of reviewers in code reviews. Diffusing skills and knowledge makes the team stronger and more resilient.
Give great feedback
High quality reviews start with high quality feedback. The keys to great feedback in a pull request are:
- Have the right people review the pull request
- Make sure that reviewers know what the code does
- Give actionable, constructive feedback
- Reply to comments in a timely manner
When assigning reviewers to a pull request, make sure to select the right set of reviewers. Reviewers should know how the code works, but also include developers working in other areas so they can share their ideas. Provide a clear description of the changes and provide a build of the code that has the fix or feature running in it. Reviewers should make an effort to provide feedback on changes they don't agree with. Identify the issue and give a specific suggestions on what could be done differently. This feedback has clear intent and is easy for the owner of the pull request to understand. The pull request owner should reply to comments, accepting suggestions or explaining why they decline to apply them. Sometimes suggestion are good, but may be outside the scope of the pull request. Take these suggestions and create new work items and feature branches separate from the pull request to make those changes.
Protect branches with policies
There are a few critical branches in a repo that the team relies on always being in good shape, such as
main branch. Teams can require pull requests to make any changes on these branches on platforms like
and Azure DevOps. Developers pushing changes directly to the
protected branches will have their pushes rejected.
Add additional conditions to pull requests to enforce a higher level of code quality in key branches. A clean build of the merged code and approval from multiple reviewers are some extra requirements often employed to protect key branches.
GitHub has extensive documentation on proposing changes to your work with pull requests.
Read more about giving great feedback in code reviews and using pull request templates to provide guidance to your reviewers. Azure DevOps also offers a rich pull request experience that's easy to use and scales as needed.
|
OPCFW_CODE
|
[gdal-dev] Proposed addition of OGR_L_GetName() /
warmerdam at pobox.com
Sat Aug 14 13:57:52 EDT 2010
On Sat, Aug 14, 2010 at 11:18 AM, Even Rouault
<even.rouault at mines-paris.org> wrote:
> Hi gdal-devs,
> Here's a proposal for adding OGR_L_GetName() / OGRLayer::GetName().
> The proposed patch in http://trac.osgeo.org/gdal/ticket/3719 introduces a new
> virtual method GetName?() at the OGRLayer level.
> The semantics of this method is to return the same result as GetLayerDefn()-
>>GetName(). This is indeed the default implementation of the OGRLayer class
> So, what is it usefull for ? A few drivers, like PG since
> http://trac.osgeo.org/gdal/changeset/20277 , can benefit from GetName() by
> overloading it, without needing to fetch the layer definition, which is an
> operation that cost a few SQL requests per layer.
> So the new method can be usefull to present the list of layer names of a PG
> This would also be very handy for a OGR WFS driver I'm writing. It would save
> a DescribeFeatureType request for each layer. Just the GetCapabilities request
> would be needed.
> Apart from the PG driver, there's only another driver (LIBKML) in trunk that
> overrides GetName(), and its semantics matches the one explained above. So
> there's no backward compatibility issue.
> IMHO, this change seems to be a bit light to deserve a whole RFC, but I wanted
> to hear from your feedback before applying it.
I'm generally ok with this change. I would like to see it made explicit that
the name returned by OGRLayer::GetName() must always match the name
returned by the OGRFeatureDefn::GetName().
I am also wondering if there are any other aspects that should be handled
this way. Many applications want to show a list of laye names, and the
geometry type of the layer (ie. ogrinfo, QGIS, etc). Should we also be
making the geometry type a virtual method fetch on the layer?
I can see the change you propose being of benefit to essentially all the
I set the clouds in motion - turn up | Frank Warmerdam, warmerdam at pobox.com
light and sound - activate the windows | http://pobox.com/~warmerdam
and watch the world go round - Rush | Geospatial Programmer for Rent
More information about the gdal-dev
|
OPCFW_CODE
|
I'll recommend the test by the "Institut für Testforschung und Testentwicklung" in Leipzig:
You can find alternatives if you search the Internet for "Wortschatztest".
You have to put the verb at the second possition. There are also a pair of grammar mistakes (Thema is neutral and Gesellschaft, feminine):
Meiner Meinung nach ist die Gesellschaft kinderfeindlich.
Meine Meinung über dieses Thema ist, dass die Gesellschaft kinderfeindlich ist.
To my taste, some context is missing: say, which society? in which ...
I want to add something that the other answer didn't focus on.
What can be confusing for a student of German is the dangling nach.
Usually nach is either a preposition followed by a noun-phrase
Ich komme um Viertel nach vier.
Ich gehe nach Hause.
Es riecht nach Fisch.
or an adverb that is part of the verb
Ich gucke etwas im Wörterbuch nach. – nachgucken
Depending on your vocabulary, get an Open dictionary. Free software comes with such dictionaries.
Get the number of entries in the dictionary.
wc -l /usr/share/hunspell/de_DE.dic
Now take 100 words by random from the dictionary, and count how much you know. This schould be a good estimate. For higher acccuracy take ...
TestDaF is what you're looking for.
It's the most appropriate analog of TOEFL since it's accepted by all German universities. And even the test-taking procedure is similar: you will be talking to a computer (and not a human being like in IELTS).
You should prefer the certificates offered by the Goethe-Institute.
The acceptance of a language certificate really depends on the company or university, especially concerning the level. Ask before you apply. But as the Goethe-Insititut is the official institute for language and culture of the Federal Republic of Germany, their certificates have the highest ...
Take the DaF exam. It looks better on your CV.
Since your main motivation is to make your "Lebenslauf" look better - try to take the perspective of a hiring company:
No German employer will care for a language exam/certificate. Just put a line like this in your CV: "Deutsch - fließend in Wort und Schrift". If your other qualifications are ...
Besides TestDaF, three tests I've heard of:
Contrary to TestDaF, which is standard even worldwide, DSH (Deutsche Sprachprüfung für den Hochschulzugang) varies from university to university. And the name explains what it is.
The certificates of Goethe-Institut: Universities ask for a C1 or C2 level.
An option, which I don't know if is valid in Germany, is ...
I guess you are looking for:
Wortschatz, Grundwortschatz or Basiswortschatz.
These terms are used in the context of educating German at school, usually at primary school.
Here is an example from Brandenburg and Berlin with the Grundwortschatz for grades 1 and 2 as well as 3 and 4.
This document lists the Mindestwortschatz for the primary school in ...
Considering that DSH stands for Deutsche Sprachprüfung für den Hochschulzugang it makes some sense that they'll only test persons showing an interest in studying at their university... The DaF exam and quite a few others are considered equivalents, by the way.
Keeping all that in mind, some universities allow non-students to sit the exam as well. This is ...
The relevant levels of language ability according to the Common European Framework of Reference are C2, C1, and B2, with C2 being the hardest, and B2 the easiest.
So "Goethe-Zertifikat C 2 (Goethe)" is the hardest.
"DSH-3" is the upper end of C1, and is next in line. "TELC Deutsch C1 Hochschule" is equivalent to this.
"DSD II" puts you at the breakpoint ...
As can be read here, DSH2 means a fluency level of "C1", which means being able to read long complex texts, recognizing implicitly given meanings, speak fluently, etc. (There is only one better level, C2, roughly approaching a native speaker.) If you mistook 2 for being a grade, you have probably still some effort to invest, to reach DSH2.
I doubt that even ...
Note: This might not address the question as stated in the title, but it may help with the underlying problem, which I understand to be “What is an adequate level of German to teach at a German university?”
I have worked in Academia for two years, with the working language being German, and am currently involved in recruiting software developers for a ...
I think there are no formal requirements as a Goethe Certificate.
The following is a typical example. Quote:
Gute Sprachkenntnisse in Deutsch und Englisch werden vorausgesetzt.
Perhaps you should contact a Goethe-Institut, I am sure they have adequate information. You may also have a look at this website where you can find hints concerning German ...
While I understand, that this is not what you ask for, others coming across the question may be interested:
I never heard something of something corresponding to your 12 levels. The closest I'm aware of are the word lists of the language proficiency levels, one example is B1-Liste des Goethe-Instituts.
For books intended for children there are publisher ...
It depends on the instituition you take the test at. Goethe-Institut Ungarn, for example says three weeks after the written exam until you know your grades and 30 days for getting the printed exam result, Carl Duisberg says one month but you can pay additional 150€ at the registration to get your printed exam result two weeks after the written exam.
|
OPCFW_CODE
|
ChatOps (video) has been a major influence on our team to adopt a DevOps approach. If you’re not familiar with the concept, it is a term coined by GitHub to describe their approach of “putting tools in the middle of the conversation”. In a nutshell, it means using the team’s chat as a multi-user terminal.
Want to deploy your app? Just send a message and a bot will take care of running things for you. Got a new commit in? The bot will let you know if your changes broke the build. Add to this the social dimension of using it as a bonding tool (through the countless gif and meme scripts available), and you have the perfect tool for introducing some of the fundamentals of DevOps. Anybody on the team is made aware of what’s happening, and potentially encouraged to contribute, may this be code or infrastructure related.
We love Hubot
Hubot, GitHub’s very own chat bot, features hundreds of integrations. Pretty much any popular service under the sun has a script for it, and will run with most chat services (Campfire, Flowdock, HipChat, IRC, XMPP, Hall, Slack, QQ, … even AIM). The barriers to entries to add your own custom scripts for Hubot are really low; whip out a few lines of Javscript, restart your bot and done.
You can have a lot of fun with it; we can get an air quality report coming out of a sensor outside of our office, and are working on hooking up an Arduino microcontroller to let us open our door to visitors right from our chat. We also occasionally get work done, getting build status, commit notifications and deploying apps (in between copious amounts of animated gifs).
While Hubot is pretty neat, we thought that we could make things even simpler for others to get started with it:
A bot is lightweight enough that it doesn’t necessarily require its own instance, but is something you want to keep moderately secure and stable as it sits at the center of a lot of your automation/ops.
It can be slightly austere to configure. Documentation is more often than not fairly dry.
Juggling with multiple bots can make things increasingly painful.
This is why we built ChatO.ps, a simple service that lets you create and customize (Hubot) bots. You can easily browse available scripts and set things up through a straightforward interface. We actually even support custom scripts (which we use ourselves for custom integrations like the air quality sensor).
It’s still very much beta, but we thought we’d put it out there and see what people think. We have a whole backlog of features we’d like to add, first of which would be the ability to add custom scripts directly through our UI (for now users must make their custom scripts available to the bot in a Git repository). We’re also cleaning up the documentation of various scripts, fixing some of the older ones and thinking of ways to make the overall experience even simpler.
|
OPCFW_CODE
|
07-07-2015 05:22 PM
When we use this code to delete a folder :
call system(" rm -rf DIR1/DIR2/&folder1");
If sas does not find the macro variable &folder1, it will delete the directory : DIR1, why ?
Is there any solution to avoid the problem.
07-07-2015 05:27 PM
SAS doesn't do anything but pass a command to the system, you need to ensure the command is what you want.
If &folder1 doesn't exist it becomes:
rm -rf DIR1/DIR2/
-r, -R, --recursive remove the contents of directories recursively (the force option must often be used to successfully run rm recursively)
-f, --force ignore nonexistent files, never prompt
Find the correct unix command that generates the result you want.
07-07-2015 07:03 PM
no, but i want to know , if there is "an option or idea " to avoid the option -rf , if it can not find the macro : variable : &folder1.
Now, if it can not find the &folder, it will delete : DIR2; it is really a problem!!!
07-08-2015 01:51 AM
The Unix "rm -rf ..." will delete all what is in that path. The way of your coding is resulting in possible unwanted sideeffects deleting the wrong things.
The only protections you have for unintended sideeffects are:
- OS security
- Defensive in coding.
Defensive coding with delering in your example.
Position the current dir in a location you do not want to delete but only some members, Than delete those.
As you have a position current directory being set you are working relative to that one. You could be more trustworthy by an absolute path.
For better traceability I would use piping instead of just system calls. The "2>1" will let show up possible error messages in your sas log with all common output of all commands. The "cd " command should work fine when not de deletion of only &folder is attempted I a wrong location. When folder is wrong it will try to delete just something in you current dir. Imagine the rootfolder is your current dir and you will delete all below that (rm -rf *). All files you have access to will be deleted. It are Unix commands that will cause that not SAS.
filename system "cd DIR1/DIR2 ; rm -rf &folder1 2>1" ;
data _null_ ;
infile system ; input ; put _infile_ ;
07-08-2015 03:52 PM
Thank you for your message.
Imagine that, we can forget to set this check if "&folder1 " ne " " then
is not an option for user or administraror to stop the run of "rm" in the synthax :call system("rm -rf DIR1/DIR2/&folder1")
07-09-2015 05:34 PM
No, I need usually a solution.
As I said, I want sas to stop the running of "rm", if I have a macro variable null, even if I have not written the condition if "&folder1 " ne " "
07-10-2015 11:01 AM
What you want sounds impossible.
Could you use operating system commands to change the permissions on the DIR1 and/or DIR2 folders to make it impossible for the rm command to remove them?
07-10-2015 11:32 AM
Tom just coding that command a little bit different (first a cd) already achieved that at the coding level.
Setting OS controls (chmod) for restricted access is another sensible approach. As is promoted by SAS everything to run by sassrv (and sometimes) also installed by that there is een OS security challenge to be solved. Having the x-cmd discussions is a result of this bas SAS approach. Would the process run in a security setting DIR1 and DIR2 never could hurt as of your proposal is also the proposal of correcting that sas default.
|
OPCFW_CODE
|
Comment on page
DSI studio is a tractography software tool that maps brain connections using diffusion weighted images.
Whole Brain Protocol for Tractography with Diffusion MRI
Select “Step 1: Open Source Image” to load diffusion MR images (DICOM, NIFTI, Bruker 2dseq, Varian fdf) in order to create a .src file. The .src will be created and located in the main window
Select "Step2: Reconstruction"
A new window will appear. Confirm the appearance of the mask and select the reconstruction method to be QBI. The reconstructed image will appear in the main window but it will have a filename ending in ".src.gz".
Select "Step 3: Fiber Tracking" and open the subject.fib file.
The following screens will appear:
Select "color" under the Slice dropdown menu:
Load the following "Tracking Parameters into dsi studio
The tracking parameters settings should be:
termination index =nqa
Threshold = 0.1
Angular Threshold = 0
Stepsize(mm) = 0.0
Smoothing = 1.0
Min length (mm) =30.0
Max length (mm) = 300.0
Seed orientation = primary
Seed position = subvoxel
Randomize Seeding = off
Check ending = off
Direction Interpolation= trilinear
Tracking algorithm = streamline(euler)
Terminate if= 100,000 tracts
Thread Count =2
Output format= trk.gz
For each brain slice create a new region. In general you should have about 1-2 seed regions per tract and 0-2 ROI regions per tract. Under "type" select "seed" for seed regions or "ROI" for ROI regions.
REVIEW this diagram to acquaint yourself with the toolbar.
RECONSTRUCTING the corticospinal tract (cst):
Move the axial slide bar until you reach a slice that looks similar to one of the boxes.Then using the freeform option, make a circle around the region that the arrow is pointing to. Follow these steps for the remaining three boxes. Be sure to make a new region with each slice/box.
Make sure all four ROI regions are checked and then select "Fiber Tracking"
If the resulting tract does not look like the track in #4 box above, create exclusion regions to remove erroneous streamline fibers. To do this select "ROA". Then using the square or the freeform option enclose the area where the erroneous fiber is located.
All exclusion regions should be created using the same ROA region. For example in the image below three exclusion regions (two on a coronal slice and one on an axial slice were drawn but only one ROA region was created.
FINAL STEP: save all of the files: tract image, regions and density image.make sure the file names are unique. Example:
Save the regions by selection "Save all regions as multiple files" then open when the appropriate folder location is created
Save the tract image then select the save button
Save the density map by selecting "tract density image" then "diffusion space", select "no" in response to whether to export directional color, then select "yes" in response to selecting the whole tract.
NOTE: There are many other white matter fiber tracts of interest. I can provide details on how to reconstruct other tracts if you send a request.
|
OPCFW_CODE
|
Like the prior variants of Microsoft Office, the 2016 Microsoft Office is a bundle of programming programs intended for different office work exercises. The projects Exceed expectations, Word, Access, Viewpoint, OneNote, Visio, PowerPoint, InfoPath, FrontPage, Undertaking, Distributer, and Live Meeting are incorporated into the 2016 Microsoft Office. The 2016 adaptation of Microsoft Office joins them all together under a kind of mass rate sticker price however each of these projects is likewise sold as a different programming item. This form of Microsoft Office gloats a more entire bundle than any of the past adaptations. It offers more projects, and the most a la mode forms of the product accessible. Obviously, the more established renditions of the Office bundle are still flawlessly great. Truth be told, despite everything I’m utilizing the 2000 expansion, and am impeccably content with it.
You can be effectively sucked into the buildup over the 2016 Office bargain just to find that you did not generally require the ‘upgrades’. Like all product organizations, Microsoft is refresh insane. Microsoft more up to date forms speak to a whole new product offering for a negligible cost to them since they are fundamentally an indistinguishable thing from the more seasoned adaptation. The Microsoft Office idea is an enormous accomplishment for the product monster to buy cheap Microsoft office 2016 key. Pretty much every PC that significant organizations claim has a duplicate of 2016 Microsoft Office or some prior rendition. You can do pretty much anything including composing structure letters, following information, making visual introductions, sending messages, making graphical plans, joining content and graphical pictures, and organizing on the web gatherings through 2016 Microsoft Office programs.
I utilize Exceed expectations, Word, and PowerPoint more often than not. Any individual who does any sort of work including introductions, composing, and arranging information will depend vigorously on these three projects. These three projects alone make the 2016 Microsoft Office bundle a decent arrangement. Exceed expectations and Word are crucial projects for individual utilize, regardless of the possibility that you do not work with PCs since they make letter composing and following your accounts significantly less demanding. I profoundly suggest grabbing 2016 Microsoft Office on the off chance that you do not have any Microsoft Office forms, and do not have Word, or Exceed expectations. It is accessible online from pretty much anybody, and any stores that convey PC programming will have it too. You presumably do not generally need to get 2016 Microsoft Office in the event that you as of now have a more seasoned Office form. You might need to get it in the event that you do a great deal of office work and are occupied with the new projects that accompany the bundle.
|
OPCFW_CODE
|
Our professional assignment writers have mastered the art of writing programming assignments and providing accurate solutions. They follow the following steps while writing your programs;. If you want to improve on your grades and find programming interesting then we urge you to sign up with us.
We will help you carry all the bulky burdens of programming assignment. We know that students should live their lives the way they want.
There is no harm in taking a little break and relaxing with your friends after a tiresome day in school. We will do your assignment and give you time to focus on your interests and other useful activities like getting a part-time job, playing your favorite sport and having time for your family. We are dedicated in ensuring you achieve all your dreams and make your parents proud. We guarantee that you will score top grades in all your programming classes and impress your professor.
Our clients have taken this opportunity and have noticed a big change in their academic life. Contact our customer support executives and get the following benefits;. We welcome you to sign up for our services instantly and get the most student-friendly programming language assignment help.
Saturday, September 15, C has the following inbuilt functions; C- Arithmetic Functions- These are inbuilt functions that are used to perform mathematical operations. Examples of the arithmetic functions include; abs ,floor , round , ceil , cos etc. C- INT, CHAR validation functions- These functions are used to validate the data type of a given variable and to convert upper case to lower case and vice versa. Examples include isalpha , isdigit , isalnum , islower ,isupper etc.
C-buffer manipulation function- These functions in C programming work on the address of the memory block rather than the values inside the address.
Examples include memset , memcpy , memmove etc. C-Time related functions- These functions interacts with the system routine time and displays the formatted time outputs. Examples of time related functions include setdate , getdate , clock ,time , difftime etc.
C-Dynamic memory allocation- It allocates memory during program execution. C offers the following four memory allocation functions; malloc , calloc , realloc and free. C-type casting functions- these functions are used to modify data types from one form to another. The new data type should be mentioned before the variable name.
C- Miscellaneous functions- These are C environment functions. Examples are getenv , setenv, putenv, perror etc Most universities across the world have made it compulsory for students to learn the C language. They will help you with your assignment related to all the areas in C programming such as; Writing, compiling and debugging programs Pre-processor macros Returning from functions Linked Lists and trees Multidimensional arrays and pointers Function pointers and hash table Our C programming assignment help platform is exceptional and has been assisting thousands of students in countries like the USA, UK, Australia, Canada, UAE, etc.
All the students who seek our help are always satisfied with our work because; We keep in mind that all assignments need to be unique and not plagiarized. We have plagiarism checker software that shows any traces of duplicity.
The final content delivered to the student is very original and not copy-pasted Our content is of high quality, accurate and detailed. Our professional assignment writers conduct in depth research and spend time proofreading and correcting all the mistakes. We will only send to the student a document that is worthy of earning top grade Our payment methods are very convenient. Our rates are also pocket-friendly and affordable. We are dedicated to ensuring that our customers are highly satisfied.
That is why we deliver the content way before the deadline date so that the client gets time to go through the assignment and have all issues addressed in time.
C programming has been adopted in system development language and is used in: Operating systems Text editors Utilities Language interpreters Network drivers Modern programs assemblers A C program consists of five basic parts; Pre-processor commands Functions Variables Comments Expressions and statements When writing a C program, a programmer uses the following syntax Tokens- a token can be a constant, a keyword a symbol, an identifier or a string Semicolons — it is used to terminate a statement in a C program Comments — these are texts which are ignored by a compiler.
Keywords — C programming uses keywords such as auto, long, break, register etc. Whitespace — it is a blank line that is fully ignored by the compiler and is used to describe the tabs, comments, newline characters and blanks in C.
C is considered as the most widely used programming language because of the following advantages it has; C language allows the use of various data types as well as powerful operators It is highly portable and can run in any other computer It is used as a foundation for many other programming languages It is very suitable for beginners because the programs are easy to understand and are efficient Despite all the pros, the C language also has the following shortcomings; The C language does not support the object oriented programming concept.
It does not also have the destructor and constructor concepts It does not have a strict type checking available If you are struggling with your assignment in C programming then you should take advantage of our exceptional C programming homework help service.
We assure our clients the following benefits; Testing- the programs we write for students are first tested for errors and corrected before they are delivered. Our rates are cheaper than our competitors and can easily fit in the budget of the student We do not believe in just giving the correct answer to students, our experts ensure that the content they give are self-explanatory. In addition, the outputs were assembled with the help of regular compiler. C Programming language Assignment Help.
It has numerous features, although if the users do not want to use particular feature then the run-time cost of that feature will not be added with the other usable features. Another extension of C language is C of C sharp. It was introduced by the Microsoft Company. The basic purpose of Microsoft Company behind the release of C is to create the alternative for Java programming language.
Particularly, this has happened due to the lawsuit was filed against the Java implementation of Microsoft Company by the Sun Microsystems. The reason is that Sun Microsystems was discovered the Java programming language. In addition, these outputs are used in another program. Additionally, struct is the complex data type declaration in the C language. It permits the users to control the associated models with the help of some specialized codes.
Now, if the user wants to work on the complex data types, they should combine the operations with the regular data types. In addition, the same thing happen whatever the user need. However, the template function permits the users to write code which are able to handle any kind of data.
The purpose of this function is that the users are able to control the errors or mistakes on the consistent basis. The reason is that most of the functions of both these languages are similar to each other. In addition, the users are also allowed to use several types of compilers in these languages. Our experts are able to provide their professional guidance for the students of all education levels. In addition, the orders should contain instructions as well as submission deadlines so that our experts are able to complete their tasks as per their needs.
Most of the programming languages are generally emphasized on the type-rich, design and the concept of the trivial. It gives the instructions to the preprocessor include iostream along with the standard file. Therefore, in order to declare the phrases or expressions, the programmers use an important symbol such as point.
Point is a symbol which denotes that where the execution is finished. This sign is used to perform functions with the help of Boolean and along with that one operand is also positioned at the right side. In addition, they also offer solutions for problems and other different material for the students of all education levels.
Our experts are available to facilitate the students of different academic levels such as high school, college; graduate, post-graduate and PhD. Particularly, we offer our services to the students of different universities and colleges.
In addition to exceptional and standard quality of C assignment, we ensure that we revise and do necessary amendments to the C assignment if any customer is not fully satisfied with the earlier written C assignment. Whenever you want to do a C assignment, consult our C online help service which is always ready and committed to providing high quality assistance in terms of C project help.
Feel free and order your C project online from us. The Best Programming Language: Python on the Rise. Keeping up… Read more…. Modeling Projectile Motion Using Python. Any system or process can be described by some mathematical equations. Does security service of a… Read more…. You can forget to allocate or… Read more…. Our experts will gladly share their knowledge and help you with programming homework.
ProgrammingAssignmentExperts Offering programming help,computer science help,programming assignment help,java,visual basic help,computer programming homework help,assistance to resolve problems online with our expert programmers and Get programming help.
Need any help for your Programming homework? Catch this website to get facility of the perfect services of C and C++ Programming assignment help.
Best C Homework Assistance. C project is a complicated language of programming, and to write C project by your own is not easily. We propose c project help at Assignment Expert. Are you searching for c Programming assignment help? agounimezain.tk is the best place for you,because we have + expert programmers who can help you with any type of programming /5(14K).
May 31, · Our C++ programming homework help are designed in a way so that you can expect assistance at any point of time. Our experts will give you solutions and will help to resolve queries. Through our assignment help you can get help 24x7 which ensures complete and assured connectivity with students all the time/5(). Programming Assignment Help is the online programming help service provided to the students in UK, Australia and US. Take help with Java,C, C++, C#, PHP etc.
|
OPCFW_CODE
|
The real world is uncertain. Thats a given. Our networks, at their most fundamental, carry the real world from one point to another and therefore by definition carry that uncertainty during every moment they operate. Any autonomic system which seeks to properly manage our networks faces this challenge of pervasive uncertainty. They will always be constructed around that dichotomy of bringing order to chaos by applying their adaptive techniques to create order from chaos. If we map too much of that adaptation into the systems, they become cumbersome and unwieldy. We therefore need to smooth the chaos curve in order to drive autonomic systems design in a direction that will maintain their efficacy. How might we do this? Read on for our thoughts.
We are currently engaged in a conflict with the increasingly complex systems we seek to create which we are losing. Things may have become easier for the end user(arguably), but these systems which provide the end user more simplicity mask a corresponding increase in the the complexity of the underlying systems which support them. This affects the economics of viability of new developments in the marketplace and actually makes some of them non-viable. This situation forces us into choices that we cannot make on an informed basis and our decisions may end up fossilising parts of the network so that future development becomes uneconomic or infeasible.
In principle, autonomic network systems are founded on the principle that we need to reduce the variability that passes from the environment to the network and its applications. In latter years, many companies including Rustyice Solutions have brought products to the market that simplify the management of networks by offering levels of abstraction which make configuration easier and allow the network to heal itself on occasion. These products tend to smooth the chaos curve and increase the reliability of the systems without the involvement of a low level re-inspection of the systems themselves. They do this by integrating the available information from different semantic levels and leveraging it to give the systems a more holistic view from which to consider the operational status of themselves.
Lets consider what we expect of an autonomic system. It can be defined in terms of a simple feedback loop which comprises information gathering, analysis, decision making and taking action. If the system is working properly then this feedback loop will achieve and maintain balance. Examining these elements one by one, information gathering can come from network management platforms which talk to the discrete nework components on many levels as well as environmental and application based alerts. Analysis can mean such activities as applying rules and policies to the gathered information. Decision making is the application of the analysis against the rules and policies to determine whether or not they meet the conditions set out in the policies and taking action could involve adjusting network loads on managed elements and potentially informing humans who need to take some form of action. These are the fundamental terms with which we seek to understand any requirement from any of our own customers.
This sounds fine in theory but what do we need to understand in order to make it work? The network is currently modelled on a layer based concept where each of the layers has a distinct job to do and talks only to its neighbour layers as well as its corresponding layer at the distant end of the communications link. This model has served us well and brings many distinct advantages including hardware and software compatibility and interoperability, international compatibility, inter layer independence and error containment. It does however carry some disadvantages with it too and most significant of those in terms of this discussion is that of the lack of awareness at any point in the system of the metadata which is why we have the networked systems in the first place. The question of whether the network is doing what it is needed to do at the holistic level is something which no discrete layer ever asks, nor should it. It almost comes down to a division between the analogue concerns of the real world versus the digital, yes/no, abilities of the systems themselves.
Taking this discussion a step further we need to improve our ability to ascribe the real world requirements which are the reasons these networks exist and why we build them to the systems which we intend should be capable of making decisions about whether the systems are working or not. Can these systems really know whether or not the loss of a certain percentage of the packets in a data-stream originating on the netflix servers will impact the enjoyability of somebody watching the on demand movie they have paid for. From a higher perspective, the question becomes whether we can really design autonomic decision making systems that could understand the criteria the real world applies to the efficacy of the network and base their decisions on that finding. They also need to be aware of the impact any decisions they make will have on the efficacy of any other concurrent real world requirements.
There are many mathematical abstractions which seek to model this scenario in order to predict and design the autonomic behaviours we require of our systems and you will be relieved to read that we do not propose to go into those here. In principle however we need to move towards a universal theory of autonomic behaviour. We need to find an analytic framework that facilitates a conceptual decision making model relating to what we actually want from the network. We need to couple this with an open decision making mechanism along the lines of UML in order for us to fold in the benefits of new techniques as they develop and ultimately we need to be able to build these ideas directly into programming languages such that they better reflect the real world systems we want on a higher level of abstraction.
In conclusion, we can say that autonomics is a multi level subject and we need to take account of these different semantic levels. We need to build an assumed level of uncertainty into our programming in order to maximise our ability to engineer autonomic systems and we need to develop standards in order to further enable the capability of our systems in this area. These are the fundamental points which we at Rustyice Solutions begin any discussion with respect to network management and more especially autonomic networking such as WAN acceleration. If you or your business are interested in examining this topic in more detail with a view to enhancing the value which your network brings to the table why not give us a call. We look forward to hearing from you.
|
OPCFW_CODE
|
Does a good use case exist for skip() on parallel streams?
EDITED on September, 2015
When I initially asked this question on February, 2015, the behaviour reported in the linked question was counter-intuitive, though kind of allowed by the specification (despite some little inconsistencies in the docs).
However, Tagir Valeev asked a new question on June, 2015, where I think he clearly demonstrated that the behaviour reported in this question was actually a bug. Brain Goetz answered his question, and admitted that it was a bug to not stop the back-propagation of the UNORDERED characteristic of the Stream on skip(), when triggered by a terminal operation that wasn't forced to respect the encounter order of the elements (such as forEach()). Furthermore, in the comments of his own answer, he shared a link to the posted issue in JDK's bug tracking system.
The status of the issue is now RESOLVED, and its fix version is 9, meaning that the fix will be available in JDK9. However, it has also been backported to JDK8 update 60, build 22.
So from JDK8u60-b22 onwards, this question doesn't make sense anymore, since now skip() behave according to intuition, even on parallel streams.
My original question follows...
Recently I had a discussion with some colleagues about this. I say it's quite useless to use skip() on parallel streams, since there doesn't seem to be a good use case for it. They tell me about performance gaining, FJ pool processing, number of cores available to the jvm, etc, however they couldn't give me any practical example of its usage.
Does a good use case exist for skip() on parallel streams?
See this question here on SO. Please read the question and answers, as well as the comments, as there are tons of good arguments there.
I think the issue really is less about parallel vs sequential as it is about ordered vs unordered streams. If a stream is ordered, skip and limit make sense regardless of sequential or parallel processing. If the stream is unordered, it seems unlikely that skip and limit are meaningful.
The choice of sequential vs parallel is simply one of execution strategy. The option for parallelism exists so that, if the specifics of the problem (problem size, choice of stream operations, computational work per element, available processors, memory bandwidth, etc) permit, then a performance benefit may be gained by going parallel. Not all combinations of these specifics will admit a performance benefit (and some may even garner a penalty), so we leave it to the user to separately specify the operations from the execution strategy.
For operations like skip() or limit(), which are intrinsically tied to encounter order, it is indeed hard to extract a lot of parallelism, but it is possible; this generally occurs when the computational work per element (often called 'Q') is very high.
Such cases are probably rare (which might be your point); this doesn't make the combination of operation and execution mode "useless", simply of limited usefulness. But one doesn't design a API with multiple dimensions (operations, execution modes) based on the combinations that one can imagine is useful; assuming each combination has a sensible semantics (which it does in this case), it is best to allow all operations in all modes and let the users decide which is useful for them.
Not related to this question in particular, but could you please have a look here?
@fge It's not just related, your question was actually the trigger for the discussion with my colleagues, and then I decided to post this question. In fact, I gave an answer to your question ;)
|
STACK_EXCHANGE
|
Data Quality from First Principles
by Cedric Chin
If you’ve spent any amount of time in business intelligence, you would know that data quality is a perennial challenge. It never really goes away.
For instance, how many times have you been in a meeting, and find that someone has to vouch for the numbers being presented?
“These reports show that we’re falling behind competitor X,” someone might say, and then gets interrupted —
“How do you know these numbers are right?”
“Well, I got them from Dave in the data team.”
“And you trust him?”
“We went through the numbers together. I can vouch for these numbers.”
“Alright, carry on.”
The conversation in itself tells you something about the relationship the organization has with data. It tells you that perhaps data quality problems have plagued management in recent months.
Nobody wants to make decisions with bad data. And nobody wants to look stupid. So for as long as people use data in the workplace, trusting the quality of your data will always be a background concern.
Which in turn means that it’s going to be your concern.
The Right Way to Think About Data Quality
There’s this old nut in business circles that goes: “people first, process second, and tools third”, sometimes known as “People Process Tools”, or ‘PPT’. One take on that saying is that if you have a group of well-trained, high quality people, you don’t need to have rigid processes. If you can’t expect people to be great 100% of the time, then you have to introduce some amount of process. And if you’ve tried your best with people and process, then you have to think about augmenting both with tools or technology.
(The other way of looking at this is that if you have terrible tools, work still gets done; if you have terrible processes, good people can still figure out a way around the bureaucracy, but if you have terrible people, all hope is lost.)
Data quality is a PPT problem. This is actually the consensus view in the industry, and should not be surprising to most of us. Here’s Snowplow, for instance, in their whitepaper on data quality:
We won’t lie - measuring the accuracy and completeness of your data is not easy and is a process rather than a project. It starts at the very beginning: how you collect your data. And while it never ends, quality grows with better-defined events going into your pipeline, data validation, surfacing quality issues and testing.
And here’s data warehousing legend Ralph Kimball, in a white paper he published back in 2007:
It is tempting to blame the original source of data for any and all errors that appear downstream. If only the data entry clerk were more careful, and REALLY cared! We are only slightly more forgiving of typing-challenged salespeople who enter customer and product information into their order forms. Perhaps we can fix data quality problems by imposing better constraints on the data entry user interfaces. This approach provides a hint of how to think about fixing data quality, but we must take a much larger view before pouncing on technical solutions (emphasis added).
Michael Hammer, in his revolutionary book Reengineering the Corporation (HarperBusiness 1994), struck to the heart of the data quality problem with a brilliant insight that I have carried with me throughout my career. Paraphrasing Hammer: “Seemingly small data quality issues are, in reality, important indications of broken business processes.” Not only does this insight correctly focus our attention on the source of data quality problems, but it shows us the way to the solution.
(…) Technical attempts to address data quality will not function unless they are part of an overall quality culture that must come from the very top of an organization (emphasis added).
But common sense dictates that while tools can only succeed with the right people culture and processes in place, tools will influence what can be done, and by whom.
Kimball’s whitepaper is useful because he outlines a common-sense, first principles approach to thinking about tools for data quality.
In his view, tools that promote good data quality should come with the following features:
- They should enable early diagnosis and triage of data quality issues. Ideally, you should be notified the instant some number seems off.
- Tools should place demands on source systems and integration efforts to supply better data. Integrating with the tool should force data teams to reckon with a certain baseline of expected quality.
- They should provide users with specific descriptions of data errors when encountered during ETL. This is a tool-specific requirement, and it applies to both ETL and ELT-paradigm tools.
- There should be a framework for capturing all data quality errors. That is, data quality errors should be treated as a source of data itself, which means that:
- There should be a method for measuring data quality over time. And finally:
- Quality confidence metrics should be attached to final data.
If you’re wondering what a quality confidence metric is, you’re not alone. In the whitepaper, Kimball goes to great lengths to define quality confidence metrics as a statistical measure of expected value. For instance, if we are tracking data in 600 stores with 30 departments each, we should expect to receive 18,000 sales numbers each day. Kimball explains that all numbers that are three standard deviations above the historical mean could be logged into a separate error table. The data team should then take responsibility for investigating every event that is logged there.
(Kimball being Kimball, he describes the error table as a fact table in a dimensional star schema … because of course he would).
What That Looks Like Today
Are there more modern approaches to data quality? Yes there are. In a blog post a couple of weeks ago, Uber released details of their data quality monitoring system (which they’ve named DQM), which uses similar statistical modeling techniques to guarantee data quality at scale.
DQM is essentially a more sophisticated version of the same approach. It observes data tables and generates a multi-dimensional time series for each. For numeric data, DQM tracks metrics like average, median, maximum, and minimum. For string data, DQM tracks the number of unique values and the number of missing values in the strings they’re monitoring. On normal days, DQM spits out a quality score for each table, for display to data table users. But if DQM observes data that is abnormal compared to the historical patterns, it alerts all downstream reports and dashboards and warns both data consumers as well as data engineers to mistrust the numbers, and to check source systems for the potential problems.
I won’t summarise Uber’s blog post, because it does a great job of explaining the technical details in a concise, useful manner. But I couldn’t help but notice that Uber’s system fulfils all the requirements set out in Kimball’s original whitepaper.
- DQM enables early diagnosis and triage of data quality issues.
- DQM sits on top of all other data infrastructure, thus placing demands on source systems and integration efforts to supply better data.
- DQM stores specific descriptions of data errors that are encountered at any point in the data lifecycle.
- It captures all data quality errors.
- It generates time series data, which means that stored data quality scores are an easy way to measure data quality over time. And finally:
- Quality confidence metrics are always displayed alongside actual data.
DQM may be proprietary to Uber, but the approach is pretty darned inspirational to the rest of us in the industry. Which was probably what the authors intended with their blog post.
If you’re struggling with data quality at your company, take a step back to see if you can address your problems with tools or processes that feature even a handful of Kimball’s suggested principles. Don’t fret if you fail; data quality is an ongoing concern … which implies you won’t get it right all at once. You merely have to make sure you’re getting better at quality over time.
What's happening in the BI world?
Join 15k+ people to get insights from BI practitioners around the globe. In your inbox. Every week. Learn more
No spam, ever. We respect your email privacy. Unsubscribe anytime.
Confused about the complex analytics landscape?
Check out this book to bring yourself up to speed on the ins-and-outs of a contemporary analytics stack.
"I'm shocked to be telling you this next sentence: I read a free ebook from a company and actually loved it." - Data Engineer
|
OPCFW_CODE
|
memstat - Identify what's using up virtual memory.
First, the processes are listed. An amount of memory is shown along with a process ID and the name of the executable which the process is running. The amount of memory shown does not include shared memory: it only includes memory which is private to that process. So, if a process is using a shared library like libc, the memory used to hold that library is not included. The memory used to hold the executable's text-segment is also not included, since that too is shareable.
After the processes, the shared objects are listed. An amount of memory is shown along with the filename of the shared object, followed by a list of the processes using the shared object.
Finally, a grand total is shown. Note that this program shows the amount of virtual (not real) memory used by the various items.
memstat gets its input from the /proc filesystem. This must be compiled into your kernel and mounted for memstat to work. The pathnames shown next to the shared objects are determined by scanning the disk. memstat uses a configuration file, /etc/memstat.conf, to determine which directories to scan. This file should include all the major bin and lib directories in your system, as well as the /dev directory. If you run an executable which is not in one of these directories, it will be listed by memstat as ``[0dev]:<inode>''.
If you do the math, you'll see that ps and memstat don't always agree about how much virtual memory a process is using. This is because most processes seem to map certain shared pages twice. memstat counts these pages once, ps counts them twice. I'm not sure which is the ``right'' way to measure it.
The proc filesystem identifies files by their device number and inode number. To be useful, these numbers must be translated back into filenames. This requires searching the disk (and thus the awkward configuration file memstst.conf). There should be some way around this, perhaps by adding the dev/inode info to the locate db?
The stat system call returns a dev_t type, but the proc filesystem contains a device readout in the form of a string. I have improvised a routine to convert the device readout string into a dev_t, but I'm not sure it will work on all architectures.
It is possible to confuse memstat by using mmap in combination with a block-device. In the original version, memstat treated block devices just like any other file, and if you mmap'ed one of them, they would show up on the shared-object list. This worked for mmap'ed hard disks and floppies, but it produced absurd results with block devices like /dev/zero and /dev/mem. As a partial fix, memstat now ignores all mapped block devices, though this may cause memstat to ignore some memory usage.
We really ought to show some real-memory usage statistics, but it's just not there in /proc.
Memory used by the kernel itself is not listed.
Закладки на сайте
Проследить за страницей
Created 1996-2020 by Maxim Chirkov
Добавить, Поддержать, Вебмастеру
|
OPCFW_CODE
|
# -*- encoding: utf-8 -*-
from utils import display_message, set_ams, start_loop, config_loop
config_loop()
from agent import Agent
from messages import ACLMessage
from aid import AID
from protocols import FipaContractNetProtocol
from filters import Filter
from pickle import loads, dumps
from time import sleep
#===============================================================================
# What is needed to create an agent with standardized protocols behaviours?
# First, the protocol class needs to be defined
# Second, this protocol class needs to be associated with the agent's
# behaviour
#===============================================================================
class ConsumerAgentBehaviour(FipaContractNetProtocol):
def __init__(self, agent, message):
super(ConsumerAgentBehaviour, self).__init__(agent, message, is_initiator=True)
self.bestPropose = None
self.bestBookStore = None
def handle_propose(self, message):
FipaContractNetProtocol.handle_propose(self, message)
display_message(self.agent.aid.name, 'Proposal Received')
def handle_all_proposes(self, proposes):
FipaContractNetProtocol.handle_all_proposes(self, proposes)
try:
self.bestPropose = proposes[0]
for propose in proposes:
content = loads(propose.content)
if content['how much is'] < loads(self.bestPropose.content)['how much is']:
self.bestPropose = propose
response = self.bestPropose.create_reply()
response.set_performative(ACLMessage.ACCEPT_PROPOSAL)
response.set_content('Proposal Accepted')
self.agent.send(response)
for propose in proposes:
if propose != self.bestPropose:
response = propose.create_reply()
response.set_performative(ACLMessage.REJECT_PROPOSAL)
response.set_content('Proposal Rejected')
self.agent.send(response)
except:
display_message(self.agent.aid.name, 'Unable to process because no message has returned.')
def handle_inform(self, message):
FipaContractNetProtocol.handle_inform(self, message)
display_message(self.agent.aid.name, 'Purchase Approved')
class ConsumerAgent(Agent):
def __init__(self, aid, bookStores, order):
Agent.__init__(self, aid)
self.bookStores = bookStores
self.order = order
cfp_message = ACLMessage(ACLMessage.CFP)
cfp_message.set_protocol(ACLMessage.FIPA_CONTRACT_NET_PROTOCOL)
for i in self.bookStores:
cfp_message.add_receiver(i)
cfp_message.set_content(dumps(self.order))
behav_ = ConsumerAgentBehaviour(self, cfp_message)
self.addBehaviour(behav_)
if __name__ == '__main__':
agents = []
order = {'title' : 'The Lord of the Rings', 'author' : 'J. R. R. Tolkien', 'qty' : 5}
#consumidor = ConsumerAgent(AID('Lucas@192.168.0.100:2004'), ['Saraiva', 'Cultura', 'Nobel'], order)
consumidor = ConsumerAgent(AID('Lucas'), ['Saraiva', 'Cultura', 'Nobel'], order)
consumidor.set_ams()
agents.append(consumidor)
start_loop(agents)
|
STACK_EDU
|
Startup based on my own side project open-sourced through my employer
I work at a FAANG-size company (moonlighting friendly in theory) and have a side project that I've built mostly in my free time, a bit during work hours, and had it recently approved by the legal department for open-sourcing and it's now available in public on GitHub with an MIT license for everything.
Since the MIT license allows anyone to do whatever they want with the project, including commercial use and creating derivatives, I'm tempted now to create a startup based on the project - basically fork it and work in private on it, extending and improving it a lot (make it worth buying it compared to OSS version), then rebrand and launch it as a subscription app.
The moonlighting policy is fairly permissive, but also somewhat vague. Both my contract and current HR guidance state that any invention is mine if done outside work hours, not using their hardware AND doesn't compete with the company and harms it's business interests - the vague part, especially with a company that has it's fingers in dozens of different software. It's not needed to inform manager or HR about moonlighting, just "use your judgement if it fits the policy". WA state.
There are several concerns I have and I'm also curious if others have dealt with a similar situation and have some advice:
my fear is that even if I'm following the moonlighting policies from now on, they could later still claim ownership of my work based on the pre-OSS work. If someone else outside the company would fork the OSS repo there should be no problems given the MIT license. My hope was that starting now from a fork creates a clean slate for me, since the previous work was done in part during work hours on their PCs. Open-sourcing through my employer was the only way not to lose it completely in case I switch companies (and remain abandoned, no one cares enough about it).
the company doesn't have a stand-alone product it sells similar to my app, but it does have a module part of a much larger product. That product is far from being the money-maker for the company (it's seen as a cost center). People are certainly not buying the whole product because of that module, it's a nice thing to have. There are several other companies that do sell stand-alone apps similar to mine though, there is a market for it. I assume that module is enough to argue that my app is a direct competitor if one really wants to though?
Given this, feels like safest would be to quit my job before even starting to work on this, but of course I'd like to avoid that, with the startup maybe not working out - concern 1) could also mean this wouldn't actually help?
Switching first to another company is another option, but seems others have even worse or no moonlighting and again concern 1). Last option, work on it while employed and hope I won't get sued. Maybe quit around the time I'm close to be done with a v1 and start marketing to reduce risk a bit. Any other options?
As for the company itself, I'm not seeking funding and don't see it getting to some multi-million/y business, at best enough to make a full-time job out of it. Maybe that's enough to not make it worth suing. I will consult with an attorney too, wanted to check first with other entrepreneurs in case it's completely hopeless.
You have given a lot of background information. Please add 1 or 2 concise sentences (with a '?') that summarize your question!
Not clear on what FAANG means. The project was mostly developed on your own time which indicates that some company time was involved, e.g. the company has some interest/ownership in the project - this needs to be settled with the company lawyers, not with an on-line group. Any forks of the project will be subject to the license terms of the initial project that is owned by the employer.
@doneal24 (or for any readers who don't know) "FAANG-size" means a large company, with several tens of thousands of employees and probably a business presence in many countries around the world. The kind of company that would definitely have lawyers capable of providing specialized guidance for this situation. (FAANG stands for Facebook, Amazon, Apple, Netflix, Google, but that doesn't really matter here.)
|
STACK_EXCHANGE
|
<?php
class CorpusTest extends PHPUnit_Framework_TestCase
{
public function testWordCount()
{
$corpus = new TomLerendu\SentimentAnalyzer\Corpus();
$corpus->addPositiveWord('two', 2);
$corpus->addPositiveWord('hello');
$corpus->addPositiveWord('hello', 1);
$corpus->addPositiveWord('test');
$corpus->addNegativeWord('hello', 5);
$corpus->addNegativeWord('the');
$corpus->addNegativeWord('a', 1);
$this->assertEquals(2, $corpus->getPositiveCount('two'));
$this->assertEquals(2, $corpus->getPositiveCount('hello'));
$this->assertEquals(1, $corpus->getPositiveCount('test'));
$this->assertEquals(0, $corpus->getPositiveCount('word'));
$this->assertEquals(5, $corpus->getNegativeCount('hello'));
$this->assertEquals(1, $corpus->getNegativeCount('a'));
$this->assertEquals(1, $corpus->getNegativeCount('the'));
$this->assertEquals(0, $corpus->getNegativeCount('word'));
}
public function testProbability()
{
$corpus = new TomLerendu\SentimentAnalyzer\Corpus();
$corpus->addPositiveWord('the');
$corpus->addPositiveWord('probability');
$corpus->addPositiveWord('is');
$corpus->addPositiveWord('one');
$corpus->addPositiveWord('in');
$corpus->addPositiveWord('six');
$this->assertEquals(1/6, $corpus->getPositiveProbability('the'));
$this->assertEquals(1/6, $corpus->getPositiveProbability('six'));
$this->assertEquals(0, $corpus->getPositiveProbability('hello'));
}
public function testLoadingDataFile()
{
$corpus = new TomLerendu\SentimentAnalyzer\Corpus('../tests/test-corpus.json');
$this->assertEquals(11, $corpus->getPositiveCount());
$this->assertEquals(13, $corpus->getNegativeCount());
$this->assertEquals(5, $corpus->getPositiveCount('the'));
$this->assertEquals(4, $corpus->getNegativeCount('it'));
$this->assertEquals(2/11, $corpus->getPositiveProbability('good'));
$this->assertEquals(2/13, $corpus->getNegativeProbability('bad'));
}
public function testRatios()
{
$corpus = new TomLerendu\SentimentAnalyzer\Corpus('../tests/test-corpus.json');
$word1 = $corpus->getRatios('the');;
$this->assertEquals(0.54166666666667, $word1['positive']);
$this->assertEquals(0.45833333333333, $word1['negative']);
$word2 = $corpus->getRatios('bad');
$this->assertEquals(0, $word2['positive']);
$this->assertEquals(1, $word2['negative']);
$word3 = $corpus->getRatios('great');
$this->assertEquals(1, $word3['positive']);
$this->assertEquals(0, $word3['negative']);
$word4 = $corpus->getRatios('no');
$this->assertEquals(0.5, $word4['positive']);
$this->assertEquals(0.5, $word4['negative']);
}
}
|
STACK_EDU
|
In this tutorial you will use SYD to create a patch which will simulate the sound of a large BELL. Using Additive Synthesis techniques. As a background to understand the procedure outlined below, you should read Bell Characteristics.
To synthesize a bell using the application SYD, you will need to add 11 operators: the fundamental frequency (1 operator), 8 separate s (8 operators), an amplitude envelope (1 operator) and a mixer operator (1 operator).
If you have not already done so, please read, Additive Synthesis.
Using a Fundamental Frequency of 200 Hz, here is a formula for creating a simple bell sound using 9 frequencies called PARTIALS. Combine sine waves with these frequency ratios:
200 x 4.07 814 Hz 8th wave 200 x 3.76 752 Hz 7th wave 200 x 3 600 Hz 6th wave 200 x 2.74 548 Hz 5th wave 200 x 2 400 Hz 4th wave 200 x 1.71 342 Hz 3rd wave 200 x 1.19 238 Hz 2nd wave 200 x .92 184 Hz 1st wave 200 x .56 112 Hz
Figure 1: Frequency Ratios of a Simple Bell
is a frequency in the spectrum of a sound which is NOT in the Natural Harmonic Series. Sometimes it is also called a “detuned harmonic” or an “enharmonic.” In the chart above, the frequencies that ARE part of the Natural Harmonic Series are represented by whole numbers (2, 3). All the other frequencies are PARTIALS. Often in a bell sound, the true FUNDAMENTAL is MISSING altogether or a
is hear (perceived) by the listener instead. The lowest frequency here (the 1st wave) is a
; the 2nd frequency is close enough to the Fundamental of the Natural Harmonic Series to be considered a detuned partial.
Apply this envelope:
Figure 2: Amplitude Envelope of a Simple Bell
To begin, create a new SYD patch and add 9 Oscillator Operators, 1 Envelope Generator and 1 Mixer Operator in the configuration shown below:
Figure 3: SYD Patch for Simple Additive Synthesis Bell
The duration of the Sound Output should be 6 seconds. The duration of the Amplitude Envelope should mach the duration of the sound output (6 seconds):
Figure 4: Amplitude Envelope Settings
The Fundamental Frequency of the Bell should be 200 Hz. Please assign the Oscillator Operator at the BOTTOM of the SYD window to the Fundamental Frequency. All the other Oscillator Operators will be assigned as partials.
Figure 5: Fundamental Frequency and 8 Partials
Set the Fundamental Oscillator and the 1st partial Oscillator as indicated below:
Figure 6: Fundamental Settings
Figure 7: First Partial Settings
Continue setting the frequency ratios of the remaining operators according to the table listed in Figure 1 above. When you have finished, click the SYNTHESIZE button in the bottom left corner of the patch window. Then click PLAY to listen to the sound.
In this Simple Bell patch, all the partials have the same amplitude provided by the LEVEL setting in the Amplitude Envelope. A better scenario would be to have separate amplitudes for each partial such that the FUNDAMENTAL has the loudest amplitude (.5), the 1st partial is 50% of the amplitude of the FUNDAMENTAL (.25), the 2nd partial is 50% of the amplitude of the 1st partial (.125)…… and so on until you finish all 8 partials. This follows the SPECTRUM of the NATURAL HARMONIC SERIES where the amplitude of each harmonic is inversely proportional to its position in the series.
Experiment with different ways to efficiently set these amplitudes.
In natural sounds, the higher partials tend to have shorter amplitude envelopes (they don’t ring as long as the lower partials):
Experiment with different ways to efficiently set separate times for the amplitude envelopes of the various partials. For example, try experimenting with the FUNCTION operator and the EXPRESSION operator. Using these operators, you can pass an EXPRESSION to ALL the amplitude fields simultaneously.
In a REAL bell, one side of the bell may not be cast to the same thickness as the other side. Consequently, there will be mall variations in frequencies which produce BEATING at various frequencies. Experiment with using pairs of slightly detuned operators (“enharmonic” partials) to simulate this BEATING effect.
Further information on SYNTHESIZING BELLS can be found on the Sound On Sound: Synth Secrets site: Synthesizing Bells.
If you have not already done so, please read Bell Characteristics.
|
OPCFW_CODE
|
Doc design proposal to use a shared CG-NAT address space by default.
Just throwing alternate option. If nexodus doesn't allow user to allocate the overlapping subnet, that can also address this issue? That way we can keep the same model, and share the devices across multiple org as well. do you issues with that approach (Except we need to do some work with IPAM)?
If nexodus doesn't allow user to allocate the overlapping subnet
If an org is assigned with <IP_ADDRESS>/24, the same subnet is not going to be assigned to the another org. If another org is trying to assign <IP_ADDRESS>/16, it won't be allowed as well, because <IP_ADDRESS>/24 is assigned to the org. We can provide list of available subnets to the users as well to pick from.
We are allocating one big subnet for sharing device, but also allowing users to create their own private subnet (bullet 5), that will restrict them from sharing the device. I am wondering rather than enforcing the big subnet on all the user (and assuming that they want to share the devices), can't we leave it to user, to allocate a big subnet if they really want to share the devices across it's organization. That will give them freedom of how they want to manage their device (shared, private etc)?
I feel like the "Simple UX Above Features" guiding principle applies here.
https://docs.nexodus.io/development/development/
In general I agree with that principal, but subnetting is one of core feature of Nexodus, so It's not just only a matter of simplicity here. Nexodus is at L3 layer, where Network Admin's role is pretty critical. Specifically for the on-prem deployment they would prefer to have some control over subnetting as per their requirements, so I am not very convince having a big global range as a default is a good choice for that scenario, but I might be wrong as well, so would love to hear others opinion on it.
Secondary concern is big ranges generally means a big blast radius, for
Scale: Can we handle mesh network of 4M devices (that's possible worst case scenario, which practically we might not hit, but it's a attach surface)
Security: one compromised device can impact larger number of connected devices, specifically devs are not really good with keeping tab on where the device is shared etc (we all have shutdown the firewall while developing stuff).
Performance: Although It's a big ip address space, but the sharing of device is not by default, it's something that user need to initiate, so every IP address in this space need to be dealt with individually (storage, iptable rules etc), so possibly 4 millions entries in iptables, rather than subnet based entries ?
Migration: In case we need to change our subnetting strategy - it's easier to open the subnets, rather than restricting the wider subnet.
I think whatever we decide here, leaving a flexibility in deciding the subnet range probably would be helpful in long run (specifically for the on-prem scenarios)
@vishnoianil so I'm just proposing that the default orgs use the simple shared IPAM namespace. Custom orgs can be configured to have a non-shared IPAM namespace. So those advanced network admins, can get that if they really want it.
@russellb updated to use the template layout.
What is the API impact? Is it just that the create-org API call now has the CIDR as optional?
Correct. We add a flag, to the create org call, which when set, you get the previous behaviour of using a non-shared IPAM on your custom defined CIDR.
What would that show when it's using the default behavior? Would it be blank? Or is there some other way to indicate "using the global range, not an org-specific one" ?
#1007 implement it by adding a private_cidr=true field the org to indicate it's not the shared CG-NAT ipam that's in use.
SGTM. I feel like this drives network slicing/VPCs and DNS up the priority ladder for next features. Slicing being lower hanging fruit that could probably be done in the same implementation as this. DNS being a heavier lift. Ty!
|
GITHUB_ARCHIVE
|
After the transaction lines are grouped into the revenue contract based on the RC grouping templates, the transaction lines within one revenue contract must be grouped into promises that are made to customers. These promises are referred to as performance obligations (POB) by ASC 606. Revenue can be recognized as or when the performance obligation is satisfied.
In Zuora Revenue, performance obligations are defined by POB templates and POB assignment rules. A performance obligation template determines the revenue recognition pattern (trigger and timing) for each distinct performance obligation and might also define cost treatment, variable consideration assignment, and any performance obligation dependencies if applicable.
Revenue release event
When you create a POB template, you must specify the revenue release event for the performance obligation. The revenue associated with a performance obligation can be released in one of the following ways:
- Upon Event - For example, upon shipment by quantity.
Upon Billing (Billed Release) - This option recognizes the exact billed percentage with respect to the line when billing data is collected to revenue.
Upon Billing (Full Booking Release) - This option recognizes the total booking amount when a bill is collected, irrespective of whether the billed value is partial or full.
- Upon Booking (Full Booking Release) - This option recognizes the booking amount when a line is collected.
- Upon Manual Release - For example, a revenue user manually performs the release of revenue for the performance obligation.
- Upon Expiry - For example, after 30 days from the sales order book date.
- Upon Satisfying a POB Dependency - For example, when a parent POB within the same revenue contract is satisfied.
Predefined release events are provided for you to select when you create a POB template. You can also create your own revenue events in Setups > Application > Event Setup based on your business needs. Both the predefined and user-defined revenue events can be displayed when you create a POB template. For more information about release event setup, see Event Setup.
If the POB release is Upon Event with process type as quantity, any manual or mass action of revenue deferral or release will break the integrity of revenue recognized with event processing.
To perform revenue recognition, a ratable method must be specified for each performance obligation within a revenue contract. A ratable method describes how Zuora Revenue will schedule revenue based on the triggered release events and how Zuora Revenue interacts with the start and end dates that come in with the sales order transaction line.
For example, when a revenue action such as Upon Delivery By Qty triggers a release of revenue on a performance obligation, the ratable method that is assigned to the POB template determines whether the revenue of a performance obligation is scheduled for immediate recognition or whether the revenue is scheduled over a duration of time such as contract ratable. The Contract Ratable method indicates that the release of revenue is based on the revenue start date and end date of the sales order. When you create a POB template in Zuora Revenue, you must select one ratable method.
For information about the predefined ratable methods, see Predefined POB ratable methods.
POB assignment order
Zuora Revenue identifies the correct performance obligation template to assign to the transaction lines within a revenue contract by attempting all available POB assignment rules in the following order:
Assignment - By Attributes
Assignment - By Item/SKU#
If Zuora Revenue does not find a rule to assign a performance obligation template, Zuora Revenue assigns the Auto POB template by default.
For Zuora Revenue to automatically group transaction lines into POBs within a revenue contract, complete the following tasks:
Create POB template. For information, see Create POB template.
Define POB assignment rules. For information, see Define POB assignment rules.
For information about consolidated POBs, see Consolidated performance obligations.
|
OPCFW_CODE
|
As you know, System Center Essentials (SCE) provide you with both Update Management and Software Deployment. Since SCE uses Windows Server Update Services (WSUS) as underlying technology for both of these functions, the configuration of the client detection time and interval is done through Group Policy (this is done when you run the Feature Configuration Wizard in SCE and select domain policy). The default values for this is:
- Schedule install time = 03:00
- Automatic Updates detection frequency = Every 22 hour
Note that both of these can be changed to fit you environment.
So, if all of your machines are online at this time they will get all of the updates and all of the applications you approved for them. The problem I have seen is when you see (through the console) that one or more clients "Needs" updates or applications and you just want to "Click Install Now". As default, this is not possible in SCE and the option you have it to use Remote Desktop or visit the computer. The two tasks you have by default in SCE is:
- Detect Software and Updates Now – This tasks only download the updates to the client and inform the user that they are available but the user need to click "Install" or wait for the schedule time to apear
- Collect Inventory – This task actually do exactly the same as above
The solution to this problem is to build your own task that run a script that both download and install updates and software and then report back what’s been installed.
- Start the SCE console, click Authoring and then expand Management Pack Objects node
- Right-click Tasks and select to create a new task
- In the Create task wizard – Task Type, select Agent Task and Run a script and then select your destination management pack and click Next
- In the Create task wizard – General Properties, input a task name and a description and choose target (I would recommend to use the Windows Computers as target). Click Next
- In the Create task wizard – Script, select as below and then click Create:
- File Name = WSUS.vbs
- Time Out = This depend on the time it will take to install the updates. In my tests I have selected 1 hour
- Script = Se below
‘ Written in 2007 by Harry Johnston, University of Waikato, New Zealand.
‘ This code has been placed in the public domain. It may be freely
‘ used, modified, and distributed. However it is provided with no
‘ warranty, either express or implied.
‘ Exit Codes:
‘ 0 = scripting failure
‘ 1 = error obtaining or installing updates
‘ 2 = installation successful, no further updates to install
‘ 3 = reboot needed; rerun script after reboot
‘ Note that exit code 0 has to indicate failure because that is what
‘ is returned if a scripting error is raised.
Set updateSession = CreateObject("Microsoft.Update.Session")
Set updateSearcher = updateSession.CreateUpdateSearcher()
Set updateDownloader = updateSession.CreateUpdateDownloader()
Set updateInstaller = updateSession.CreateUpdateInstaller()
WScript.Echo "Searching for approved updates …"
Set updateSearch = updateSearcher.Search("IsInstalled=0")
If updateSearch.ResultCode <> 2 Then
WScript.Echo "Search failed with result code", updateSearch.ResultCode
If updateSearch.Updates.Count = 0 Then
WScript.Echo "There are no updates to install."
Set updateList = updateSearch.Updates
For I = 0 to updateSearch.Updates.Count – 1
Set update = updateList.Item(I)
WScript.Echo "Update found:", update.Title
updateDownloader.Updates = updateList
updateDownloader.Priority = 3
Set downloadResult = updateDownloader.Download()
If downloadResult.ResultCode <> 2 Then
WScript.Echo "Download failed with result code", downloadResult.ResultCode
WScript.Echo "Download complete. Installing updates …"
updateInstaller.Updates = updateList
Set installationResult = updateInstaller.Install()
If installationResult.ResultCode <> 2 Then
WScript.Echo "Installation failed with result code", installationResult.ResultCode
For I = 0 to updateList.Count – 1
Set updateInstallationResult = installationResult.GetUpdateResult(I)
WScript.Echo "Result for " & updateList.Item(I).Title & " is " & installationResult.GetUpdateResult(I).ResultCode
If installationResult.RebootRequired Then
WScript.Echo "The system must be rebooted to complete installation."
WScript.Echo "Installation complete."
Open the Computer or Monitoring View and select the client/server you want to update and then select the task that you created above.
Example of the result of the task on a computer that needs one update and the installation is successfull and the computer needs to be restarted
Example of the result of the task on a computer that doesn’t have any updates
|
OPCFW_CODE
|
Quansight Labs is an experiment for us in a way. One of our main aims is to channel more resources into community-driven PyData projects, to keep them healthy and accelerate their development. And do so in a way that projects themselves stay in charge.
This post explains one method we're starting to use for this. I'm writing it to be transparent with projects, the wider community and potential funders about what we're starting to do. As well as to explicitly solicit feedback on this method.
Community work orders
If you talk to someone about supporting an open source project, in particular a well-known one that they rely on (e.g. NumPy, Jupyter, Pandas), they're often willing to listen and help. What you quickly learn though is that they want to know in some detail what will be done with the funds provided. This is true not only for companies, but also for individuals. In addition, companies will likely want a written agreement and some form of reporting about the progress of the work. To meet this need we came up with community work orders (CWOs) - agreements that outline what work will be done on a project (implementing new features, release management, improving documentation, etc.) and outlining a reporting mechanism. What makes a CWO different from a consulting contract? Key differences are:
- It must be work that is done on the open source project itself (and not e.g. on a plugin for it, or a customization for the client).
- The developers must have a reasonable amount of freedom to decide what to work on and what the technical approach will be, within the broad scope of the agreement.
- Deliverables cannot be guaranteed to end up in a project; instead the funder gets the promise of a best effort of implementation and working with the community.
Respecting community processes
Point 3 above is particularly important: we must respect how open source projects make decisions. If the project maintainers decide that they don't want to include a particular change or new feature, that's their decision to make. Any code change proposed as part of work on a CWO has to go through the same review process as any other change, and be accepted on its merits. The argument "but someone paid for this" isn't particularly strong, nor is one that reviewers should have to care about. Now of course we don't expect it to be common for work to be rejected. An important part of the Quansight value proposition is that because we understand how open source works and many of our developers are maintainers and contributors of the open source projects already, we propose work that the community already has interest in and we open the discussion about any major code change early to avoid issues.
|
OPCFW_CODE
|
Compile cpp without corresponding header
I've just been given my first real C++ application on the job after working through some books learning the language.
It was my understanding that your cpp source files required the cooresponding header, yet one of the libraries in my project is building fine with a number of cpp files that DO NOT include the cooresponding header. This particular cpp implements a class found in a header that has a different name and a number of other pieces of code beyond just the original class declaration.
How is it that the cpp can compile functions belonging to a class that it has no knowledge of?
Can the implementation of these functions be compiled independently and are simply called when a client application using the library (and including the header with the class declaration) calls the corresponding member function? If this is the case, how is the implementation binary referenced by the client application?
(I assume this is the linker...but I would love to have this cleared up).
I anticipate the answer may expose a misunderstanding of mine with regard to the include and compilation process, and I'd really like to learn this aspect of C++ well. Thank you!
A header file is just textually included in the translation unit. There's no magic. On the contrary header files are astoundingly crude relics of the 1970s. You don't need header files at all, not that that is a good idea to take that idea seriously. If you have a translation unit that does not offer any functionality to other actors in the system then there would be no need for a header. A good example would be a small translation unit containing your main function.
have a look at this SO qn about compilation and linking : http://stackoverflow.com/questions/6264249/how-does-the-compilation-linking-process-work
When a c++ source file is compiled the first stage it goes through is preprocessing.
When the include directive is reached the file is found and the entire contents of the file, whatever that may be is included into the source file, as if it had been written in the source file itself.
You will be able to define any function from a class in any source file that includes the class's declaration, this is the source file "knowing" about the class / function".
There's also no requirement that the contents of a header and a source file will have any relationship. It's widely considered to be very good practise however.
The implementation of each compilation unit (a source file) is compiled independently. Any function definition could be placed in any compilation unit, and it would make not difference whatsoever. When the compilation units are linked together the usages of every declaration are matched to all the definitions.
The only other pattern that some people might use other than the 1:1 relationship between source files and header files (that I can think of) is that the header files each describe a class and each source file would implement a collection of related functionality. But this is a bad idea (in my opinion) because it would encourage the definitions of various classes to because highly coupled.
These are several questions. You should try to split these.
The name of the files where something is declared is not relavant. The compiler gets the preprocessor output independent from the files that have been read by the preprocessor. The compiler might be use some file/line information in the preprocessed file to issue more readable diagnostic messages.
When you declare a class in a header file and copy that declaration to a implementation file everything works fine as long as you don't change one of the copies of the declaration. This is bad dangerous and should be avoided. Declare anything always once.
When you compile an implementation of a class member function you get a function that the linker can link to your client program. A good tool chain is able to link only the functions that are accessed. So you can separate the interface (in the header files) from the implementation that is provided in a static library. The linker will add each object module in the library to your executable until all symbol references are resolved.
|
STACK_EXCHANGE
|
Uh oh... what should I write. This is the first blog post ever about Ternadim, but I'll get down to business right away.
During the past two months, Ternadim got a lot of not-so-obvious additions, tweaks and fixes. Here are some hilights:
- Map generator now generates cliffs (they're awesome)
- Grass and sand now have visible grids. I have been alternating between not having and having a grid, and I think I will now stick to having a visible grid.
- Fog of war is now based on line of sight - you can hide units behind trees, hills and in pits.
- Now the AI respects the fog of war! If you stay hidden, they won't know where you are - but they will search for you.
- The mission editor was almost completely rewritten - it is now based on a simple keyboard-based interface. The old interface was too clumsy for actual usage; using this one is quite enjoyable once you get the hang of it.
- And now there are two re-made campaign missions for each faction, representing the somewhat altered style in which they all will be created.
So then, time-travelling to about two weeks ago, I was once again playing a randomly-generated Ternadim mission just for the fun of beating the silly AI in some unexpected situation (if you haven't tried, you should!), and...
Well honestly I don't even know when I first thought of it, but in any case I decided that making a very streamlined online mission and high score repository would fit the usage very well.
So there comes...
...this thing called "Ternadim Online".
The basic idea is obvious: You play stuff, and if it appears to be a good challenge, you input some nickname and press "Upload" to send the mission (wherever it came) to the Ternadim server. Or if you just finished it, the mission is sent along with the high score, and the server sorts out whether it's a newly uploaded one or not.
At any time, anyone owning a copy of Ternadim can browse and play the uploaded missions.
There are three things to point out:
- This system does not do anything automatically in the background. You only contact the server if you press an "Upload" or "Download" (or the "Online") button.
- It does not use usernames or passwords. When being downloaded, each client gets their own client key (the key.dat file), which the server will check. These can then be used for mitigating abuse. You're free to use any nickname or remain anonymous.
- The user interface is brutal: You directly see hashes (checksums) of the missions and the client keys.
Mission file format
Since Skirmish Alpha 2 (a looong time ago), the file format was reworked. It is now based on JSON and should be quite easily interoperable with third-party tools if needed. There's some info about it in the README.txt file distributed with Ternadim. However, the format is still subject to changes.
Here you can download a copy of Ternadim Online Alpha 2:
- ternadim_online_alpha_2_win32.zip (2.9MB zip, win32/wine)
Unless I decide to release a yet another alpha, this alpha version will have access to the server all the way until the official release.
You can come and chat about it on IRC at #8dromeda @ Freenode. (webchat) There's no forum or anything at least yet. (Reddit works too, though.)
Next I'll be putting all of my silly ideas into the campaign mode of the game, possibly adding some other stuff along the way.
The release will be absolutely glorious!
Once finished to version 1, Ternadim will be sold as pay-what-you-want. You will get an online key if you pay $1 or more.
I have decided to use Gumroad as the selling platform - it's very indie-do-whatever-you-want friendly and has just the right amount of integration ability. One thing worth noting is that it does not support PayPal, but it's credit card checkout is extremely simple. (I tested it; now I wonder why anyone would use PayPal for something like this...)
- A of 8Dromeda
Devlog post of Ternadim: http://t.co/iFT1G2pcz5— 8Dromeda Games (@8dromeda) June 16, 2013
- 2015-09-15: Developing Soilnar: Part 5 (days 37-54)
- 2015-03-23: Implementing things #4 (days 10-36)
- 2015-02-28: Implementing things #3 (days 6-9)
- 2015-02-23: Implementing things #2 (days 3-5)
- 2015-02-21: Implementing things
- 2015-02-19: The Soilnar plan of 2015
- 2015-02-12: What the fuck?
- 2013-11-24: Random Dev Notes #1
- 2013-10-26: Soilnar Progress
- 2013-10-11: Third Iteration: Intrusion Underground
- 2013-10-02: Check Twitter
- 2013-08-09: Scaling: The MMO Kind Of
- 2013-07-27: Soilnar July Update Part 4/4
- 2013-07-26: Soilnar July Update Part 3: Bitaeli
- 2013-07-24: Soilnar July Update Part 2: Network Optimization
- 2013-07-23: Soilnar July Update Part 1: Scaling
- 2013-06-20: Miscellaneous Questions
- 2013-06-16: Ternadim Progress
- 2013-04-01: Soilnar April Update
- 2013-02-02: Project Soilnar
- 2013-01-31: First Post!
|
OPCFW_CODE
|
Often, we think that the primary obstacles to innovation are technological. Can we run the right analytics on this data? Will we be able to maintain service performance given the potential scale of our peak workload? What do we have to do to make this system more secure?
But nothing happens without money. And, when it comes to software, monetary requirements are largely dictated by vendor licensing.
Unfortunately, software vendors are struggling to keep up with the reality of the New IT, especially when it comes to virtualization and the cloud.
Virtualization presents new licensing challenges, because it changes the way IT organizations utilize machines and processor cores. So it has rendered conventional models of utilization for a given instance of a piece of software obsolete.
Some vendors have responded to this change by treating virtual machines somewhat like they once treated physical hosts. Other vendors continue to focus on the physical host-but place restrictions on how many VMs can reside on those hosts. Others still refuse to embrace the notion of virtualization in any meaningful way.
This fragmented licensing landscape is a real problem for IT. For one thing, licensing schemes that make it excessively expensive for IT to leverage virtualization have a chilling effect on data center evolution. For another, disparate licensing schemes add to already burdensome IT complexity. In addition, a large part of the challenge in predicting software costs is due to the consistent changing of licensing schemes in unpredictable ways and at unpredictable intervals by vendors.
Things get even trickier when IT starts to adopt private, public and hybrid clouds. If I have an enterprise license for my database, why should my cloud provider have to charge me for hosted instances of that database? Conversely, if I am paying for cores and/or VMs, how do I make sure I stay in compliance as I automatically and dynamically move workloads between my own data center and one or more cloud providers?
These are, of course, technological issues as well as monetary ones. To effectively govern software compliance across hybrid cloud environments, IT needs open, standardized protocols for exchanging information about entitlements and usage. These protocols will enable IT, software vendors and cloud providers to provide each other with the transparency and automation necessary to track compliance and make fact-based decisions about appropriate licensing fees.
But protocols alone won't do the trick. Software vendors, cloud providers and IT organizations also have to decide upon policies that will appropriately compensate developers for their IP without hampering IT's ability to take advantage of the tremendous innovation taking place in the delivery of digital infrastructure capacity. Until those policies are clarified, licensing will remain the land mine that keeps IT from striding as boldly into virtualization and the cloud as it ought to.
How are you handling software licensing as you move to the cloud? Have you run into any major problems or cost snafus? Feel free to share your experiences and solutions below.
Chris O'Malley is CEO of Nimsoft. He has devoted 25 years to innovation in the IT industry -- most recently growing businesses in cloud and IT Management as a Service solutions. Contact Chris via the comments below or via Twitter at @chris_t_omalley.
This story, "Software Licensing Slows Innovation" was originally published by Computerworld.
|
OPCFW_CODE
|
Hi everyone, I am new to this site but not new to scjp exam. Unfortunately, I failed on the first try for scjp 1.2. I know the reason is that I only reviewed the exam book without doing any real practice.(I didn't take any single mock exam/questions)Now, I Really want to wrap up and retake the exam. need ur help on thest two questions: 1. Do i need to pass scjp1.2 before taking scjp1.4? (can I take scjp1.4 this time?) 2. Due to some reason, I have to take the exam soon. I have already reviewed exam books, I know there is plenty of practice exams but I only have time to take some of them. can anyone recommend mock exams which are really good and must-do (difficult level is similar to real exam)from your own experience?
Hi, I don't think there is any need whatsoever that one should re-appear for SCJP1.2 to take SCJP1.4 as long as you are comfortable with the new objectives listed for SCJP1.4. In the first place forget bad xperience and start afresh..so forget you have even attempted 1.2 . Regarding the mock exams....u have come to the right place..JavaRanch mocks are indispensible source for information on this.You come across many links to mock exams.Not but the least Whizlabs is also good..their objective wise test are good. All the best.. Hope this answers your doubts..
"redshirt me"- Couple things.... 1st thing: Welcome to the JavaRanch! We like to keep a nice professional lookin' image... (we don't want anyone to show up the Moose). So, can you please adjust your displayed name to match the JavaRanch Naming Policy? Basically it should be a believeable and not obviously fictitious first and last name. You can change it here. 2nd thing: You can find a whole list of mock exams here in our FAQ. the most popular free exams are Marcus Green's and Dan Chisholm's Good Luck! and again welcome to the JavaRanch!
My understanding is that Sun recognizes the different releases as separate certifications. In other words, you can have a 1.2 certification, but not be certified in 1.4 (of course). The reverse is also true, where you can have a 1.4 certification but not a 1.2. Whether you need one or the other probably depends on your objectives and what your employer wants. I personally prefer the 'latest and greatest' approach, but many would take point that understanding 'older' (even by a few months) technology has its place - for example, if you will be working with applets much. If you have to stay compatible with older browsers, then that might change what you'll study. Just my thoughts!
-nothing important to say, but learnin' plenty-
|
OPCFW_CODE
|
Jump to navigation Jump to search
- 阅读 microformats.org 自愿公共领域声明并考虑将 Template:public-domain-release加至您的用户页面,以将您在 microformats.org 所作的贡献发布至公共领域。
- 请尝试使用网络中继聊天(优先)或邮件列表(请先阅读Mailing Lists)。
- If you write something opinionated, sign it with your username - you can easily do so with a datetimestamp in MediaWiki with four ~s, e.g.: ~~~~.
- 请遵守页面的 命名规则。
- 请勿使用Talk页面,参见 #3。
- 标题可以使用 <h1> <h2> 等标记,以防止其在目录中出现。如果您在编辑文章的过程中遇到这种标题,请勿用 "=" 或 "=="格式将其替代。
- 请勿修改标题文字, 哪怕只是大小写或是为了使之符合上述标准。 标题时常被当作永久链接,而更改标题会破坏这种链接关系。所以在创建标题时请谨慎。 You may change a heading if you are careful to leave an empty
<span id="oldheadingID"></span>in front of the heading with oldHeadingID set to the necessary value to maintain heading permalinks.
- Avoid global editorial wiki changes / edits (e.g. the same or similar edits applied to numerous pages, say, more than a dozen or so pages). If you have an opinion on how to globally improve something stylistically or editorially on the wiki, please add it to your section on the To Do page, and then perhaps ask the community using the microformats-discuss mailing list what folks think of it. Interpret absence of response(s) as disinterest and thus implicit rejection. Admins may from time to time do global wiki changes to remove spam, repair damage done by other global wiki edits etc.
- Please avoid simple contradictory responses such as "No" to questions and issues. Instead provide at least a short sentence with a reason which provides information beyond what is provided in the question or issue.
- Do not remove "red links", nor create empty / placeholder "..." pages for them just to make them not red. The red links usefully communicate a need or a desire for that page to exist, and the person expressing that desire may not be the same person that is able to take the time, or has the necessary skill/background to draft such a page. The links to pages not yet created often serve as an effective (and easy to execute) "to do" list. Removing those links makes it harder, less convenient to do so. (One exception noted so far: red links to non-existent microformats e.g. "hbib", should be delinked, as it is desirable for it to be harder/less convenient to create new microformats). Finally, as such links do provide information, they are not redundant.
- Do not use the MediaWiki "Categories" mechanism. As with "Talk" pages, this community does not use all the features of MediaWiki.
- Do not create new "User:" links by hand. User: links should only created as a result of users actually signing their edits with ~~~ or ~~~~. That way each User: page will correspond to an actual login, rather than accidentally linking to a page which doesn't represent a login. If you see a red link which appears like it should be a User: link, e.g. [[DavidJanes]], rather than editing the link in place, create a redirect at the destination of the link to the person's User: page.
- Check "what links here" before moving pages, and fix any links to the page you're moving, if appropriate.
If you see something which you think needs massive cleanup on the wiki, please point it out to admins on the irc channel or microformats-discuss list.
|
OPCFW_CODE
|
Some advice to people who wish to advocate Linux. Especially to those who want to criticise other operating systems in the process.
16 August 1996
Linux is my operating system of choice. I want to have many people use it, if only because then it has a higher chance of surviving. Many people want the same thing, and some of them are actively telling about Linux to other people, often in public newsgroups. This is advocacy, and it's one good way to spread the word on Linux.
Unfortunately, some of these people seem to suffer from what has been called the Amiga syndrome: whenever anyone discusses computers, the stereotypical Amiga user will always claim that the Amiga is a better, faster, cheaper, more user-friendly computer than any other, ever, and any opposing view is treated as treachery, oppression, and a declaration of nuclear war. Some Linux users are using the same tactics. They make both themselves and Linux look bad. I'd like to stop this by making a few suggestions for advocating Linux better.
There's no enforcement of these suggestions, nor will there be (the mere idea is horrible). Linux users are supposed to have brains. If you don't think I'm making sense, fine. One of us might consider the other an idiot, but that's life.
Stay calm. There's no reason to get excited. If someone says something about Linux that you don't like, so what? It's just computers, it's not important.
Don't take it personally. Even if Linux is your dream system, there's no reason to be offended if someone points out problems with Linux (even if you wrote that part of Linux, which you probably didn't). It's not a statement about you personally. If they flame Linux users, they're idiots and you should ignore them. They're probably just trying to get some attention.
Ignore flame baits. Like I said, some people just want attention. They enjoy starting long flame wars by crossposting something insulting to several unrelated groups (e.g., both to Windows and Linux groups). Don't respond to these posts. It isn't productive.
Stick to facts. If someone says something wrong about Linux, reply with the correct facts. Make sure they're facts, though, not just something you heard about. Don't spread lies or rumors. Check your facts. If you don't know how to do that, then perhaps you shouldn't take part in the discussion, except perhaps by making questions. Even better, give references so that other people can also check the facts.
Linux is not flawless. Linux has bugs, including design problems. If someone points out something that is wrong with Linux, acknowledge it and do something constructive, like forward it to the proper maintainer or fix it yourself. Find a workaround. Write a summary of the problem and make it publically available. Don't just whine.
Don't flame other systems. Perhaps Windows does crash more often than Linux (although I have no hard data on this, just anecdotes, so I don't know if it is true; remember, facts only). That doesn't mean you tell it to every Windows user. If you must say something about other systems, keep to facts (and make doubly sure they're facts) and present them politely.
Don't flame people because they use other systems. Ever.
Bill Gates is not Satan. Some people claim that Microsoft's business practices are immoral (or at least overly predatory). I don't know if this is true, but using such claims as arguments does not make the discussion productive. Conspiracy theories sound really, really silly (as long as they're theories; feel free to provide verified facts).
We aren't taking over the world. There's no reason to get offended if someone claims many more people use Windows than Linux. It's true. It doesn't matter. No-one knows how many Linux users there are. That doesn't matter, either. Market share isn't the goal. Solving problems is the goal. Having fun is the goal.
Linux can't replace Windows. Windows has applications that Linux lacks. There's no reason to get excited about it. Windows can't replace Linux, either. No system is perfect for all things. Don't make yourself look ridiculous by claiming that LaTeX is a better wordprocessor for the masses than MS Word. If you want Linux to have better applications than Windows, write them or encourage others with something better than talk.
Avoid crossposts. Many advocacy discussions live long because they're crossposted to many popular groups for specific systems. Whenever someone says something about one system, there's a whole bunch of people who will jump on him, just because he's supporting a system different from their's. If you must crosspost advocacy discussions, only crosspost to advocacy groups (such as comp.os.linux.advocacy). Never, ever crosspost to other groups, it ruins them. If you respond to an advocacy thread that is crossposted to a non-advocacy group, remove the non-advocacy group.
Keep to the Linux groups. Don't go to non-Linux groups to pick a fight. Each advocacy group exists for discussion about one particular system. Don't try to invade other advocacy groups. That's rude. No-one likes big-mouthed strangers.
|
OPCFW_CODE
|
It took me many years to learn the fundamental principles of creating a game. Although I can not claim to have a complete grasp on the matter, I do believe that what I have learned would be useful for those wishing to enter the field, but are unsure of how. The Beginner may find themselves in a new world of immense complexity with which they have little to no experience, and many of the tutorials already existent seem to either cover a very specialised topic, or already assume that you have a general comprehension of the process of developing a game. However, my initial motivation to write this tutorial was perhaps not as grandiose. One day I sat down and realized that I was bored. After a dilemma of perhaps twenty painfully long seconds, I have decided to write this tutorial, outlining the basics of developing a game. This tutorial should help you understand how to create a game, or at least what to look for in order to learn what you need to do. Of course, this is not an in-depth tutorial, for such a task would require the composition of books as opposed to posts. But with luck it should prove to be useful for those of you who desire to take up the occupation of creating a visual and interactive manifestation of your ideas.
What you may not find in this tutorial is a description of how to create a 2D game. While I may add this information sometime in the future, at the moment I will rudely redirect you to perhaps read into the wonders of SFML and RenPy ^_^
The general intention is for you to use this tutorial as a starting point to launch your study. Step 3 in particular is most beneficial if you take your time to google the things mentioned, looking at them from different perspectives. Information on them is widely available - it may simply be difficult to know what to look for for those unfamiliar with the concepts.
Before we commence, I should note a few prerequisites. The main one is: you must be willing to learn. The journey of creating a game may be turbulent if pursued in haste, and you may save some time by ensuring that you are willing to embrace genuine criticism and advice. There are many young (and maybe even not-so-young) developers who enter the scene by immediately attempting to organise a team on some forum, often with the subconscious goal of basically having others work on what they feel is a good idea they have. Indeed, I was once such a person. I was rejected on many occasions, and when I was not the team merely fell apart within days. In retrospect, the forum posts I made at the time may be some of the most embarrassing things I have ever produced. Such behaviour should be avoided. Instead, one should begin by learning on their own, and perhaps making a small game or two. Otherwise, one can hardly claim to have the experience to be taken seriously. I am merely warning you so that you would not make the same mistake as I did. If this was indeed your approach, the following point may prove illuminating.
Ensure that you are not afraid of learning, especially the more technical aspects of development. Programming may seem a daunting thing to learn. It would take months, so is it worth it? This may apply to a variety of topics (in this case, my opinion would be "yes"). However, learning these things is often quite fun if you are patient, and honourable goals in their own right. Patience is key to game development, which can often be a time-consuming and error-prone activity. Not unlike more traditional art, truly.
Anywho, the more technical prerequisites would include an internet connection, and the ability to perhaps download at least a few hundred megabytes of material. The desired power of the computer would mainly depend on the type of game you wish to create. I personally have managed to write a pong clone on my Android tablet in under a few days, but I would certainly not be able to boast the same for a game with graphics rivalling GTA V, which would require a modern desktop machine to create. Aside from that, your most useful tools would be patience, and persistence.
If you have any questions or comments, I would be interested to see them and perhaps modify this text in accordance to the new information/request.
Now that we are done with the tiresome introduction, we may begin ^_^
|
OPCFW_CODE
|
Microsoft Office 2010 Professional Plus Product Key working Product key. Microsoft Office 2010 Product Key Generator is efficient software that integrates office tools for experts. Thus this is the time when you need genuine activation keys and you can activate the software. Guide How do I activate my Microsoft Office 2010? You have to use and enter the correct and working Office keys. Across the several templates, a user can add logo graphics and business information in real time. Please check activation status again. However, the software has a more elegant feel, makes it quite easy to tuck out, and collapse.Next
Thirdly, a detailed guide to how a third- party app can retrieve lost keys. Microsoft Office 2010 Product Key Generator allows you to generate the product key. Follow the steps below and you will be able to completely activate your Office. Microsoft Office 2010 Product Key generator has the simple user interface with additional features. After activating your version of Microsoft office you can avail full features of any of office 2010 applications including Microsoft word 2010, Excel, Outlook and PowerPoint.Next
Even you can create tables, texts, graphics and other data according to your own choice by using its newest styles and fonts as well. Both 32-bit and 64-bit client application are supported by Office 2010. Office is certainly the best Office suite out there. This program offers the finest choice to make it a beautiful interface which created inside it. Additional Microsoft Programs In addition to popular programs from Microsoft Office, there are other programs designed to help with more specialized work.Next
It is still pretty expensive for the average consumer, as popular as it is. When the wizard asks you to enter the key, you simply need to follow the steps below: Step 1. One of the goods of Microsoft is that the Microsoft Office Professional 2010 crack. Microsoft Office 2010 Pro Product Key supplies over that. If you run a business, you can opt for an Office software package licensed for business use, which includes these programs as well as Outlook®. Microsoft office 2010 product key generator hold advanced feature; you can share data easily.Next
Excel is used for the production of spreadsheets where information could be added. Choose any one of the activation key or product key to activate your Microsoft Office 2010. It is literally easy to manage and Activate by per the Internet the activation virtuoso automatically. Well, now you have access to all office tools there. That will save your time and reduce the chances of error. It is then stated that Office 2010 is the successor to Microsoft Office 2007.Next
You can make your text, tables, graphics and even entire document attractive and acording to your own choice by using its novel and newly introduced styles and themes. Menu categories are represented just like tabs in the web browser. The option of database manager Access also has been upgraded to many product key improvements to collect and organize the data in a more significant way, which is being followed by their users. If you are the owner of a company or you are the employee in any organization, Microsoft Office 2010 Product Key permits you to do your job fine and provides many facilities like as auto saving data. If you simply fail to use the genuine key, during installation, then you will simply fail to install your copy of the office. As previous versions, it is much better in performance. How to activate Microsoft Office 2010 using Product Key? Office 2010 Professional Plus Product Key includes Word, Excel, OneNote, PowerPoint, Outlook, Publisher, Outlook with Business Contact Manager, Communicator, access, SharePoint, and InfoPath.Next
Microsoft Office 2010 Product Key Generator. The updated ribbon interface on the Microsoft Word 2010 in this regards makes it easy to find features such as colors, fonts, bullets, and outlines. There are a lot of new fonts that user can set to enhance the look if your data. Microsoft Office 2010 Product Key Generator is right tool given to you there for free. Whether you are looking for the entire Microsoft Office Suite or extra programs like Publisher or Access, we carry a variety of software options to get the right software for your home or business. Well, this Article here covers all the keys that will make your activation of Ms Office. Microsoft Office 2010 Product Key activate all Office functions.
Microsoft Outlook is the information manager for a user that is often used as an email application. Microsoft Office 2010 Product Key Free for You Microsoft Office Key: Well, you need to have a genuine Microsoft Office Product key, in order to make the copy of your software licensed. The Outlook program is a platform that bridges the gap between the pc and email. Since its beginning into the general public, lots of individuals have known the stability and reliability of the software concerning usage. The suite realizes the requirements of professional plus educational as well as business concerns. Students can find programs that meet their needs, or you can buy individual programs instead of the entire Office Suite.
|
OPCFW_CODE
|
Let’s say you buy a coffee machine. You go home, plug it in and it does what it’s supposed to do – make coffee. Depending on how easy it is to use, how well the coffee tastes, etc … you experience a certain level of satisfaction. Now suppose one day the coffee machine suddenly stops working. Obviously, your satisfaction will drop sharply. You call customer service and after a lot of try this and try that, all eating up your precious time, it works again. Your satisfaction will rise again, but not to the initial level. You might even decide that next time you’ll not buy from this company again, they don’t seem to be able to provide functioning machines.
This is how the story usually goes. But there’s a very interesting paradox that can lead to a surprising outcome. Given certain conditions (see below) and an exceptional customer service, studies have shown that after the failure the level of satisfaction can rise above the initial level. In other words: customers who have experienced a problem with the product and have been successfully helped by the manufacturer’s customer service can be more satisfied with the company than those customers who have not experienced any problem at all. This is called the service recovery paradox. A widely cited work regarding this paradox by Hart et al. (1990) in the Harvard Business Review states: “A good recovery can turn angry, frustrated customers into loyal ones. It can, in fact, create more goodwill than if things had gone smoothly in the first place”.
I made a crude graph to visualize this situation. Note that in a standard recovery the level of satisfaction rises, but not beyond the initial value. This is the situation we usually experience. A paradoxical recovery propels the level of satisfaction past this initial value.
There are some conditions that need to be met in order for the paradox to be able to occur.
The effect of the severity of the failure
According to McCollough et al. (2000), satisfaction varies with the severity of the failure. Many service problems that customers experience are only mildly annoying, while others can be very severe. Hoffman et al. (1995) state that the higher the severity of the failure, the lower the level of customer satisfaction. Consequently, the existence of a recovery paradox depends on the magnitude of the failure. For example, perhaps an apology, empathy, and compensation could create a paradoxical satisfaction increase after a 20-minute wait at the front desk of a hotel. But would this paradoxical increase occur if the wait caused the guest to miss a flight? It is unlikely that any realistic recovery is capable of completely erasing the harm caused by such a severe failure.
In the event of a service failure, a recovery paradox is more likely to occur if the service failure is less severe than if the failure is more severe.
The effect of a prior failure
A person’s satisfaction is a cumulative evaluation of all experiences with the firm (Cronin and Taylor, 1994). If the service failure occurred in a one-time only use, then the satisfaction judgment would be transaction-specific. However, an individual generally has a history of interactions with the firm, in which case satisfaction reflects the cumulative interactions over time between the individual and that firm (Bitner and Hubbert, 1994; Crosby and Stephens, 1987).
In the event of a service failure, a recovery paradox is more likely to occur if it is the firm’s first failure with the customer.
The effect of the cause of the failure
Service failures with persistant causes are more likely to repeat than failures with temporary causes. For example, when a hotel guest is assigned to an incorrect room category due to an outdated computer system, this could be considered a failure with a persistent cause. On the other hand, if the guest’s room assignment was botched because the front desk associate is in the initial stages of training, this could be viewed as an temporary cause. Customers are likely to be more forgiving of failures with temporary causes (Kelley et al., 1993). This is because the likelihood of a future inconvenience is minimal. Thus:
In the event of a service failure, a recovery paradox is more likely to occur if the customer perceives that the failure had a temporary cause.
The effect of perceived control
A service failure is any situation where something goes wrong, irrespective of responsibility (Palmer et al., 2000). Nevertheless, “the perceived reason for a product’s failure influences how a consumer responds” (Folkes, 1984, p. 398). Customers are more forgiving if they perceive that the firm had little control over the occurrence of the failure (Maxham and Netemeyer, 2002). Conversely, customers are less forgiving when they feel that the failure was foreseeable and should have been prevented (Folkes, 1984). For instance, did a wait occur because of a random spike in demand, or did it occur because the firm did a poor job in forecasting, planning or staffing? A bank customer may be understanding of a wait inside a bank lobby if there is an unexpected inflow of customers during a typically slow hour. On the other hand, the same customer may be less understanding if there is only one teller working during lunch hour on a Friday afternoon. Thus:
In the event of a service failure, a recovery paradox is more likely to occur if the customer perceives that the firm had little control over the cause of the failure.
For more on customer service, check out the 7 Laws of Customer Service.
|
OPCFW_CODE
|
Interesting Topic !!! Yeah ...
So if you haven't read my previous topics do read here on my profile so that we remain on same page.
I will try to tell you things in a very simplest way. Assuming you are a high-school student and doesn't know anything about the subject. Hehe... Don't worry, I am also in your class ;)
Do not skip my posts in this series else you will think at end: what we are talking about?
So lets start,
Your First Question : What all the fuss is about? Why we need clouds when we have good lands to walk?
Understand like this. Suppose, Your father gifted you a computer with Windows OS (16 GB Ram and 4TB HDD) on your 14 th Birthday. Two of your best buddies from neighborhood comes to you and request you to share some resources from your robust computer. You agree for 10 bucks for fixed resources like 2 GB RAM and 200 HDD sharing with each of your friends by a thin-client/LAN wire and they also started running Windows OS.
Then, one day three more friends come and ask you to share your CPU resources with them but they don't want windows , they want Linux.
How to do this? Then your big brother comes as savior, telling, he has two pieces of software - called Hyper-Visor-1 and Hyper-Visor-2.
Hypervisor -1 you have to install in main PC otherwise, if you are not ready , your friends can install hyper-visor 2 in their PCs and able to use whatever the OS they want.
You as a lazy but clever chap, chosen hypervisor-2 option , that is , software get installed by your friends as you don't need to make changes in your main CPU. from time to time.
You and brother started earning by renting more and more resources to friends and asking them to use hypervisors-2 .Now you have got another 20 friends sharing your Ram and HDD. Day by Day your friends are converting as customers... good days... ;)
Everything, was going good till One day they all started complaining about slow computers. Further, adding to this situation, your big brother wanted to add more 15 friends in your thin client CPU. Ya, I know your big brother has habit of poking nose in severe situations...
Now, this is the first time you are under real pressure from customers and eagerly searching for a solution.Since, every one is behind you including your brother; you decided to leave your home and go to house of your aunt for few days - searching peace. Going by road (on land) is difficult and time taking so you decided to take airplane (in clouds).
On Airport, you noticed that same air-strip is being used for landing by different airplanes.
You got an Idea,: The Idea of Resource Pooling.
You called this Idea as Cloud. Instead of running from the situation, you come back to your house and tell every friends that you are going to guarantee 2 GB RAM and 200 GB of HDD as earlier but this time you are not going for fixing it strictly as you were doing earlier in your CPU but You will pool resources now. You also tells them, with this new pooling framework, now friends will be benefited as they will pay what they use and can demand more resources when they need.
Now , you have to search where is your big brother!!! why?Dont forget he has another piece of software Hypervisor-1. Because, he will install in your CPU and will give more flexibility to you in business....
This is how 'cloud' come into this world. Thanks to you.
One day your brother said... Hey !!! you little master, Airplanes gave you Super Idea of Clouds... do Father Car driven by me, Mother usage of Uber Taxis and your School Bus doesn't worthy of any idea? The story continues in your mind Hehe.... :)
Okay here is something More...
1. Most Costly Solution: Dedicated Servers
I believe you know, what is bare metal server is? In simplest form, you can assume it as CPU of your desktop computer. They have RAM, cores, processors and Hard Drives. They are connect to world by Internet. A Dedicated Server provider sells you his servers within his premises and control. He can come to you with different options: He can lend you his servers or may ask you to purchase server on your behalf and setup in his premises & degree of control( called Co-location servers).
2. Some What Costly Solution: Virtual Private Servers- VPS
Dedicated Servers comes in various composition. The company having large composition Dedicated Server, will ask you to give you a part of it. You will be benefited by enjoying the service of VPS as Dedicated Servers.
Everything was going good when two decades earlier hypervisors made entry to the party. They started allowing different OS -( Linux, Ubuntu, windows, debian etc) on same machine at the same time. By 2005, CPU manufactures also accepted virtualization. And almost all CPU today can virtualize the different OS. To see whether your desktop CPU virtualization is enabled --> goto Bios (when pc booting up)--> Advance--> Advance--> CPU Settings---> Virtualization. An Example of hypervisor is VirtualBOX by Oracle.
Different types of Hypervisors:
An Example of Type 2 Hypervisor is well known VirtualBOX. Because we install it on windows operating system (OS) and ask it to virtualize to Linux OS .
Type 1 hypervisors examples are Hyper-V, ESX/ESXi, Xen, XenServer, KVM and OpenVZ.
3. A cheaper Solution than VPS: Cloud Servers / Computing
Most people tend to use cloud hosting and VPS hosting interchangeably. But in reality, they are quite different from each other.
Virtual Private Server, or commonly known as VPS, is a hosting solution allowing multiple servers to work as one server.
On the other hand, cloud hosting provides the same functionality like VPS but with on demand resources.
cloud offers similar functionality but the services are billed on usage. Cloud VPS hosting will instantly scale on the resources you use. These are monitored by CPU usage, bandwidth, and memory utilization. But a strong asset of clouding VPS hosting is that the resources are always available as they are distributed to many machines. This makes expansion quite flexible and easy. However, the user will have less control on achieving and maintaining expansion, as your data can be spread across many cloud machines.
Different types of Clouds:
SAAS: SOFTWARE AS A SERVICE
PAAS: PLATFORM AS A SERVICE
IAAS: INFRASTRUCTURE AS A SERVICE
Common Examples of SaaS, PaaS, & IaaS:
SaaS:Google Apps, Dropbox, Salesforce, Cisco WebEx, Concur, GoToMeeting
PaaS:AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google App Engine, Apache Stratos, OpenShift
IaaS:DigitalOcean, Linode, Rackspace, Amazon Web Services (AWS), Cisco Metapod, Microsoft Azure, Google Compute Engine (GCE)
Your Answer to your brother about cars:
In this Series we will build together Iaas: Services like DigitalOcean, Linnode, google cloud etc.
DON'T FORGET TO LIKE, TAG & FOLLOW ME.
To be continued
|
OPCFW_CODE
|
¶ 1 Leave a comment on paragraph 1 0 Posting original blog post, both edited and amended to include the system description in the “Proposed Approach” section, the “Environmental Scan” section, and the appendix.
¶ 5 Leave a comment on paragraph 5 0 As the digital age rapidly advances and facilitates the evolution of sharing scholarship and academic discussion, online editing has yet to be perfected. It remains in development through various platforms that seem to neglect a means of sustainability. However, this isn’t intentional abandonment, it is a lack of a developed platform to facilitate user interaction and growth. As the humanities grow to incorporate digital methodologies, simple editing and review at the discretion of other scholars in its various disciplines require a new platform. This project will construct and establish a new approach to online editing – one that accounts for previous attempts at the formula while accounting for needs across more than a single institution.
¶ 6 Leave a comment on paragraph 6 0 This project looks to challenge the voluntary model of paper editing and provide a new platform in which editing is incentivized, while retaining academic merit. This project looks to eliminate the need for pay to edit services, and it provides remote academics, busy academics, or even scholars with disabilities to have their work edited by credible sources without the need for transit to an institution. It doesn’t completely eliminate institutional investment, but rather acts as an alternative potentially in conjunction with the mentioned method.
¶ 7 Leave a comment on paragraph 7 0 The humanities at its core stand to be representative of the entire human experience. As the humanities trends toward technology, it’s important that technology is utilized to the largest degree in accordance with the humanities. New methods of facilitating not only the humanities’ core discussions but its basic building blocks of papers and editing is essential, as it is the true foundation of the entire experience.
¶ 9 Leave a comment on paragraph 9 0 A plethora of paid services currently exist on the web to facilitate the editing of academic papers, theses, and dissertations. Of course, most institutions offer resource centers featuring these services, but a remote academic may not necessarily be able to access them. The aforementioned web services are typically expensive, and some cannot even guarantee that the work will be viewed by an academic in a field relevant to your paper. This work isn’t looking to necessarily replace university editing centers, but rather act as a supplement to the process through facilitating remote discussions and editing exchanges. A platform currently does not exist in which academics can submit their work for editing in exchange for their own editorial services, which this project aims to provide.
¶ 10 Leave a comment on paragraph 10 0 Systems that currently look to achieve this goal operate under a gift economy. The problem of a system that operates under a gift economy is its sustainability due to required participation in a user to user relationship. This gift economy system might seem generous initially, but the rate at which feedback is generated might drop, debilitating the platform in the process as an open tool. In facilitated settings, these platforms might have success, but for natural individualized feedback outside of the classroom, there must exist a degree of incentive. What this project looks to promote is a barter economy across all of the various disciplines in the humanities. Editing would remain within a particular discipline, but the platform would incorporate spaces for papers from any field of study in the humanities to create an open accessibility environment. This platform’s goal is not to replace the peer review system, but rather facilitate stylistic and structural critiques of academic writing across the humanities by fellow academics.
¶ 12 Leave a comment on paragraph 12 0 The development of this project stemmed from an evaluation of editing services available to academics and the discussion of the future of peer review. One of the many concerns of aspiring academics is the existence of a place in which their work can be scrutinized for further review, and is convenient to suit their schedule and the potential recipients of their writing. This platform offers a unique approach to the traditional editing process through digital means.
¶ 13 Leave a comment on paragraph 13 0 This project looks to challenge the pay-for-edit services that exist and build upon already existing editing platforms. Social Paper and The Public Philosophy Journal facilitate academic discussion and editing through their respective platforms. However, they are both proprietary and limited in functionality when searching for an editing platform with consistency and accessibility.
¶ 14 Leave a comment on paragraph 14 0 Social Paper was one of the first ambitious examples of a facilitated online editing platform. It used The Graduate Center’s Academic Commons system, allowed public papers to be edited by the entire Commons user base, private papers to be edited by specific users or groups, and a modified version of CommentPress for feedback. The system is helpful for classes or groups that encourage a level of participation and feedback but doesn’t assist scholars outside of that structure. Also, the platform is contained within the Commons system, creating a degree of difficulty for any scholar not integrated within it.
¶ 15 Leave a comment on paragraph 15 0 Similar to Social Paper, The Public Philosophy Journal is a system designed to provide feedback and inspire discussions by allowing an open forum for ongoing projects. The project operates under a “current” system in which content is curated and originates on other websites. On items within this “current,” users can comment on, rate, and mark the piece as a “must read.” However, this space looks less toward editing and more toward general feedback. Another pitfall is a lack of user information or developed profiles, leaving anyone open to critique from anyone on the internet. It is also a proprietary platform geared towards philosophy, which closes the door to any other humanities discipline.
¶ 16 Leave a comment on paragraph 16 0 Edit Swap looks to take the basis of those two systems, create a new shared economy to keep the platform populated and open it up to any scholar in the humanities disciplines. It will use the same backend as Social Paper to allow users to make edits while containing it within a user to user incentivized relationship. Hopefully, through the incentivized method provided, user activity will continually increase, furthering both academia through editing and academic discussion in response to a plethora of topics in the content of submitted papers.
¶ 18 Leave a comment on paragraph 18 0 Users will initially sign up for the platform by providing academic credentials and affiliations. The credentials will exist as a part of their profile and they cannot sign up for the platform without developing a profile inclusive to their accolades, associated institutions, and publish works. The profile will be viewable through a searchable index and will be featured when looking to request edits, or when looking to edit a paper. Their profile will also feature feedback from other users that they are involved with in regards to either editing or requesting edits.
¶ 19 Leave a comment on paragraph 19 0 The work involves the balance between two roles of an academic in relation to editing: a requestor and an editor. The currency system within the platform operates through a credit system built on editing papers. Every page submitted by a requestor within a paper will require a credit while each page edited by an editor will gain a credit if accepted. A requestor can add the number of requested editors for a paper based on the amount of credits they possess. For example, a requestor with twenty credits can request two editors for a ten-page paper.
¶ 20 Leave a comment on paragraph 20 0 Page count will be held to certain word-count standards by the platform to ensure that no user attempts to place more for editing than allowable. The paper will be set with a deadline window, and once edited, the requestor will be able to accept or reject the edits. If accepted, the editor will receive the proper amount of credits, but if rejected, the editor must edit again to either be accepted for full credit, or rejected for partial credit. The administration of the platform will assume no liability for the decisions of the requestors, but a feedback system will factor into the decision of whether or not an editor would like to work with a requestor and vice versa.
¶ 23 Leave a comment on paragraph 23 0 The project manager will be responsible for coordinating with the team to ensure that the project is delivered in accordance with the provided grant. The manager will also set dates and timelines not just for the development of the project, but the testing and execution of it as well.
¶ 25 Leave a comment on paragraph 25 0 The social media outreach coordinator will be directly in charge of the promotion of the final product through a variety of outlets, as well as be responsible for answering questio
¶ 26 Leave a comment on paragraph 26 0 ns about the development of the platform. The social media outreach coordinator will be familiar with Facebook and Twitter ad campaigns, and also has knowledge of Google AdWords.
¶ 28 Leave a comment on paragraph 28 0 The WordPress developer will be well versed in HTML5, CSS3, PHP, MySQL, PHP, jQuery and Ruby. The WordPress developer will be in charge of setting up the initial user system, and backend of the WordPress. The developer will also be in charge of establishing the database to host not only user connections but the paper maintenance system as well. The developer will use jQuery to modify existing plugins to tailor them to the needs of the project site.
¶ 30 Leave a comment on paragraph 30 0 The graphic designer will be in charge of developing the front end of the website inclusive to logos, branding, and themes. The graphic designer will know how to operate Photoshop, Illustrator, Quark, Dreamweaver and HTML specifically to coordinate with the WordPress developer.
|
OPCFW_CODE
|
Installs base drivers, Intel® PROSet/Wireless Software for Windows Device Manager*, advanced networking services for teaming and VLANs (ANS), and SNMP for Intel® Network Adapters for Windows 8*. Not sure if this is the right driver or software for your component?
Check the Virtual Wireless Adapter IPv4 Properties: (hosting computer) The IP should read 192.168.XYZ.1 and the Mask should be 255.255.255.0; Few things to note. The name of the adapter for the Microsoft Virtual Miniport Adapter may be “Wireless Network Connection XY.” Where XY usually is ‘2,’ but maybe anything – mine is ‘9’. My Surface Pro tablet has upgraded to the latest version of Windows 10 Pro. The system install a strange driver and enable Microsoft Wi-Fi Direct Virtual Adapter along with my wifi card. I uninstall it from device manager and then it appears without any permission. Note: This article applies to the situation that failed to install the adapter through .EXE program and the adapter has .inf file to download. 1. Computer must already be connected to Internet using internal WiFi, then plug in AWUS036H adapter 2. Press Windows key + X key on Windows 8 desktop screen and select "Device Manager" from the popup menu 3. Under Network Adapters, click this device once to highlight, then right-click and select Update Driver Software 4.
Cara Mengaktifkan Wi Fi Virtual di Windows. Menggunakan aplikasi tersembunyi di Windows, Anda dapat mengubah laptop atau komputer menjadi hotspot jaringan nirkabel.
Jul 27, 2014 · If you have a problems with windows 8.1 Broadcom wireless adapter Like limited internet or unlimited slow loading Or if you unable to connect to 2.4 wireless router Just follow my steps and you'll How do I uninstall Virtual WiFi Router in Windows Vista / Windows 7 / Windows 8? Click "Start" Click on "Control Panel" Under Programs click the Uninstall a Program link. Select "Virtual WiFi Router" and right click, then select Uninstall/Change. Click "Yes" to confirm the uninstallation. How do I uninstall Virtual WiFi Router in Windows XP The Get Win 10 app tell me that my Broadcom Virtual Wireless Adapter will no support Windows 10 and I may not be able to connect to the internet and I need to upgrade the driver. I have the most recent driver which is version 184.108.40.206. This is for a Dell Studio 1557 with a Dell Wireless 1520 Wireless -N LAN card. Drivers Installer for Microsoft Wi-Fi Direct Virtual Adapter. If you don’t want to waste time on hunting after the needed driver for your PC, feel free to use a dedicated self-acting installer. It will select only qualified and updated drivers for all hardware parts all alone. To download SCI Drivers Installer, follow this link.
|
OPCFW_CODE
|
What are FlatBuffers used for?
What is FlatBuffer file?
Game developers, we’ve just released FlatBuffers, a C++ serialization library that allows you to read data without unpacking or allocating additional memory, as an open source project. We have provided methods to build the FlatBuffers library, example applications, and unit tests for Android, Linux, OSX and Windows.
How do you use FlatBuffer in Python?
There is support for both reading and writing FlatBuffers in Python. To use FlatBuffers in your own code, first generate Python classes from your schema with the –python option to flatc . Then you can include both FlatBuffers and the generated code to read or write a FlatBuffer.
Should I use FlatBuffers?
Flatbuffers should only be used for cases where the object is large and you normally need to extract only one or two entities out of it. This is because the code for making a flatbuffer object is much more than that needed for protobuf and JSON.
How do you install FlatBuffers?
- choice “folder for installation”
- cd “folder for installation”
- cd flatbuffers.
- cmake -G “Unix Makefiles” (install cmake if need)
- sudo ln -s /full-path-to-flatbuffer/flatbuffers/flatc /usr/local/bin/flatc.
- chmod +x /full-path-to-flatbuffer/flatbuffers/flatc.
- run in any place as “flatc”
How do you use Flatbuffer in C++?
FlatBuffers supports both reading and writing FlatBuffers in C++. To use FlatBuffers in your code, first generate the C++ classes from your schema with the –cpp option to flatc . Then you can include both FlatBuffers and the generated code to read or write FlatBuffers.
Why is protobuf bad?
The main problem with protobuf for large files is that it doesn’t support random access. You’ll have to read the whole file, even if you only want to access a specific item. If your application will be reading the whole file to memory anyway, this is not an issue.
What is Protostuff?
Protostuff is the stuff that leverages google’s protobuf.
What is Flatcc?
The flatcc compiler is implemented as a standalone tool instead of extending Googles flatc compiler in order to have a pure portable C library implementation of the schema compiler that is designed to fail graciously on abusive input in long running processes.
What is FlatBuffers and who uses it?
Some notable users of FlatBuffers: Cocos2d-x, the popular free-software 2-D game programming library, uses FlatBuffers to serialize all of its game data. Facebook Android Client uses FlatBuffers for disk storage and communication with Facebook servers. The previously used JSON format was performing poorly.
How do I use flatc flatbuffer?
Use the flatc FlatBuffer compiler. Parse JSON files that conform to a schema into FlatBuffer binary files. Use the generated files in many of the supported languages (such as C++, Java, and more.) During this example, imagine that you are creating a game where the main character, the hero of the story, needs to slay some orc s.
Where can I find the FlatBuffers file format?
google.github.io/flatbuffers/. In computer programming, FlatBuffers is a free software library implementing a serialization format similar to Protocol Buffers, Thrift, Apache Avro, SBE, and Cap’n Proto, primarily written by Wouter van Oortmerssen and open-sourced by Google.
Why can’t I See Default values in flatbuffer?
The FlatBuffer binary representation does not explicitly encode default values, therefore they are not present in the resulting JSON unless you specify –defaults-json. If you intend to process the JSON with other tools, you may consider switching on –strict-json so that identifiers are quoted properly.
How do you use Flatbuffer in Python?
How do FlatBuffers work?
FlatBuffers is a statically typed system, meaning the user of a buffer needs to know what kind of buffer it is. FlatBuffers can of course be wrapped inside other containers where needed, or you can use its union feature to dynamically identify multiple possible sub-objects stored.
Who uses protocol buffers?
Protocol buffers are Google’s lingua franca for structured data. They’re used in RPC systems like gRPC and its Google-internal predecessor Stubby, for persistent storage of data in a variety of storage systems, and in areas ranging from data analysis pipelines to mobile clients.
What is Protobuf Python?
Protocol buffers (Protobuf) are a language-agnostic data serialization format developed by Google. Protobuf is great for the following reasons: Low data volume: Protobuf makes use of a binary format, which is more compact than other formats such as JSON.
Is protobuf a zero copy?
Protobuf can support zero-copy of strings and byte arrays embedded in the message. Cap’n Proto and FlatBuffers support zero-copy of the entire message structure. > Protobuf can support zero-copy of strings and byte arrays embedded in the message.
Why do we need protocol buffer?
Protocol Buffers (Protobuf) is a free and open-source cross-platform data format used to serialize structured data. It is useful in developing programs to communicate with each other over a network or for storing data.
Where can I find the code for FlatBuffers in Python?
This page is designed to cover the nuances of FlatBuffers usage, specific to Python. You should also have read the Building documentation to build flatc and should be familiar with Using the schema compiler and Writing a schema. The code for the FlatBuffers Python library can be found at flatbuffers/python/flatbuffers.
What is FlatBuffers in C++?
How to use FlatBuffers with PSR in PHP?
// It is recommended that your use PSR autoload when using FlatBuffers in PHP. // The last segment of the class name matches the file name. $root_dir = join (DIRECTORY_SEPARATOR, array (dirname (dirname (__FILE__)))); // `flatbuffers` root. // Contains the `*.php` files for the FlatBuffers library and the `flatc` generated files.
What are the advantages of using FlatBuffers?
The Flatbuffers python library also has support for accessing scalar vectors as numpy arrays. This can be orders of magnitude faster than iterating over the vector one element at a time, and is particularly useful when unpacking large nested flatbuffers.
|
OPCFW_CODE
|
Obtain my In-Complete Publication of Electronic Songs Suggestions, Idea on Patreon:
► Instagram: https://www.instagram.com/loopopmusic.
► Tiktok: https://www.tiktok.com/@loopopmusic.
► Facebook: https://www.facebook.com/loopopmusic.
► Twitter: http://www.twitter.com/loopopmusic.
► Web: https://loopopmusic.com.
Right here’s my MPC One tutorial:.
► Patreon: https://patreon.com/loopop
1:15 Key 61 vs MPC.
1:40 The tricks.
2:30 Touch strip.
8:45 Basic process.
11:05 Fabric vs XL.
11:40 Fabric vs Stage.
15:20 Arm vs combine.
19:35 MPC tutorial.
20:10 Fixed “disadvantages”.
21:50 Empty task.
22:35 Fabric XL.
25:20 Stage EP.
27:15 Kybd ctrl.
28:05 Long press.
28:35 Swipe food selection.
29:05 Pros, disadvantages.
36:15 Ambient noodle.
► Ziv (at) loopopmusic.com.
► Sweetwater: https://imp.i114863.net/doAPZQ.
► Thomann: https://www.thomann.de/intl/search_dir.html?sw=akai%20mpc%20key%2061&offid=1&affid=369.
► Perfect Circuit: https://link.perfectcircuit.com/t/v1/S0BERUlFSUBFR0NKR0VATEpITA?url=https%3A%2F%2Fwww.perfectcircuit.com%2Fcatalogsearch%2Fresult%2F%3Fq%3Dakai%2520mpc%2520key%252061.
► Bandcamp: https://loopop.bandcamp.com.
► Spotify: http://bit.ly/LoopopOnSpotify.
► Apple Music: http://bit.ly/LoopopOnAppleMusic.
Various other equipment in the video clip:.
► None – simply MPC Key, or MPC Keys as I in some cases call it.
Send testimonial and also video clip suggestions right here (sorry I do not supply 1×1 sessions/setup/purchasing guidance):.
Examine MPC Key 61 rates right here (associate web links aid the network despite what you acquire):.
KEEP IN MIND: Occasionally I’ll attempt out associate advertising and marketing as well as consist of associate web links. Without attending to the details of items revealed right here as they could be under NDA, equipment revealed on this network might be either sent out by the producer, on lending for evaluation or gotten at a price cut.
Various other locations I socialize:.
|
OPCFW_CODE
|
Use the InspectorA Unity window that displays information about the currently selected GameObject, Asset or Project Settings, alowing you to inspect and edit the values. More info
See in Glossary window to change the playable asset properties of a Control clip. To view the playable asset properties for a Control clip, select a Control clip in the TimelineGeneric term within Unity that refers to all features, windows, editors, and components related to creating, modifying, or reusing cut-scenes, cinematics, and game-play sequences. More info
See in Glossary window and expand Control Playable Asset in the Inspector window.
Use Source Game Object to select the GameObjectThe fundamental object in Unity scenes, which can represent characters, props, scenery, cameras, waypoints, and more. A GameObject’s functionality is defined by the Components attached to it. More info
See in Glossary with the Particle SystemA component that simulates fluid entities such as liquids, clouds and flames by generating and animating large numbers of small 2D images in the scene. More info
See in Glossary, nested Timeline instanceRefers to the link between a Timeline Asset and the GameObjects that the Timeline Asset animates in the scene. You create a Timeline instance by associating a Timeline Asset to a GameObject through a Playable Director component. The Timeline instance is scene-based. More info
See in Glossary, or ITimeControl Script for the selected Control clip. Changing the Source Game Object changes what the Control clip controls.
Use PrefabAn asset type that allows you to store a GameObject complete with components and properties. The prefab acts as a template from which you can create new object instances in the scene. More info
See in Glossary to select a Prefab to instantiate when the Timeline instance plays in Play Mode. When a Prefab is selected, the label of the Source Game Object property changes to Parent Object.
When in Play Mode, the Prefab is instantiated as a child of the Parent Object. Although the Prefab is instantiated at the start of the Timeline instance, the Prefab is only activated during the Control clip. When the Control clip ends, the Prefab instance is deactivated.
Enable Control Activation to activate the Source Game Object while the Control clip plays. Disable this property to activate the Source Game Object during the entire Timeline instance.
The Control Activation property only affects Control clips that control a nested Timeline instance or a Particle System.
When Control Activation is enabled, use the Post Playback property to set the activation state for the nested Timeline instance when the main Timeline stops playing. The Post Playback property only affects nested Timeline instances.
|Active||Activates the Source Game Object after the nested Timeline instance finishes playing.|
|Inactive||Deactivates the Source Game Object after the nested Timeline instance finishes playing.|
|Revert||Reverts the Source Game Object to its activation state before the nested Timeline instance began playing.|
Use the Advanced properties to select additional functionality based on whether the Control clip controls a Playable Director, Particle System, or ITimeControl Script. The Advanced properties do not apply to all Control clips.
|Control Playable Directors||Enable this property if the Source Game Object is attached to a Playable Director and you want the Control clip to control the nested Timeline instance associated with this Playable Director.|
|Control Particle Systems||Enable this property when the Control clip includes a Particle System. Set the value of the Random Seed property to create a unique, repeatable effect.|
|Control ITimeControl||Enable this property to control ITimeControl scriptsA piece of code that allows you to create your own Components, trigger game events, modify Component properties over time and respond to user input in any way you like. More info
See in Glossary on the Source GameObject. To use this feature, the Source Game Object must have a script that implements the ITimeControl interface.
|Control Children||Enable this property if the Source Game Object has a child GameObject with either a Playable Director, Particle System, or ITimeControl Script, and you want the Control clip to control this child component.
For example, if the Source Game Object is a GameObject that parents another GameObject with a Particle System, enable this property to make the Control clip control the Particle system on the child GameObject.
|
OPCFW_CODE
|
How can negative potential energy cause mass decrease?
The mass of a hydrogen is less than its constituent parts(proton/electron). The explanation given for this is the following: Youtube
For hydrogen, $m = m_{components} + m_{extra}$ where we can write $m_{extra} = E/c^2$ where $E$ is the PE and KE of the proton/electron interaction. It turns out while KE > 0, PE is much more < 0, so their sum becomes negative, so $m_{extra} < 0$.
What I have a difficulty to see is how PE can be negative. I know, if you treat that when proton and electron are infinity far away and we treat PE=0 for that case, then the more they get closer, PE decreases and becomes negative, but for our case, this seems to me no proof at all. I could just treat $PE=infinity$ when proton/electron are far away from each other and that way, in hydrogen, PE wouldn't be negative.
I just can't get how this proof is solid and how negative energy can exist in a way that it can decrease the mass of the hydrogen atom.
"I could just treat PE=infinity when proton/electron are far away..." No, you can't, since infinity is not a number. However, you certainly are welcome to set PE equal to any fixed number you would like when the proton/electron "are far away." (But you have to stick with the same convention going forward.) For example you could treat PE=42 Joules when the proton and electron are far away. The potential energy of the bound state would then be some number less than 42. (Just like it is some number less than zero in the usual treatment.)
I don't think this video explains it well. It's not "because PE can be negative". What's negative is the change in energy ("negative" change just means that it was released when the atom formed - it's about process direction). It's just that you can set the zero level so that you don't have to deal with differences.
In fact, there's a sense in which pretty much all Potential Energy is negative (for an attractive force).
But it's better to avoid thinking of positive and negative energy, and focus on thinking of higher and lower energy states, regardless of sign.
If you have a ball at the lip of a bowl-like valley and push it in, it will roll to the bottom, trading its potential energy for kinetic energy. If there are no losses due to friction or collisions, it will simply roll up the opposite side of the bowl, come all the way back, and roll away from the valley back onto flat ground. If, however, while inside the bowl the ball knocks into another ball and starts it moving, it has lost some of its energy by transferring it to the other ball, and the original ball will no longer have enough energy to escape. It will continue oscillating back and forth in the bowl forever.
This is exactly what happens to the electron and proton in your example. If a free electron and free proton meet without shedding any of their kinetic energy, they will simply bounce apart again and continue on their way. But sometimes when they meet, they release a photon that carries away some energy. Now the electron is trapped in the bowl so to speak of the proton's potential energy well, and the pair becomes an atom. This process is known as Photorecombination. The energy the electron had to lose via the photon to become trapped (and the same energy it would have to gain to break free again) is somewhat unhelpfully known as the binding energy of the hydrogen atom.
Now, because $E=mc^2$, the free electron and free proton have a combined energy (mass), which is higher than the total energy (mass) of the hydrogen atom when they are bound, because some energy was carried away by the photon. This is why 1 atom of hydrogen has lower mass than the free electron plus the free proton. There is no "negative energy" or "binding energy" at play. It is simply a lack of energy in the system in that state, compared to what it would have in a different state.
Note: there is a very subtle but important point here that I've glossed over – how is the rest energy of the free electron or proton defined in the first place? This is discussed very well here.
You can't just set PE = infinity when the electron and proton are far away because infinity is not a number. You need to set some finite number instead.
Let's say you take PE = 1 million when the electron and proton are far away. This works, but then you need to recalibrate everything, because it implies that an electron with 1 J of kinetic energy will not escape the proton (it will, since the ionization energy of hydrogen is only 13.6 eV). What you'd need to do is compare your 1 J of kinetic energy against the potential energy recalibrated so that the energy at infinity is the energy that's left when the electron escapes. That would make your hydrogen ground state have an energy of (1 million J - 13.6eV) joules. Which would still indicate you lose potential energy when the electron travels towards the proton from infinitely far away. Which would mean that the mass will decrease.
Setting PE = 0 at infinitely far away is a convention. You could set it to some other number, but it won't change the physics.
This comes down to the idea that if electron escapes nucleus in hydrogen, its PE is 0(by convention), but even if it escaped, that doesn't mean they're far away from each other, so potential energy of nucleus still must be attracting electron. but you say that we have PE = 0 for electron when it's infinitely far away. How can infinitely far away and escaping mean the same thing ? escaping doesn't mean they are separated now by infinitey. The distance is pretty much not big.
@Zaza the PE asymptotically approaches zero. Increasing the distance to an infinite value will only increase the PE to zero (from a negative value). That is not the same as the PE approaching infinity (or negative infinity). No matter what the distance is, the PE remains finite. As far as "escaping," you can set an arbitrary distance when the electron PE is at $-0.1$ eV, $-0.001$ eV, $-0.00000001$ eV (as you like), and say at that distance the electron has "basically escaped"
What I have a difficulty to see is how PE can be negative.
Recall how Potential Energy (PE, here symbolized by $U$) is defined.
A potential energy $U$ is defined implicitly in terms of a force $\vec F$ via the equation:
$$
\vec F(\vec x) = -\vec\nabla U(\vec x)\;. \tag{1}
$$
Clearly, the potential energy is not defined uniquely via Eq. (1). Any other potential energy function that only differs from the first by a constant will result in the same physics.
For example, let:
$$
V(\vec x) = U(\vec x) + C\;,
$$
where $C$ is a constant (independent of spatial position).
Then we also have:
$$
\vec F = -\vec \nabla V\;.
$$
Therefore, you are free to add an arbitrary constant to your definition of potential energy in order to make the math easier.
Clearly, you can choose to make the potential energy at any given point positive, or negative, or zero, by definition. Thus, the sign of the potential energy is not necessarily of any significance.
Recall that the electrostatic force, due to a fixed particle of charge $Q$ at the origin, on a test particle of charge $q$ at position $\vec x$ is given by:
$$
\vec F = \frac{Qq\vec x}{4\pi\epsilon_0|\vec x|^3}\;.
$$
You are free to rewrite this force field $\vec F$ in terms of a potential energy field $U$, where
$$
U(\vec x) = \frac{Qq}{4\pi\epsilon_0|\vec x|}\;,
$$
which has the convenient property of approaching zero as $|\vec x|$ approaches infinity.
However, if you so please, you can also write the potential energy as:
$$
V = \frac{Qq}{4\pi\epsilon_0|\vec x|} + C\;,
$$
where $C$ is any constant value you would like it to be.
OK, so let's say we hate negative numbers for some reason and we want to try and make sure we never have to see any negative numbers by imposing a large (and physically irrelevant) C value.
Let's take $C=1000000$ so we can write:
$$
V = \frac{Qq}{4\pi\epsilon_0|\vec x|} + 1000000
$$
Classically, the total energy of a proton/electron system when the particles are very far apart and at rest is:
$$
E_1 = m_p c^2 + m_e c^2 + 1000000\;.
$$
Classically, the total of a proton/electron system when the particles are a distance $d$ apart and at rest is:
$$
E_2 = m_p c^2 + m_e c^2 + 1000000 - \frac{|e|^2}{4\pi\epsilon_0 d}\;.
$$
Even though the sign of each individual energy ($E_1$ and $E_2$) might be positive, the energy difference is still negative, regardless of the value of C:
$$
E_2 - E_1 = - \frac{|e|^2}{4\pi\epsilon_0 d}
$$
Electromagnetic potential energy is the energy of the electromagnetic field, that is, $\int \frac12 (\mathbf E^2 + \mathbf B^2)$. Just like any other form of energy, it has (or is) mass. When you weigh a system, you are weighing its electromagnetic field along with everything else. When you pull oppositely-charged particles apart, the total energy in the electromagnetic field increases, so the system weighs more.
That's the principle, anyway. Making it work consistently is an open problem. For example, it's difficult to understand the tiny mass of the electron. The electric field of an electron integrated from a macroscopic distance to about 3 fm (called the classical electron radius) is already enough to account for all of the electron's mass. But 3 fm is a huge distance by electron standards; if the field really cut off at that distance, it would be noticeable in scattering experiments. Despite those problems, the electromagnetic field has to have mass according to everything we think we know about physics, so this basic picture of weighing the field must be to some extent correct.
Think about what happens when a proton and electron combine in a hydrogen plasma. What you see is optical emission.
Now, use the unfashionable definition of mass: relativistic mass, as defined by the famous $E=mc^2$. Relativistic mass is a conserved quantity. The optical emission has energy, and thus carries mass away. Therefore, the bound atom has less mass than the sum of the masses of its constituents. The photons have carried away some of the mass in the electromagnetic field of the proton and electron.
|
STACK_EXCHANGE
|
This month, we’re proposing 2 new scoring criteria for and 1 policy change based on our experience reviewing apps and feedback from the Blockstack community. Please leave comments on the relevant github issues.
When an app’s source code is available, it becomes easier for others to audit its behavior. It also makes it so that users can run their own copy of the app, reducing their dependence on the app developer.
We propose that developers self-declare their open source status at app mining submission time, provide a link to the source code and indicating the type of license it is under. App developers can choose to make their app’s source code visible but continue to hold exclusive rights to its use. Alternatively, they can license the code under a license that grants use of the code.
We define “source available” as meaning that a generalist developer should be able to run their own copy of the app in a “reasonable” amount of time and code should not be obscured. Developers should make a good faith effort to meet this standard.
We reserve the right to spot check developer claims and propose a whistleblower system as the enforcement mechanism. That is to say, we will award points based on the developer’s claimed status with random spot checks and encourage community members or peers to reach out during the audit period if they think an app claims open source status for which is not qualified.
We propose the following scoring:
- No source code available/Commercial or unclear license: 0
- Source available - Non-OSI approved license or commercial license: 1
- Source available - OSI-approved license or public domain: 4
Since the community is subsidizing app’s development, we feel it makes sense to award apps that return the favor by contributing their code back to the community through an OSI-approved license.
A dry run of this new open source criteria will be conducted during the app review period that begins on December 1, 2019 (November 2019 cohort).
Can’t Be Evil Sandbox
We introduced the Can’t Be Evil Sandbox late last month at the 2019 Blockstack Summit in San Francisco. Two weeks ago, we shipped the developer preview of our New Internet Extension which implements v1 of the Can’t Be Evil Sandbox.
We propose the following scoring:
3rd party resources
- Uses 3rd party resources: 0 points
- Does not use 3rd party resources: 1 point
3rd party resources are defined as any requests to app origins that are not
self origin as defined by Content Security Policy (CSP) specifications. Requests that fall under the CSP policy
connect-src are allowed for all origins and explicitly exempt from this run under v1 of the Can’t Be Evil Sandbox.
Opts-in to Can’t Be Evil Sandbox
- No: 0 points
- Yes: 1 point
Apps opt-in to the latest version of the Can’t Be Evil Sandbox by setting the
can't-be-evil header to
true. Opting in means that the New Internet Extension and other user agents that support the Can’t Be Evil Sandbox will enforce the rules instead of merely reporting violations.
A dry run of this new criteria will be conducted during the app review period that begins on December 1, 2019 (November 2019 cohort).
Redirects away from app origin policy
We’ve seen a number of apps that submit one app origin to app mining and redirect users to one or more origins when the user tries to use the app. Blockstack app security is centered around an app origin - when a user authorizes access to Gaia storage for an app, they’ve giving access to a particular origin. Redirecting users to different app origins without their permission is dangerous and confuses them as to which app they’re actually using.
We attempted to manually determine how many apps in app mining exhibit this behavior in the past couple months. In October 2019’s app mining session, we conducted a dry run and counted at least 27 apps that redirected users to other app origins besides the one that was submitted to app mining.
We also found that even when a user is proactively looking out for an origin change in the address bar, it was very easy to miss. Put another way, even well-educated users have trouble keeping track of where they are on the web.
We propose that going forward, our review of apps will conclude when the app navigates away from the submitted app origin (excluding any navigation to the Blockstack Browser/authenticator). we recommend this as a policy change instead of a scoring item.
There are two reasons for this. 1) it moves us closer to clearly defining what an app is - an app is some code distributed from a name. Right now that name is the app origin, in the future it will be a Blockstack name. Different name, different app. 2) Trying to treat this as a scoring item make is complicated: Testing manually is very challenging because the tester needs to pay very close attention to what’s in the address bar. Testing it automatically is also challenging because an extension performing the test needs to maintain state across arbitrary app origins and somehow determine that arbitrary collection of apps origins is part of one app.
This policy will take effect in the app review period that begins on December 1, 2019 (November 2019 cohort).
|
OPCFW_CODE
|
Archive for the ‘ SQL 2012 ’ Category
I ran into this bug when I restarted the SQL Server 2012 Tabular service. I only saw 4 of my databases. The application error log was not very helpful A ton of “An error occurred when loading the Model.” messages. The other message I saw was “The database cryptographic key could not be loaded”
This seems to be an issue with SSAS not closing the file properly. What I did to fix was to look at the details of the error and see what files caused the error. I had a ton of these errors but I was able to track the problem back to the first MSOLAP type error where I saw error “An error occurred when loading the [Table Name]” where table name was a table in one of my cubes.
I stopped SSIS service and then deleted E:\MSAS11.DETAB12\OLAP\Data\cust_hist.0.db folder and E:\MSAS11.DETAB12\OLAP\Data\cust_hist.0.xml file. I restarted SSIS and then restored from backup.
I did find a link to a fix http://support.microsoft.com/kb/2724881 but it was already installed. Another post http://www.sqlservercentral.com/Forums/Topic880079-147-1.aspx#bm880318 said it was because of changes to the services account but that does not seem to be the problem as I never have and service account is a admin.
Wondering if anyone else has found a fix for this…
I’m still getting used to it the days of many dimensions with cubes is going to migrate into tabular. I will be the first to admit its hard to let my SQL, Oracle.. data warehouses go. But using tabular you can process data and handle bad data using DAX. It lets your users use data. Its all free with Excel 2010. Download PowerPivot and see what you can do with you old data marts, data warehouses, SQL Databases Oracle Databases, Access, etc. You can use them all together and let your power users create dynamic PowerPivot sheets on their own. Using SharePoint 2010 you can look at usage and import into tabular if it the data needs to be updated on schedule. Multidimensional still has a purpose but with Tabular you can put it into memory and have the fastest results. Any small data marts I do as Tabular and huge one use Multidimensional and then tabular.
Wow, let all business units make their own cubes using PowerPivot and SSAS Tabular many worlds hopefully same data? No Microsoft did a great job, Our star schema cubes are good. We can pull data using Normal Excel Pivot Table and MDX from the cube itself We can also allow users access to the Multidimensional cube, datamart or relational database.
What we get with Tabular is the ability to empower the users to use data models as soon as OLTP changes are made. Using PowerPivot users can now work in excel and use SSAS, Oracle, SQL Server, MS Access, MYSQL… to pull in data into PowerPivot. They can create facts and measures using relational data models in PowerPivot.
Using Sharepoint 2010 users can share their PowerPivot Excel spreadsheet to other users. I can then convert the PowerPivot Spreadsheet to SQL 2012 Tabular and partition the data so each datasource is processed daily, hourly..
The great thing is I can now partition dimensions and facts.
I will follow up with real time dimension and fact processing
|
OPCFW_CODE
|
KEMP Technologies has been doing some pretty cool stuff as of late. Not only are they starting to publish code on GitHub, but they also recently released a Python SDK for deploying and administering some of their products.
In this post I’ll just run through the steps in getting setup to use the SDK and provide some example code as part of my initial exploration of the SDK.
I’m also working on a post which will contain a piece of code to fully deploy a LoadMaster (or 10 LoadMasters for that matter) for use by Skype for Business — but one step at a time.
If you don’t need help setting up the environment (you already have a preferred IDE, you know how to use pip, etc), then skip down to the examples section.
Also I want to point out that you are interested in playing with this but do not have a load balancer to play with, KEMP does offer a free version. You can get the free virtual LoadMaster here. It works on pretty much every platform (I’m using Hyper-V).
There are plenty of different ways to setup an environment for working with Python. You basically just need Python and the SDK. I’ll be going through the steps to setup my specific environment which is Visual Studio Code with the Python extension (on Windows).
There are two different ways to setup this environment.
Normal Way (Manual)
First off, download and install Visual Studio Code and Python:
Once VS Code and Python are installed you need to add the Python extension to VS Code. Open Code and click on the extensions icon at the bottom of the left navbar, and search for Python. Click Install on the one that is just called “Python”.
Once that’s done the ‘Install’ button will have turned into a blue ‘Reload’ button. Click that to re-launch code to enable the extension.
Lastly, we’ll need to install the actual SDK. Open Powershell from an admin prompt and run the command
pip install python_kemptech_api
Now we want to tell the IDE that we will be writing Python (that’s what this SDK is written in). Do that by looking at the very bottom write of Code and clicking where it says “Plain Text”. Now type Python into the search bar and hit enter.
Hip Way (Using Powershell PackageManagement)
The other way is to just install everything with Powershell. This requires the PackageManagement PS module (included in Windows 10) and requires the PS Execution Policy allows running remote scripts.
Installing VS Code
Install-Package -Name VisualStudioCode -ForceBootstrap -Force
Install-Package -Name python -ForceBootstrap -Force
Now you probably have to close out of your Powershell session and re-launch it so that both ‘code’ and ‘pip’ are in your path.
Installing Python Extension
code --install-extension donjayamanne.python
Installing the KEMP SDK
pip install python_kemptech_api
Now open VS Code and change the language to Python by clicking “Plain Text” and the very bottom right of VS Code and then typing Python into the search field.
Using the SDK
I’ll just show a couple of my own examples and then link to my GitHub page where you can see the rest of the snippets I have. At this point I have only just started playing with this, but soon I’ll be putting up code to fully deploy a LoadMaster from scratch.
from python_kemptech_api import LoadMaster, VirtualService # Add the connection properties LoadMaster_IP = "10.0.3.72" # Your LoadMaster’s administrative IP LoadMaster_User = "bal" # Your LoadMaster’s Login User LoadMaster_Password = "myPassword" LoadMaster_Port = "443" # Build the LoadMaster object lm = LoadMaster(LoadMaster_IP, LoadMaster_User, LoadMaster_Password, LoadMaster_Port) # Create 2 virtual services service1 = lm.create_virtual_service("10.0.3.84", port=443, protocol="tcp") service2 = lm.create_virtual_service("10.0.3.85", port=443, protocol="tcp") service1.save() service2.save() # Attach 1 real server to each of them real_server1 = service1.create_real_server("10.0.3.86", port=443) real_server2 = service2.create_real_server("10.0.3.86", port=4443) real_server1.save() real_server2.save() # Show the newly created virtual services services = lm.get_virtual_services() for item in services: print(item)
This is pretty straightforward. It’s connecting to the LoadMaster, building 2 virtual services, and adding the same real server to both of the services. Lastly it shows all of the virtual services on the LoadMaster.
These virtual services are not quite configured properly yet, but in my next post on the SDK I’ll be adding the final touches. This includes applying a template to the virtual service, choosing a certificate, and modifying the health checks.
Running the Code
Before you run the code against the LoadMaster, you need to first enable the API. You do this by logging into the device and going to Certificates & Security => Remote Access
Then check the box for “Enable API Interface”
You can either run the code from within VS Code using the integrated terminal or you can simply open Powershell and run the script by executing:
And if you check the LoadMaster the changes will be reflected immediately.
I’ve uploaded some additional examples to my GitHub page. Some of the examples include:
- Uploading a new template
- Adding a local user account
- Listing all templates, virtual services, and interfaces
|
OPCFW_CODE
|
Group by month and year in MySQL
Given a table with a timestamp on each row, how would you format the query to fit into this specific json object format.
I am trying to organize a json object into years / months.
json to base the query off:
{
"2009":["August","July","September"],
"2010":["January", "February", "October"]
}
Here is the query I have so far -
SELECT
MONTHNAME(t.summaryDateTime) as month, YEAR(t.summaryDateTime) as year
FROM
trading_summary t
GROUP BY MONTH(t.summaryDateTime) DESC";
The query is breaking down because it is (predictably) lumping together the different years.
GROUP BY YEAR(t.summaryDateTime), MONTH(t.summaryDateTime);
is what you want.
+1: Another alternative using DATE_FORMAT: DATE_FORMAT(t.summaryDateTime, '%Y-%m')
Further on this, FROM_UNIXTIME is needed if you have a UNIX timestamp in your DB DATE_FORMAT( FROM_UNIXTIME(t.summaryDateTime), '%Y-%m' )
Thanks @OMGPonies , with your syntax I'm able to perform conditional GROUP BY.
Performance... my test of GROUP BY YEAR(date), MONTH(date) DESC; (~450 ms) and GROUP BY DATE_FORMAT(date,'%Y-%m') (~850 ms) on a InnoDB table with > 300,000 entries showed the former (the marked answer to this question) took ~half the time as the latter.
GROUP BY DATE_FORMAT(summaryDateTime,'%Y-%m')
Welcome to StackOverflow! This is an old and already answered question. We'll be happy if you could contribute to more pressing questions too. You can find tips on to provide good answers here.
See my comment on the answer above... On large-ish tables this method takes twice as long.
I prefer
SELECT
MONTHNAME(t.summaryDateTime) as month, YEAR(t.summaryDateTime) as year
FROM
trading_summary t
GROUP BY EXTRACT(YEAR_MONTH FROM t.summaryDateTime);
I know this is an old question, but the following should work if you don't need the month name at the DB level:
SELECT EXTRACT(YEAR_MONTH FROM summaryDateTime) summary_year_month
FROM trading_summary
GROUP BY summary_year_month;
See EXTRACT function docs
You will probably find this to be better performing.. and if you are building a JSON object in the application layer, you can do the formatting/ordering as you run through the results.
N.B. I wasn't aware you could add DESC to a GROUP BY clause in MySQL, perhaps you are missing an ORDER BY clause:
SELECT EXTRACT(YEAR_MONTH FROM summaryDateTime) summary_year_month
FROM trading_summary
GROUP BY summary_year_month
ORDER BY summary_year_month DESC;
please note that the output from EXTRACT(YEAR_MONTH FROM summaryDateTime) is 201901
@EdgarOrtega Yep, but that should be enough to generate the desired JSON object in the original question using application logic. Assuming that some form of looping over the query result is already happening, my approach aims to get the minimum required information from the DB as simply and quickly as possible
You're right, that should be enough. My comment was only to indicate what the output from that function is like, maybe someone does not like that format.
@EdgarOrtega No worries, thanks, a worthy contribution!
SELECT MONTHNAME(t.summaryDateTime) as month, YEAR(t.summaryDateTime) as year
FROM trading_summary t
GROUP BY YEAR(t.summaryDateTime) DESC, MONTH(t.summaryDateTime) DESC
Should use DESC for both YEAR and Month to get correct order.
You must do something like this
SELECT onDay, id,
sum(pxLow)/count(*),sum(pxLow),count(`*`),
CONCAT(YEAR(onDay),"-",MONTH(onDay)) as sdate
FROM ... where stockParent_id =16120 group by sdate order by onDay
This is how I do it:
GROUP BY EXTRACT(YEAR_MONTH FROM t.summaryDateTime);
use EXTRACT function like this
mysql> SELECT EXTRACT(YEAR FROM '2009-07-02');
-> 2009
You cal also do this
SELECT SUM(amnt) `value`,DATE_FORMAT(dtrg,'%m-%y') AS label FROM rentpay GROUP BY YEAR(dtrg) DESC, MONTH(dtrg) DESC LIMIT 12
to order by year and month. Lets say you want to order from this year and this month all the way back to 12 month
You are grouping by month only, you have to add YEAR() to the group by
SELECT YEAR(t.summaryDateTime) as yr, GROUP_CONCAT(MONTHNAME(t.summaryDateTime)) AS month
FROM trading_summary t GROUP BY yr
Still you would need to process it in external script to get exactly the structure you're looking for.
For example use PHP's explode to create an array from list of month names and then use json_encode()
As this data is being pulled for a trade summary by month, I had to do the same thing and would just like to add the code I use. The data I have is saved in the database as a string so your query may be simpler. Either way, as a trader, this is in essence what most people would be looking for:
select
DATE(DATE_FORMAT(STR_TO_DATE(closeDate, '%Y-%m-%d'), '%Y-%m-01')) AS month_beginning,
COUNT(*) AS trades,
TRUNCATE(sum(profitLoss)/count(*),2) as 'avgProfit',
TRUNCATE(sum(percentGain)/count(*),2) as 'avgPercent',
sum(profitLoss) as 'gi',
sum(profitLoss > 0)/count(*) AS winPercent,
sum(profitLoss < 0)/count(*) as 'lossPercent',
max(profitLoss) as bigWinner,
min(profitLoss) as bigLoser
from tradeHistory
group by month_beginning
order by month_beginning DESC
However, it will skip months that are missing in your data. So if there is no data for Jan, there won't be a row.
I would like add difference between group by year(datetime),month(datetime) and groupb by extract(year_month from datetime).
here is the query i tried with these two cases.
select year(datetimecol) as Year,monthname(datetimecol) as Month from table
group by year(datetimecol) and month(datetimecol) order by year(datetimecol) desc;
result:
Year, Month
2020, May
select year( datetimecol) as Year,monthname(datetimecol) as Month from table
GROUP BY EXTRACT(YEAR_MONTH FROM datetimecol) order by year(datetimecol) desc,month(datetimecol) asc;
result:
Year, Month
2021, January
2021, February
2021, March
2021, April
2021, May
2020, May
2020, June
2020, July
2020, August
2020, September
2020, October
2020, November
2020, December
(this is the result i need)
My observation
1.when i am using with group by year(datetimecol),month(datetimecol) , not giving desired result ..might be because of datetime feild...
2.when i tired second query with GROUP BY EXTRACT(YEAR_MONTH FROM datetimecol) order by year(datetimecol)...its working absolutely fine.
in conclusion, for getting months by year wise use the following query.
select year( datetimecol) as Year,monthname(datetimecol) as Month from table
GROUP BY EXTRACT(YEAR_MONTH FROM datetimecol) order by year(datetimecol) desc,month(datetimecol) asc;
If you have a new question, please ask it by clicking the Ask Question button. Include a link to this question if it helps provide context. - From Review
i tried this context and given my observation.i didnt give without tested.
Use
GROUP BY year, month DESC";
Instead of
GROUP BY MONTH(t.summaryDateTime) DESC";
although it appears that GROUP BY MONTH(t.summaryDateTime) DESC doesn't order the months properly for some reason...
|
STACK_EXCHANGE
|
Simple Steps Enable BITS on DP Enable Checkbox for
- In the Configuration Manager console, navigate to System Center Configuration Manager / Site Database / Site Management / <site code> – <site name> / Site Settings / Site Systems, and then click the name of the server.
- To open the ConfigMgr Distribution Point Properties, in the results pane, right-click ConfigMgr distribution point, and then click Properties. On the General tab ensure Allow clients to transfer content from this distribution point using BITS, HTTP, and HTTPS (required for device clients and Internet-based clients). has been selected. Configuration Manager 2007 client computers that connect to a branch distribution point will run virtual application packages using SMB. Notes :–Important You must select Allow clients to transfer content from this distribution point using BITS, HTTP, and HTTPS (required for device clients and internet based clients). or Enable virtual application streaming option on the Virtual Applications tab will not be available.
- On the Virtual Applications tab, select Enable virtual application streaming.
- To close the distribution point properties, click OK.
Create package “” Install the Virtual Application Virtualization Desktop Client software
- In the Configuration Manager console, navigate to System Center Configuration Manager / Site Database / Computer Management / Software Distribution.
- If necessary, expand the Software Distribution node and select Packages. To open the Create Package from Definition Wizard, right-click Packages, and then click New / Package From Definition.
- On the welcome page, click Next.
- On the Package Definition page, to specify the publisher and definition for the new package, click Browse. Locate and select the AppVirtMgmtClient.sms file. The default location for the AppVirtMgmtClient.sms file is <ConfigMgrInstallationPath> Tools VirtualApp AppVirtMgmtClient.sms. The Name, Version, and Language associated with the specified .sms file are displayed in the Package definition pane. Click Next.
- On the Source Files page, select Always obtain files from a source directory to help ensure the latest version of the client software will be available, and then click Next.
- On the Source Directory page, specify the directory that contains the source files for the package. This is the directory that contains the Microsoft Application Virtualization Desktop Client or the Microsoft Application Virtualization for Terminal Services installation file depending on the version of the client you are planning to install. Specify the source location by providing the UNC path. Alternatively, click Browse to specify the location that contains the setup files for the type of client you want to install. Click Next.
- On the Summary page, review the Details for the package definition file. To create the package definition file and close the wizard, click Finish. To access the new package select the Packages node and the package will be available in the results pane.
- If you installed the Microsoft Application Virtualization for Terminal Services client, after the package has been created, you should select the Packages node, right-click the package in the in the Results pane and select Properties. On the General tab, update the Name of the package so that it reflects that it is the terminal services version of the client.
Enable Client Agent Settings:-
- In the Configuration Manager console, navigate to System Center Configuration Manager / Site Database / Site Management / <site code> – <site name> / Site Settings / Client Agents.
- Right-click Advertised Programs Client Agent, and then select Properties.
- On the General tab, to enable the client agent for running virtual applications, click Allow virtual application package advertisement.
- Click OK to exit the properties dialog box.
To work with clients you need below two software’s requirements The Microsoft Application Virtualization Desktop Client requires the following prerequisites be installed on the Configuration Manager 2007 client computer:
- Microsoft Application Error Reporting – The install program for this software is included in the Support folder in the self-extracting archive file.
- Microsoft Visual C++ 2005 SP1 Redistributable Package (x86) – For more information about installing Microsoft Visual C++ 2005 SP1 Redistributable Package (x86) https://go.microsoft.com/fwlink/?LinkId=116683
Microsoft Visual C++ 2005 SP1 Redistributable Package (x86) Silent install This option will suppress all UI during installation.
Vcredist_x86.exe /q:a /c:”msiexec /i vcredist.msi /qn /l*v %temp%vcredist_x86.log”
You need to import the (.xml) virtual package then you can create package and advertisement
|
OPCFW_CODE
|
[Feature] Auto detection of flatpak discord
Before Requesting
[X] I found no existing issue matching my feature request
Describe the feature you'd like!
Could you auto detect the flatpak version of Discord, just like the normal version? It installs fine when manually selecting the directory ~/.var/app/com.discordapp.Discord/config/discord/0.0.x
Anything else?
Flatpak is not supported.
But it worked when I installed it, so....
Just because its not supported, doesn't mean it wont work. There's always a workaround.
From my understanding, only standard local installs are officially supported. This means that the installer expects the discord package to be located in a specific path, one that if I understand correctly is the same across all flavors of Linux, Assuming you install the official discord package with the official distro-specific package manager.
From my understanding, the Flatpak version is a repack of the discord app, designed to run in a sandbox. Thus, it is not the standard package that the installer expects.
I'm not saying I don't appreciate the added privacy and security this boasts. But I honestly feel like if you're really that paranoid about privacy, you shouldn't really use discord in the first place, as it still stores the data from your activity on the actual platform. Besides, application tracking and such can already be mitigated with the standard install, and the DoNotTrack plugin. Rendering the point of using betterdiscord on flatpak to basically none.
Just using the flatpak to have updates without having to reinstall the tar.gz. file. Anyway, I can close the issue if you want. Thanks for the information.
For flatpak and snap versions, it is recommended to use the betterdiscordctl tool rather than the official installer, as it is created more to accommodate those.
Sorry, I'll open it there
I'm not saying I don't appreciate the added privacy and security this boasts. But I honestly feel like if you're really that paranoid about privacy, you shouldn't really use discord in the first place, as it still stores the data from your activity on the actual platform.
People don't just use Flatpak for privacy. They are also very easy to install, require no password to update, and are widely accepted as the new norm. Completely ignoring Flatpak is pretty silly in the long term, and will really just result in people using the old and perhaps eventually outdated CLI betterdiscordctl project.
I am baffled as to why Flatpak is still not supported "officially".
I'm not saying I don't appreciate the added privacy and security this boasts. But I honestly feel like if you're really that paranoid about privacy, you shouldn't really use discord in the first place, as it still stores the data from your activity on the actual platform.
People don't just use Flatpak for privacy. They are also very easy to install, require no password to update, and are widely accepted as the new norm. Completely ignoring Flatpak is pretty silly in the long term, and will really just result in people using the old and perhaps eventually outdated CLI betterdiscordctl project. I am baffled as to why Flatpak is still not supported "officially".
I do not wanna turn this thread into a discussion about how to use linux. So I will say this, then leave it alone. But I disagree with this whole unification thing personally, as Linux is supposed to be flexible and extremely customizable. I do not like the idea of unifying the experience across the board as it removes space for flexibility and customizability. Rather, it promotes hardening. I fear that Linux will lose its ability to be truly unique per user. Which is why I stand against flatpak.
|
GITHUB_ARCHIVE
|
Virus detected warning when downloading javy-x86_64-windows-v4.0.0.gz
Virus detected warning when downloading javy-x86_64-windows-v4.0.0.gz
Issue description
When I attempted to download the file javy-x86_64-windows-v4.0.0.gz (version 4.0.0) on Windows, my antivirus software flagged it with the message "Virus detected" and blocked the download. This could discourage users from using the tool.
Steps to reproduce
Use a Windows machine with antivirus enabled (e.g., Windows Defender).
Download the file javy-x86_64-windows-v4.0.0.gz.
Expected behavior
The file should download without being flagged as a potential virus.
Actual behavior
The antivirus software blocks the download and displays a "Virus detected" warning.
Screenshot
Environment
OS: Windows 10 Pro
OS Version: 10.0.19041 Build 19041
Architecture: 64-bit
Antivirus: Windows Defender
AMProductVersion: 4.18.24090.11
AntispywareEnabled: True
RealTimeProtectionEnabled: True
Please investigate whether the file is safe or if there's an issue with how it's packaged.
Additional Note
I did not encounter this issue when downloading the file javy-x86_64-windows-v3.2.0.gz, it downloaded without being flagged by antivirus.
Thanks for reaching out! I just gave this a try on my Windows 10 computer running the same version of Windows Defender (4.18.24090.11) with anti-spyware and real time protection enabled and using security intelligence version 1.421.639.0 and was not able to reproduce the problem. The download of the javy-x86_64-windows-v4.0.0.gz succeeded. The executable file attached to the release has the exact same md5sum as the executable artifact created by the build assets GitHub Actions workflow job that ran for the 4.0.0 release so it's safe. I'm not sure why it would be flagged as a virus.
Do you mind sharing which browser including version you used to attempt the download?
Do you have any additional anti-virus software running?
In terms of how the release artifact is generated, this GitHub action workflow run backed by this source code created the release artifacts. You can click on summary and see check the artifacts created as part of that job if you're curious.
FWIW, I ran the gz file through virustotal.com and it says:
No security vendors flagged this file as malicious
The sha-256 hash for the file matches up with the sha-256 hash attached to the release for that binary as well.
Do you mind sharing which browser including version you used to attempt the download?
Chrome Version 131.0.6778.108 (Official Build) (64-bit)
But I checked also with Edge and the issue is the same : Windows Defender detect a virus.
Do you have any additional anti-virus software running?
No.
Here is what I have in my protection history.
Are you able to reproduce this issue on other Windows 10 machines or Windows 10 virtual machines running Windows Defender?
Are you able to reproduce this issue on other Windows 10 machines or Windows 10 virtual machines running Windows Defender?
Windows 10 machines (no virtualisation used)
I have seen many other false positives with Windows Defender. It may be that the slightly modified version 4.0.1 is no longer detected as a virus. I think the best thing you can do is just wait and see ;)
|
GITHUB_ARCHIVE
|
This is my first post, and it may seem dumb but it's stumping me. I have just recently completed my first build.
I have a:
MSI K8N Neo Platinum
80GB/8mb WD SATA HDD
512mb Mushkin PC3200 RAM
AOpen CDRW/DVD combo drive
one extra ethernet card
I tried to make a good mix of power, performance, and price. I chose a cheaper case, made by DynaPower that came with a 430w powersupply (thinking that would have been enough). I connected everything and started it up. After loading up WinXP Pro etc. I have a few problems. First, I would get a message saying that my system was automatically turning the settings down on my video card to protect my system because it was not getting enough power. Along with this my Floppy drive wasn't working. I made sure it was enabled in Bios and all cables were connected and seated properly. But the light on the drive wouldn't come (signaling it was detected)on when you booted up. No spinning, no nothing. Just to make sure it was getting power, I flipped the floppy cable and the green LED went on, and wouldn't come off (showing that the floppy cable was connected incorrectly). I flipped the cable back, uninstalled the floppy drive and controller, rebooted and checked device manager...everything came up copasetic...but the drive still doesn't work (the only reason I have a floppy is because my wife loves them.....don't get me started on that).
Next, the CD/DVD player will play cd's, write CD's, and play older video games, but when I throw in FarCry to load it up, or a DVD it won't recognize it and my system will lock up so that I have to go in through the task manager kill it.
The final issue is that I am also getting a BSD that says PFN_LIST_CORRUPT. After looking that up, it said that Windows is trying to write something to an invalid memory section...what's that about?
NOW, I think that it's all a power issue and I am going to buy an Antec true power 480.
Before I do that, does anyone have any suggestions or somethign that could help me out?
Firs thing to do is to check your voltages in bios. Make sure all the rails are stable, and within 10% of rated. Then make sure memory and cpu etc are getting enough voltage.
Try relazing the timings on your memory, make sure you have the DVD drivers loaded, and try a different floppy cable.
|
OPCFW_CODE
|
For a bit of fun recently, I’ve been working on a library that provides access to an RFB via SDL surfaces. Originally, I started writing this so I could manipulate output from QEMU guest machines (as QEMU has a VNC server built-in), but I realised that the code was getting close to something that might be useful to other people, so I’ve spent a little time cleaning it up in the hopes someone else finds it useful!
As a library, this project provides functionality for initialising a VNC connection, and creating a thread that continually requests framebuffer updates from the server. Users can access a surface that’s continually updated by the server-watching thread to be an up-to-date representation of the RFB. An
SDL_Event entry is registered for when the server closes the connection. Users can also send pointer and key events to the VNC server using straightforward functions.
I’ve also included a simple VNC client that acts as an example usage of the library.
I’ve tried to mirror the core SDL projects (SDL, SDL_ttf etc.) in how the library looks and functions from the perspective of a user. If you look at the commit log, you’ll see I renamed a lot of identifiers to match the naming style of those projects. I also plan on adding doxygen documentation for the user-facing functions. My library is licensed under the zlib licence, same as SDL2, for ease of use.
Before writing this, I was getting by by adapting a different SDL-VNC library I came across, that had been written for SDL1. I ended up changing the code enough that I figured a re-write would be useful, but thanks to A. Schiffler for the prior art! Unfortunately, the email address mentioned in that library’s source code is hosted at a domain that seems to have expired; if anyone knows how I could contact the creator, I’d appreciate it.
The working title for the library has been SDL2_vnc, though I understand that’s a generic enough name that it might conflict with someone else’s project; I’d be happy to change the name in this case.
SDL2_vnc can be found at the link at the top of this topic. Code review/suggestions are more than welcome, as are issues and contributions. If issues/contributions are difficult to do on that platform (I barely use it usually), I’d be happy to move the project to a different home.
I have a lot of ideas on how to improve/add to this project (for example adding any testing at all), but I’d love to hear suggestions from any interested parties! In particular, I’d really appreciate comments on my use of SDL in this project: I’ve done some small projects using SDL in the past but I’m sure there are still SDL features/quirks/best-practices I’m sleeping on. I would also really appreciate suggestions on VNC servers to test with: the QEMU implementation only uses a portion of the standard, and in particular the VMs I have been using (mostly SerenityOS) don’t support hardware pointers in QEMU so I get laggy pointers.
So yeah, I’d be happy to answer any questions anyone has, and hope that someone can find a use for this lil library of mine (:
|
OPCFW_CODE
|
Project Unify Team
Project Unify: Blueprints and proofs of concept for multi-domain interoperability
Project Unify is a cross-industry and cross-sector advanced development project of Stewards of Change Institute and its National Interoperability Collaborative (NIC). The project is exploring the use of existing and draft standards to implement secure information sharing and interoperability across healthcare, behavioral health and human services, as well as other domains (education, child welfare, etc.) that encompass the social determinants of health and well-being for individuals, families and communities.
Unify’s goals are to:
- identify the challenges of implementing interoperability among healthcare, behavioral health and human services, as well as other domains (education, child welfare, etc.);
- develop technical implementation agreements to overcome these challenges, based on existing and developing industry standards, and;
- define the necessary governance policies between domains.
The Unify team is developing “blueprints” (or “implementation agreements”) based on open data, open standards and open-source technology. To prove these best-practices concepts, we deliver demonstrations of how to meet the interoperability needs of Unify’s user stories. We also plan to incorporate other NIEM domains, such as courts, into our proof of concept.
Ultimately, the ability to securely exchange data relating to health, human services, education, child welfare, etc. will facilitate a more-comprehensive and complete view of individuals. This holistic view will, in turn, facilitate care coordination and other collaboration by health and human services programs to more-effectively address those individuals’ needs.
Project Unify seeks to demonstrate that health, human services, education, and justice information systems can:
- discover common patients/clients/consumers;
- map and model data based on domain-specific data model and element definition standards (e.g., FHIR, NIEM, CEDS/EdFi/PESC, etc.);
- exchange data among systems using domain-specific source or target standard protocols;
- define content based on domain-specific standard vocabularies (e.g., LOINC, ICD-10, etc.);
- implement exchanges based on privacy rules associated with HIPAA/FERPA/etc.; and
- protect those exchanges with security best practices (e.g., SMART, OpenID, OAuth2, etc.).
Participants in Project Unify
The project is being conducted by NIC’s Let’s Get Technical (LGT) group, the MITA Technical Architecture Committee (MITA-TAC), and additional partners who are contributing in a variety of ways to further this potentially transformative work.
Project Unify is a collaboration by a diverse team of experienced and knowledgeable technologists and subject matter experts representing information domains including (but not limited to) healthcare, behavioral health, child welfare, adult social services, housing, education, nutrition, public health and public safety.
How Do I Join?
We welcome additional technical and subject-matter experts, developers, documenters and testers, along with anyone who just wants to follow along or provide input and gain a better understanding of how to provide interoperable, cloud-based solutions relevant to Project Unify. Please contact Daniel Stein if you have any questions or want to volunteer for this effort.
|
OPCFW_CODE
|
Dries Van Noten Pre-Owned Designer Shoes
Dark grey crackle leather ankle boots with leather lining and soles. Inside leg silver zip. Patchwork panel in metallic silver lea...ther, beige, black and burgundy snake design. Almond toe and block leather covered heel. read more
the coolest dries in the finest thin and butter smooth leather
softest beautiful leather, running a little narrow on the ankle
Stivaletti Dries Van Noten bluette. No scatola no dustbag
Wunderschöne Enkelboots von Dries Van Noten. Farbverlaufe im Leder sind gewünscht. Absätzen haben kleine Beschädigungen.
Beautiful camel color boots with water snake heels. rnGood condition.
Ankle booties with python heel from Dries Van Noten in cognac. Boots have an inside silver zip. Genuine python high heel. Pointed ...toe. read more
Ankle boots in brown and black leather with gold metal buckle at the top of the shaft, and green python details on the heels. Zipp...er fastening on the inner sides. Some marks and scratches on the leather. No box or dustbag. Insole: 24 cm, leg height: 10.5 cm read more
A collector model by Dries, only worn a few times, same as all the items I sell, like new or near new and in excellent condition, ...the colour is black or a very dark anthracite grey but the items' chic comes from the transparent and concealed heels, they make you look as if you are walking on tiptoes, they will always get you noticed! read more
Botines en piel de Dries Van Noten. Con tacón y detalle Charol delante.
Stivaletto in vernice con tacco con pietre cangianti molto belli marca deies van noten tg 38
New Dries Van Noten boots 37,5 size, I bought no Farfeth.com
This ankle boot combined solid and metallic-stamped felt with a mirrored-leather heel for a tone-on-tone texture play that elegant...ly balanced the old and the new. From the fall 2014 collection read more
Beautiful leather black ankle boots Dries Van Noten. Only worn once, unfortunately they're too big for me. Mint condition, I refer... to the pictures. Heel = 11,7 cm; Platform = 1,5 cm read more
Dries Van Noten python printed ankle boots in green with a brown wooden heel. The leather is a great quality of snakeskin print in... a vibrant shade of green. They fit according to size EU38 and are very comfortable to walk in. Worn but well looked after. Will arrive in the original shoe box. read more
Dries Van Noten shoes, with a Velcro strip to fasten on the edges. In purple patent leather, with white topstitched details. Size ...38.5, suitable for a size 38. Rubber soles with "mini studs". In good condition. Sold with the box and dust bag. read more
Mid brown leather ankle boots with shiny "wood" heels. - Brown zip on inside boot - Heel height 11cm - Top of boot height up back ...of leg approx 12cm - Worn about 5 times, left heel has two scratches, right heel has tiny wear marks. Front toes have a small amount of wear - Black leather lining - With box and dustbag - Fits as a size 39 read more
low boots by Dries Van Noten, size 36 - patent leather body with leather lining, leather front, leather sole with original elastom...ere platform, 10cm heels - shoes in perfect condition, worn just once - Dries Van Noten: contemporary classic read more
usate pochissimo 2-3 volte, tacco particolare 8,5 cm, suola interna circa 22 cm, calza 35 stretto, pelle+pitone, presenta segni im...percettibili (no difetti ) read more
beautiful two tone platform (olive green suede with metallic silver) . rare to find!
"If there is a shoe that we want to get our feet lost in for fall 2015, it is Dries Van Noten's dreamy Purple Rain-style velvet bo...ots. Deep royal violet with a sumptuous coat, this pair walks a perfect line between rock star and romantic." Vogue Size 36,5Worn about 3 times. read more
DRIES VAN NOTEN flats Real leather size 38,5 insole 25 cm small abrasion on the tip of visible up close
Ravishing ballet flats with embroidered pattern.
Boots, worn once, like new. Very beautiful piece. I'm selling them because they're too small. The leather is gorgeous and the loaf...er stitching gives them great style. read more
Bottes en cuir noir. Le cuir brille un peu. elles arrivent vraiment sous le genou, très confortables, Talon n'est pas très haut 8 ...sm et demi stable read more
Black Leather Heeled Boots 3" Heel Size 37.5 Some wear on leather and heel
The Dries Van Noten boots have been crafted in full-grain leather which lightly creases on the ankle. This flattering knee-high pa...ir features an egg toe and rests on a 90 mm bootmaker heel. Seventies when worn with a flimsy midi dress. read more
Dries Van Noten black leather boots, perfect condition, very comfortable! With sports style details.
Beautiful dries van noten boots in size 37, never been worn! Comes with box and dustbag
Dries van Noten kniehoher Stiefel, schwarz, mit ReiBverschluB, schmaler Schaft, Plexiglasabsatz transparent orange, kaum getragen,... wie neu, Passform eher 38 read more
DRIES VAN NOTEN#DOESN'T NEED A DESCRIPTION
stivali in cuoio bi-color. Comodi con tacco h cm 6. Box compreso.
Dries Van Noten black leather Chelsea boots with red python platform sole. Almond toe. Leather pull tab. Elasticated side panels. ...Cushioned leather insole. The leather is as new with no signs of wear and in very good condition. Only small mark on the sole of the shoe, see pictures read more
Très jolies bottes Dries van Noten en cuir violet, souple et très confortable. Longue tige ajustée au mollet (34 cm de tour), avec... empiècement élastique sur la partie supérieure. Très bon état, portées à peine quelques fois. Livrées avec dust bag, dans la boîte d'origine. read more
Dries Van Noten heeled moccasins - size 40 - never worn - Patent leather upper, leather lining, leather front, leather sole with o...riginal rubber pad - 11cm heels - Pure shoes from Dries Van Noten, in a style that allows a lot of daring and yet is quite wearable as a contemporary classic. The achievement is amazing, made in Italy. 604063 read more
Dries Van Noten ballerina shoes. Guaranteed 100% AUTHENTIC! These shoes are constructed of black and white braided leather. Specif...ics: Size: 36 Color: Black and white Style: ballerina Made In: Italy Fabric Content: Leather Upper; Insole; Outsole Measurements (all are approximate): Insole Length: 9.5" Insole Width: 3" Outsole Length: 9.125" Outsole Width: 2,8" Condition: Pre-Owned. This item is in perfect condition. Some wear throughout outsoles. Light scratching and wear on insoles. read more
New unworn Dries van Noten trainers dark brown calfskin with golden colour piping With Orginal box. Purchased at Jeffrey's in NYC
Burgundy platform shoes with rubber soles.
Patent black Comes with bag Hardly worn
As new, worn only once. In excellent condition. Heel in Python skin. Size 38,5 EU. Insole length 25cm.
Dries Van Noted shoes, very rarely worn, in very good condition.
Black and white checked fabric iconic fashion show. Size 39 it. Fits 38.5. Perfect condition.
In Excellent Condition Red leather heels Size 38 Fits true-to-size
Floral printed cloth shoes with leather heels and details
|
OPCFW_CODE
|
I just saw the latest trailer for the new Mirror’s Edge and it prompted me to try a few a few ideas I had been kicking around in the back of my head related to glass rendering. This also revealed a few areas where things seem to be deficient with Unreal’s current set of rendering options. I’d recommend briefly watching this part of the trailer (starts at 4:35) for reference.
1. HDR Translucent Reflections
As far as I can tell from all the settings I’ve toyed with, translucent surfaces treat all reflections (even SSR) as LDR, which makes getting realistic glass materials basically impossible. I’m not even looking at the accuracy of the placement of the reflected image - simply the intensity of the reflections and how those should also influence the opacity of the glass surface.
The way it should ideally work is to take the HDR reflection captures and use the luminance of those captures as part of the opacity function. This is currently only possible if you have direct control over the reflection in the material editor (as was the case with UE3/UDK where all reflections were baked into a given material) or if you manually set up a scene capture actor in UE4.
Below is an example from real life illustrating what I’m referring to along with a test I made in UE4 to recreate the effect. Note in the photo how the luminance of the reflection causes the bright parts to remain opaque (100% obscuring the objects behind the glass), while the darker parts of the reflection become transparent.
Here’s the same effect in UE4; however, this was a manually created effect with a special (and expensive) 2D capture actor that was made just for this example. This kind of setup would be impractical in any real production and is only an example of what a real HDR reflection that drives opacity based on luminance could look like. In this example, I basically have the RenderTarget’s color going into an unlit translucent material’s color while its alpha goes into the opacity. I have a few multiply and power nodes affecting the alpha opacity, and I’m not handling fresnel reflections in this example, but it illustrates the reflected luminance opacity adequately.
2. Ability to tag only specific objects to be rendered by Scene Capture Actors (either 2D or Cube)
In the trailer linked above, after the character crashes through the glass, take a look at the white reflective wall to the right. They’re using an interesting trick of compositing accurate 2D Scene Captures (or their equivalent in that engine) of the characters on top of more generalized sphere or box cubemaps.
A few seconds later at the 4:46 mark, they do the same thing with the large glass wall in front of the character (she runs toward it and kicks the goon through it) with the two characters having an accurate 2D capture composited on top of generalized cubemap reflections.
Obviously in both cases, the surfaces (the flat white wall and large glass panel) were given special attention with the 2D capture actors being manually set up. This isn’t an automated effect by any means, but it definitely seems like an efficient one by only having the reflection captures rendering small and important objects accurately while the rest of the reflections can be generalized.
Right now this effect is sort of achievable in UE4, but it would be nice to have a little more specific control. At this point, it’s possible to tell a reflection capture actor what classes of objects to capture (skeletal meshes for instance), but it would far nicer to also have the ability to simply flag specific assets, regardless of class, to be captured or ignored. For example, I may only care about having the player character and a few important meshes captured accurately in a scene, but that small batch of assets may include both skeletal meshes and static meshes, which would then force the capture to render far more objects in an inefficient way.
Also, even if I only need to render skeletal meshes (as an example), the Render Target texture doesn’t currently output a nice alpha to use for compositing. I managed to make one by basically blowing out the levels (multiplying by something absurd like 200) and then clamping the captured image, but that resulted in a 1 pixel halo around the captured elements. This part isn’t all that significant but something worth noting.
3. Return of SceneCaptureReflectActor
I hadn’t really had a need to check for it until now, but I was completely baffled by the fact that the UE3/UDK SceneCaptureReflectActor has completely disappeared. Screen space reflections are great for a lot of things, but they simply aren’t suitable for all scenarios (for instance, actual mirrors that first or third person characters can get close to - especially in small areas where the rendering overhead isn’t an issue).
I’ve done some poking around and found this video by R Villani (who kindly uploaded his scene as well), but these Blueprint methods for matching a SceneCapture2D’s rotation/translation to the player’s own camera only work if there aren’t any objects behind the mirror that would get picked up and rendered by the SceneCapture2D camera. If we had the ability to selectively flag specific objects to be ignored (point #2 above), then this method could possibly work, but otherwise any scene or area that needed a reflected mirror would need an empty void behind the mirror plane.
I guess my question is, are we going to see the old SceneCaptureReflectActor return to UE4 at some point? It seems far too useful a tool for creating certain effects to simply be abandoned.
|
OPCFW_CODE
|
Is intention the result of rational or logical thinking or some other?
According to one of many sources, the definition of intention is described as Right intention is the intention and resolve to give up the causes of suffering, to give up ill-will and to adopt harmlessness. It contrasts with wrong intention, which involves craving for worldly things (wealth, sex, power) and the wish to harm.
What is unclear is whether intention is an act or thought that precedes or follows rational thought?
My understanding is that intention plays a critical role in one karma. If my thought is the result of anger however i do not have intent to harm is that not the result of rational thought?
If i do act irrationally, to save or protect a life, is that not intention without thought?
EDIT
To add to ChrisW's question on clarification, when i say that if my thought is of anger e.g. how this person would disappear or no longer be around, but this is not followed by action because rationally i do not wish to cause harm.
The second example of irrational thought is that when caught in the spur of the moment, i may say or do things however this is to protect a child.
I hope I understood the question in the second paragraph, i.e. "precedes or follows?"; but I don't think I understood the last two paragraphs. I don't know, maybe adding a for-example would help to illustrate the last two paragraphs.
@ChrisW - Apologies. I have now added examples. Hopefully that provides further context. Let me know if it still unclear and i will endeavor to update it further.
Lets take 2 examples where the action are the same:
A surgeon takes a knife and cuts through a person's stomach to save his life
A robber takes a knife and cuts through a person's stomach kill him and take his belongings
In both cases the action is the same but what differs is the intentions. Hence it is the intention that counts.
Now lets take another example.
A baby squirrel falls into the sea and is washed away. The mother squirrel dips her tail into the sea and runs to the shaking off water with the intention of drying water of the whole sea.
The intention is good but irrational. Regardless of rationality the Karmic effects would nevertheless good as the intention is good.
Sirinath Salpitiko - Thanks. What about thoughts? Thoughts come and go and from what i have gathered so far is that one has no control over them. So what happens when you have a thought that is unpleasant or brings to surface bad memories?
I have covered thoughts in many of my other answers. Perhaps you can scan through them.
Dependant origination(DO): depending on contact feeling arise, depending on feeling craving arise, ..craving..clinging, ..clinging..becoming, ..becoming.. there is birth. etc..
From contact until the point of birth there is no kamma (to be precise it's becoming).
Birth of what?
Birth of another thought, of a person committing an action good or bad.
Feeling of anger leads to birth of angry thoughts. If the thought is hold onto and contact (bringing to mind of that thought), then feelings arise again( it may again be anger or sympathy depending on the thought) and the whole chain of DO repeats.
Through mindfulness of feeling we let it ceased before it goes to craving(or aversion).
When there is craving we let it go before it becomes clinging.
When there is clinging we let it go before it progresses to becoming.
When becoming, we let it go before the birth of another thought or action.
Intention comes after becoming and when mindfulness is strong there is some space to reflect before the birth of acting. That's how I understand DO to operate.
@Samadhi - Isn't it the thoughts that lead to the emotions and not the other way around?
It always starts with contact followed by feelings, but most don't see it because it went that fast and they are aware only of the thoughts, which then generate further feelings. Suppose there is someone you hate/love when you see him/her which comes first thoughts or feelings. There are times when we like or hate a particular viewpoint then it is mental object (thought) which is the object, the mind contacts. Then feeling arise from contacting that viewpoint..
Enlightened beings hold no view as views are a conditioning from the past which will lead to discord.
When you think rationally, you may not have the intention to harm others. But in a moment of anger when you are not thinking rationally, you might suddenly have the intention followed by the action to cause harm.
Harm caused by anger to one who is angry is described in the Kodhana Sutta, that I quote here:
An angry person is ugly & sleeps poorly.
Gaining a profit, he turns it into a loss,
having done damage with word & deed.
A person overwhelmed with anger
destroys his wealth.
Maddened with anger,
he destroys his status.
Relatives, friends, & colleagues avoid him.
Anger brings loss.
Anger inflames the mind.
He doesn't realize
that his danger is born from within.
An angry person doesn't know his own benefit.
An angry person doesn't see the Dhamma.
A man conquered by anger is in a mass of darkness.
He takes pleasure in bad deeds as if they were good,
but later, when his anger is gone,
he suffers as if burned with fire.
He is spoiled, blotted out,
like fire enveloped in smoke.
When anger spreads,
when a man becomes angry,
he has no shame, no fear of evil,
is not respectful in speech.
For a person overcome with anger,
nothing gives light.
|
STACK_EXCHANGE
|
[MI 4.2.0-Alpha] CApps are not listing in MI Dashboard when trying to deploy faulty CAPP
Description
Hi Team,
If there is any faulty CApp deployed in the MI 4.2.0-Alpha version, then MI Dashboard does not list any Carbon Application. Even the correct CAPPs are not getting listed and displaying the loading page as below.
Also, the network tab, it is showing a 500 - internal server error, for the below request.
Request URL:
https://localhost:9743/dashboard/api/groups/mi_dev/capps?nodes=dev_node_2&searchKey=&lowerLimit=0&upperLimit=5&order=asc&orderBy=name&isUpdate=false
Response:
{
"message": "Internal server error"
}
It is not showing any errors (except the faulty CAPP) in the MI server side or MI dashboard logs.
However when trying to Get Carbon Applications with the Management APIs[1] it is listing all the CAPPs(faulty CAPP and correct CApps).
Request:
curl -X GET "https://localhost:9164/management/applications" -H "accept: application/json" -H "Authorization: Bearer eyJraWQiOiJmMWUzY2VhYS0wYjIzLTRkYzYtYTc0MS03NDhlMTFjMjYzY2QiLCJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJodHRwczpcL1wvMTI3LjAuMC4xOjkxNjRcLyIsInN1YiI6ImFkbWluIiwiZXhwIjoxNjc1MTU1NDMxLCJzY29wZSI6ImFkbWluIn0.BXXT2R0Twj9EWcSOvZE95YtmN9sblrOGtufbFl1pG2nQHg-Zam21Hrk1S7oS_6TUIsdl1owVrHXr5E_DFVrdserSpN5_l-JebMNz-MU4-WMDn4emaOhKBrocdWR_GLYYNoXg7UgZr7NYBPxBnbHKbKrKDqN79bBekqsHZ_bsk9De7Pxp-MFH44uQFmuP1Cr72EANvaj_-j3TsSLi2spP1TaK76VT9J_wrYmc66SnUrH8WOf-Jhq02FqzdstfhvxHE07KTkmlPfMzVXz3OGvedNUfFJT0YjrtZWV1yk3OT-PjQ1De3fAIrytmwH83a8PT2pNr-br2ADG8HKymTAkSjA" -k -i
Response:
HTTP/1.1 200 OK
Authorization: Bearer eyJraWQiOiJmMWUzY2VhYS0wYjIzLTRkYzYtYTc0MS03NDhlMTFjMjYzY2QiLCJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJodHRwczpcL1wvMTI3LjAuMC4xOjkxNjRcLyIsInN1YiI6ImFkbWluIiwiZXhwIjoxNjc1MTU1NDMxLCJzY29wZSI6ImFkbWluIn0.BXXT2R0Twj9EWcSOvZE95YtmN9sblrOGtufbFl1pG2nQHg-Zam21Hrk1S7oS_6TUIsdl1owVrHXr5E_DFVrdserSpN5_l-JebMNz-MU4-WMDn4emaOhKBrocdWR_GLYYNoXg7UgZr7NYBPxBnbHKbKrKDqN79bBekqsHZ_bsk9De7Pxp-MFH44uQFmuP1Cr72EANvaj_-j3TsSLi2spP1TaK76VT9J_wrYmc66SnUrH8WOf-Jhq02FqzdstfhvxHE07KTkmlPfMzVXz3OGvedNUfFJT0YjrtZWV1yk3OT-PjQ1De3fAIrytmwH83a8PT2pNr-br2ADG8HKymTAkSjA
activityid: 5b3615ab-a7a5-4688-a6e4-c95be7898356
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, PUT, DELETE,OPTIONS, PATCH
Host: localhost:9164
Access-Control-Allow-Headers: Authorization, Content-Type
accept: application/json
Content-Type: application/json; charset=UTF-8
Date: Tue, 31 Jan 2023 08:06:54 GMT
Transfer-Encoding: chunked
{"count":2,"list":[{"name":"helloCAR","version":"1.0.0","artifacts":[{"name":"head_jaxrs_basic","type":"api"},{"name":"head_yahooapis","type":"api"},{"name":"head_starbucks","type":"api"},{"name":"helloep","type":"endpoint"}]},{"name":"testnewcar","version":"1.0.0","artifacts":[{"name":"calsignatureunlock","type":"sequence"},{"name":"calsignatuteconfirm","type":"sequence"},{"name":"calsignature","type":"sequence"},{"name":"max_money","type":"proxy-service"}]}]}
[1] https://apim.docs.wso2.com/en/latest/observe/mi-observe/working-with-management-api/#get-carbon-applications
Steps to Reproduce
Downloaded the wso2mi-4.2.0-alpha and wso2mi-dashboard-4.2.0-alpha product versions from here[2]
Configured the MI with the MI dashboard.
Created and deployed sample erroneous CAPP in the wso2mi-4.2.0-alpha. (MI_HOME/repository/deployment/server/carbonapps,)
Login to the dashboard and navigate to the carbon application section.
[2] https://github.com/wso2/micro-integrator/releases
Affected Component
MI
Version
4.2.0-alpha
Environment Details (with versions)
No response
Relevant Log Output
No response
Related Issues
No response
Suggested Labels
4.2.0-alpha
4.2.0-beta
Did a fix, will test further and finalyze tomorrow.
|
GITHUB_ARCHIVE
|
g. edward griffin bitcoin
Looking for g. edward griffin bitcoin? Download Free Mining Software g. edward griffin bitcoin.
Fiat cash transactions involving folks are performed by an middleman, a bank or economical establishment. Transaction Bytecoin dependability is guaranteed by an agent who conducts the transaction.
The Bytecoin surge may additionally be attributed on the announcement of the new functions, which incorporate allegedly never-just before-carried out untraceable tokens - often called “digital property” or “coloured cash”.
The up-to-date Variation prevented blocks with malicious transactions being mined and thus no more coins could possibly be produced. The questions still stay with regards to the cryptocurrency exchanges and wallets, who're supposedly “Risk-free to follow the preceding Variation of program”, in accordance with the Bytecoin statement, but “encouraged to update the protocol”.
in reality, When the thought of untraceable tokens (untraceable digital property) will become a actuality this yr as promised from the Bytecoin roadmap, the foremost traits with the crypto environment could in theory converge: the booming ICO phenomenon, the increasing capitalisation of tokens developed on top rated of varied blockchain platforms, as well as the growing current market curiosity in untraceability and privateness. We are below to watch and find out.
Bytecoin can be an open up decentralized cryptocurrency. everyone intrigued can be part of Bytecoin Bytecoin community and choose aspect in currency progress. As well as the Internet, Bytecoin is Intercontinental by its nature.
The development team states that it patched the bug and labored Along with the mining expert services to update their software (that validates the transactions inside the community), once the bug was discovered.
The only thing you might want to do should be to obtain special software that should create a wallet for you. With support of this software you will be able to mail income to other end users and get payments from them.
The price hike happens on the flourishing investor fascination in cryptocurrency marketplaces, and specifically upon the escalating general public appreciation of untraceable cryptocurrencies that consist of privateness mechanisms (other examples are Monero, sprint and Zcash, that have also knowledgeable Bytecoin a rise in value while in the modern months).
Bytecoin has fashioned its personal network that consists from consumers who use Bytecoin for mutual settlements. Bytecoin community is open up and any individual willing to join is welcome to become a user of Bytecoin currency.
eventually considered one of desktops within the community might be Fortunate to discover the right block framework. Then this Pc places the block into its blockchain file, which represents the databases for all done transactions.
We patched it pretty a while ago, and confirmed that the Monero blockchain had under no circumstances been exploited using this, but right until the tough fork that we experienced some weeks ago we had been Bytecoin unsure regarding whether or not all the network experienced up to date.
John areas an buy on the website. ideal following that he gets one BCN Monthly bill that contains the pizzeria Bytecoin wallet handle. John broadcasts the next instruction to the Bytecoin network: send 1 BCN from John_address to pizzeria_address.
In spite of this bug discovery and patching, the CryptoNote-based mostly cryptocurrency markets, like Monero and Bytecoin, continues to be constructive, retaining them Amongst the top ten by capitalisation. Whether it is since the coin holders usually are not properly-informed on the protocol troubles or They are really self-confident of the development groups’ power to manage these concerns, The very fact remains that Monero’s and Bytecoin’s capitalizations jointly amount of money to $750,000,000 at the time of creating, and Therefore numerous early adopters have absent from rags to riches.
Mining inside the network results in generation of recent money, which serves given that the reward for users who use their computing power to be able to approach transactions.
There's two ways to obtain Bytecoins. You can take part in network maintenance and acquire a reward for it, or invest in BCN immediately on exchanges.
In so performing all money stored with your wallet is now guarded during the transaction processing and the security doesn’t depend Bytecoin on community dependability. Your money is safe in any case.
|
OPCFW_CODE
|
Respect include_metadata_changes argument when creating listeners (#48)
Respect include_metadata_changes argument when creating listeners (#48)
Expose firestore settings (#49)
Switches to wrapping the C++ Firestore object with a Swift class so that the Firestore object takes on reference semantics. This allows for code like Firestore.firestore().settings = ...
Renames Settings to FirestoreSettings, also providing a wrapper. This time using value semantics. But the wrapper helps hide the C++ methods that are named differently than the Obj C API we are trying to emmulate.
Expose isPersistenceEnabled setting.
Expose clear persistence function (#50)
expose clear persistence function
Expose Firestore.terminate (#51)
Also fixes signature of clearPermanence to allow for a nil completion
callback to match the Obj C API.
Fixup the signatures of clearPersistence and terminate (#53)
While the underlying C++ layer does not generate errors for these
calls, the Obj C API exposes errors on these calls, so we should
here as well.
firebase: fully qualify a type name
This corrects a failure to resolve the typename FutureBase when
building for Android.
Modernize implementation of FirebaseAuth methods that return a Future (#54)
Also provides completion variants of async methods to support compatibility with the Firebase Objective C API.
Other changes:
Fixes return value of signOut.
Removes workaround for SR70253 as it is no longer reproducible / appears to have been resolved.
Wraps C++ Auth type in a class to provide reference semantics.
build: add support for platform selection in firebase-cpp-sdk
swift-firebase uses the firebase-cpp-sdk as a foundation for building
the Swift interfaces. This is a native C++ library which requires a
variant for each supported platform. Generalise the path selection to
support additional platforms such as Android.
Change User to be a class wrapper (#57)
This way the type has reference semantics, allowing for code like:
if let user = auth.currentUser {
user.reload { ... }
}
This way the API is more compatible with the equivalent Obj C API.
FirebaseAndroid: introduce the new JNI module for Android support
In order to interact with the C++ Firebase SDK, we need to have access
to the JVM and the JavaEnvironment. Introduce a JNI layer which allows
us to capture this information when loaded. Additionally, register a
company.thebrowser.Native.RegisterActivity method to allow us access
to the Activity. This is required to be called before the Firebase
layer is accessed by the application.
FirebaseAndroid: introduce SwiftFirebase JAR
This introduces the Java bindings for the JNI backed implementation.
When implementing an Android application which uses swift-firebase,
SwiftFirebase provides the bridge to the JNI functions required to
enable the functionality.
FirebaseCore: port to android
This adjusts the API usage to account for signature differences on
Android. This is the first step towards setting up the android port.
FirebaseAndroid: fix & remove nullability annotations
The nullability annotations seem to cause warnings due to
-Wnullability-completeness. Correctly annotating the types as
_Nullable JavaVM * _Nonnull seems to not work across Swift and C/C++.
Rather than try to force this, simply avoid the nullability annotations.
Clear up the method registration which was never being invoked due to an
early return in the success case.
Additional changes to align to the Obj C API (#61)
user is not optional
add missing getIDToken variant
add missing delete variant
Cleanup error code types (#62)
Goals:
Allow consumers to cast Error instances received to AuthErrorCode or FirestoreErrorCode, corresponding to the API used.
Allow consumers to write code like AuthErrorCode.userDisabled.
Allow consumers to write code like AuthErrorCode(.userDisabled).
Make sure {Auth,Firestore}ErrorCode conform to RawRepresentable with a RawValue of type Int.
Also changes:
Removes FirebaseError now that it is no longer needed. This required changing resultAndError to be generic on the error type returned. This is necessary since the underlying C++ API does not have any type information for the error codes produced (they are just integers). At the Swift layer we want to generate the right concrete error types.
This is all intended for better conformance to how these types are reflected to Swift from Obj C.
Intentionally not trying to make these error types extend from NSError at this time. That may be interesting follow-up work.
Add FirebaseFunctions module (#63)
Changes:
Updates workflow to a version of firebase-cpp-sdk that includes functions and storage headers and libs.
Adds incomplete support for mapping between Swift Any and C++ firebase::Variant types -- just what is needed by Arc for now.
Adds FirebaseFunctionsErrorCode following the pattern for FirestoreErrorCode, et. al.
Adds C++ type FunctionsRef as a copyable wrapper for the C++ Functions type, so that it can be exposed to Swift.
Adds HTTPSCallable and HTTPSCallableResult wrapper types.
Adds basic subset of Functions interface -- just what is needed by Arc for now.
FirebaseFunctions API correctness fixes (#64)
Changes:
Make HTTPSCallableResult.data public
FirebaseFunctionsErrorCode should just be FunctionsErrorCode
Add FirebaseStorage module (#65)
Adds the subset of FirebaseStorage needed by Arc:
Storage (partial)
StorageReference (partial)
StorageMetadata (partial)
StorageErrorCode
The implementation of StorageMetadata.customMetadata required some extra thunking through the CxxShims lib.
FLOW-764: Pass callable reference by value to avoid crash
#66 👈
main
This stack of pull requests is managed by Graphite. Learn more about stacking.
Join @fiedukow and the rest of your teammates on Graphite
https://github.com/thebrowsercompany/swift-firebase/pull/67 is a better approach to it that hides that on a lower layer.
|
GITHUB_ARCHIVE
|
Top 11 C# MVC Projects
Open Source ASP.NET MVC Enterprise eCommerce Shopping Cart Solution
Business Apps Made Simple with Asp.Net Core MVC / TypeScript (by serenity-is)Project mention: quick start solution like: AspNetZero vs Serenity ? | reddit.com/r/csharp | 2022-03-09
Static code analysis for 29 languages.. Your projects are multi-language. So is SonarQube analysis. Find Bugs, Vulnerabilities, Security Hotspots, and Code Smells so you can release quality code every time. Get started analyzing your projects today for free.
An extensible framework to audit executing operations in .NET and .NET Core.Project mention: How would you handle audit logging to a database? | reddit.com/r/dotnet | 2022-05-19
I’ve had great success using Audit.Net. It supports a variety of different data stores and I believe it logs things in a background thread…I could be wrong though. At any rate, I have been using it with the EF provider and a custom table. No issues thus far.
Fluent testing library for ASP.NET Core MVC.
Library for easily paging through any IEnumerable/IQueryable in ASP.NET
An ASP.NET Core 6.0 IdentityServer4 Identity Bootstrap 4 template with localization
Helping you quickly build amazing sites
Less time debugging, more time building. Scout APM allows you to find and fix performance issues with no hassle. Now with error monitoring and external services monitoring, Scout is a developer's best friend when it comes to application development.
Razor-powered ORM for .NET
PersianDataAnnotations is ASP.NET Core MVC & ASP.NET MVC Custom Localization DataAnnotations (Localized MVC Errors) for Persian(Farsi) language - فارسی سازی خطاهای اعتبارسنجی توکار ام.وی.سی. و کور.ام.وی.سی. برای نمایش اعتبار سنجی سمت کلاینت
Kontent Boilerplate for development of ASP.NET Core MVC applications.Project mention: What Gatsby v4 brings to your static site? | dev.to | 2021-09-21
If you're using Kontent by Kentico as a content source for your Gatsby site, you're probably using both of these packages:
Simplify.Web is an open-source, lightweight and fast server-side .NET web-framework based on MVC and OWIN for building HTTP based web-applications, RESTful APIs etc.
C# MVC related posts
[Parte 7] ASP.NET: Creando un Sistema Auditable
2 projects | dev.to | 9 Apr 2022
[EF Core] How would you handle modeling of something akin to github issues?
1 project | reddit.com/r/dotnet | 27 Aug 2021
¿Do you know a CMS compatible with angular?
1 project | reddit.com/r/Angular2 | 5 Aug 2021
Can I somehow test the routs of a web app? what controller/action hitting a url will go to?
1 project | reddit.com/r/csharp | 24 Apr 2021
Faster Content Modeling with Kimmel
1 project | dev.to | 6 Apr 2021
How to Audit Your ASP.NET Core WebApi
1 project | dev.to | 12 Feb 2021
Trouble setting up authentication for a dotnet web application web api project.
1 project | reddit.com/r/dotnet | 3 Feb 2021
What are some of the best open-source MVC projects in C#? This list will help you:
Are you hiring? Post a new remote job listing for free.
|
OPCFW_CODE
|
In laptop imaginative and prescient and robotics, simultaneous localization and mapping (SLAM) with cameras is a key matter that goals to permit autonomous programs to navigate and perceive their atmosphere. Geometric mapping is the primary emphasis of conventional SLAM programs, which produce exact however aesthetically primary representations of the environment. Nonetheless, current advances in neural rendering have proven that it’s attainable to include photorealistic picture reconstruction into the SLAM course of, which could enhance robotic programs’ notion talents.
Present approaches considerably depend on implicit representations, making them computationally demanding and unsuitable for deployment on resource-constrained gadgets, though the merging of neural rendering with SLAM has produced promising outcomes. For instance, ESLAM makes use of multi-scale compact tensor elements, whereas Good-SLAM makes use of a hierarchical grid to carry learnable options that mirror the atmosphere. Subsequently, they collaborate to estimate digital camera positions and maximize options by decreasing the reconstruction lack of many ray samples. The method of optimization takes lots of time. Subsequently, to ensure efficient convergence, they need to combine related depth info from a number of sources, comparable to RGB-D cameras, dense optical movement estimators, or monocular depth estimators. Moreover, as a result of the multi-layer perceptrons (MLP) decode the implicit options, it’s normally required to specify a boundary area exactly to normalize ray sampling for greatest outcomes. It restricts the system’s potential to scale. These restrictions recommend that one of many major objectives of SLAM real-time exploration and mapping capabilities in an unfamiliar space using transportable platforms can’t be achieved.
On this publication, the analysis staff from The Hong Kong College of Science and Expertise and Solar Yat-sen College current Photograph-SLAM. This novel framework performs on-line photorealistic mapping and actual localization whereas addressing present approaches’ scalability and computing useful resource limitations. The analysis staff hold observe of a hyper primitives map of level clouds that maintain rotation, scaling, density, spherical harmonic (SH) coefficients, and ORB traits. By backpropagating the loss between the unique and rendered footage, the hyper primitive’s map allows the system to be taught the corresponding mapping and optimize monitoring utilizing an element graph solver. Somewhat than utilizing ray sampling, 3D Gaussian splatting is used to provide the photographs. Whereas introducing a 3D Gaussian splatting renderer can decrease the price of view reconstruction, it can’t produce high-fidelity rendering for on-line incremental mapping, particularly when the scenario is monocular. As well as, the research staff suggests a geometry-based densification approach and a Gaussian Pyramid-based (GP) studying technique to perform high-quality mapping with out relying on dense depth info.
Crucially, GP studying makes it simpler for multi-level options to be acquired step by step, considerably enhancing the system’s mapping efficiency. The research staff used a wide range of datasets taken by RGB-D, stereo, and monocular cameras of their prolonged trials to evaluate the effectiveness of their recommended technique. The findings of this experiment clearly present that PhotoSLAM achieves state-of-the-art efficiency when it comes to rendering pace, photorealistic mapping high quality, and localization effectivity. Furthermore, the Photograph-SLAM system’s real-time operation on embedded gadgets demonstrates its potential for helpful robotics functions. Figs. 1 and a couple of present the schematic overview of Photograph-SLAM in motion.
This work’s major achievements are the next:
• The analysis staff created the primary photorealistic mapping system based mostly on hyper primitives map and simultaneous localization. The brand new framework works with indoor and outside monocular, stereo, and RGB-D cameras.
• The analysis staff recommended utilizing Gaussian Pyramid studying, which allows the mannequin to be taught multi-level options successfully and quickly, leading to high-fidelity mapping. The system can function at real-time pace even on embedded programs, attaining state-of-the-art efficiency due to its full C++ and CUDA implementation. There might be public entry to the code.
Try the Paper. All credit score for this analysis goes to the researchers of this venture. Additionally, don’t neglect to hitch our 33k+ ML SubReddit, 41k+ Fb Group, Discord Channel, and Electronic mail E-newsletter, the place we share the most recent AI analysis information, cool AI initiatives, and extra.
Aneesh Tickoo is a consulting intern at MarktechPost. He’s at present pursuing his undergraduate diploma in Information Science and Synthetic Intelligence from the Indian Institute of Expertise(IIT), Bhilai. He spends most of his time engaged on initiatives geared toward harnessing the ability of machine studying. His analysis curiosity is picture processing and is enthusiastic about constructing options round it. He loves to attach with individuals and collaborate on attention-grabbing initiatives.
|
OPCFW_CODE
|
Prestantiousfiction The Beautiful Wife Of The Whirlwind Marriage webnovel – Chapter 1384 – Buying A Row At Once selective sponge read-p1
Brilliantnovel The Beautiful Wife Of The Whirlwind Marriage read – Chapter 1384 – Buying A Row At Once attractive ablaze read-p1
Great Han’s Female General Wei Qiqi
Novel–The Beautiful Wife Of The Whirlwind Marriage–The Beautiful Wife Of The Whirlwind Marriage
Chapter 1384 – Buying A Row At Once necessary beginner
Wu Yufei said indifferently, “They didn’t get something perfect?”
One appearance and they have been acknowledged, without an ounce of doubt.
Especially when one was able to get yourself a Benz at 200,000 yuan approximately, it had been not any longer as esteemed. Despite the fact that theirs was of your hundreds of thousands variety, not all people could tell.
The television station without delay well informed the company that the designer was performing the top cards. They considered it might be far better to have somebody a lesser major photo the next time.
swann’s way characters
Chapter 1384 Choosing A Row At One Time
Twenty Years Of Balkan Tangle
Lin Che obtained wished to stroll all over. With such an invitation, she could not say so.
How Is It My Fault That I Look Like a Girl!
Wu Yufei wiped over the water on her face and started her vision to check out Liang Shan. Instantly, she shouted, “You dare to splash drinking water at me? Do not you are aware how many enthusiasts I had and you also dare splash liquid at me? This experience produces you money. Will you afford to eradicate me?”
disney fairy books in order
Through at this area, Lin Che going by helping cover their Gu Jingze early on in the morning.
Liang Shan was eventually left angered, slamming the table.
Liang Shan instantly reprimanded her. “You have to be familiar with your photo. How can you permit it to occur? To think you received intoxicated. Seem just how many reporters had been exterior. You may turn into a laugh if they received you functioning to Gu Jingze.”
“Sister Yufei, what would you like to consume?”
The next day, Wu Yufei was known as to attend the company upon waking up.
The a.s.sistant felt it absolutely was just a little bizarre. There were people watching but he could not deny it.
Anyone only acknowledged the Benz signal. Most did not realize or know of various products. Lin Che was among those folks. Nevertheless she realized the cars at your house were actually costly, it was subsequently only a Benz to her and she thinking it should be cheaper than a Porsche. In actual fact, she failed to understand that many Porsches ended up just a couple hundred thousands of, not as expensive as some Benz.
“Are both of you searching? Please, come in. We have now new arrivals within the store. Obtain a look…”
A nearby tyc.o.o.n definitely.
Lin Che possessed wished to walk about. With your an request, she could not say so.
Furthermore, it had been a truth that her facial area introduced the funds. He would not disagree with the money tree.
However, he accepted it. Observing how intoxicated she was, he tolerated it.
Liang Shan was left angered, slamming the dinner table.
In addition, it was subsequently a truth that her facial area introduced the bucks. He would not disagree with all the funds plant.
Liang Shang was bewildered. Just where was the obedient and comprehension Wu Yufei of the past? Where by has she ended up and why was she an increasing number of tough to take care of?
Moreover, it was actually a well known fact that her encounter brought in the bucks. He would not argue using the cash shrub.
A local tyc.o.o.n really.
She bought off and on the airplane. As her flight was overdue, she appeared two hours later.
The a.s.sistant was like a kitty on very hot bricks. Wu Yufei was asleep with a spot.
“Wake up. Have a look at your self. What exactly are you aiming to do, reaching the union to strike a bother? Will you be sick and tired of existing?”
Specially when one could purchase a Benz at 200,000 yuan or so, it was not any longer as esteemed. Even though theirs was in the thousands variety, nobody could tell.
When the person possessed showed up, they were already three time delayed.
Liang Shan right away reprimanded her. “You have to be familiar with your appearance. How will you permit it to happen? To assume you acquired intoxicated. Seem the number of reporters were definitely outside. You could turn into a laugh as long as they received you jogging to Gu Jingze.”
Liang Shan was remaining angered, slamming the table.
Wu Yufei pretended not to pick up and proceeded to go in.
|
OPCFW_CODE
|
In this blog post I will deploy virtual servers within the Azure Portal using Powershell via Azure Cloudshell.
1) Login to the Azure Portal portal.azure.com
2) Click the Cloud Shell icon found towards the top of the portal
3) Click Powershell
4) Click Create Storage. If you want to configure custom settings, click Show Advanced Settings
5) and we’re connected
6) Before creating a Virtual Machine, I will create a resource group to where I will deploy my new VM. My new resource group is named CloudBuildPSRG (PS for PowerShell and RG for Resource Group). My location is UKSouth. You could create this resource group as part of the VM Build commands further down this blog post but for the purpose of this demo, I will create the resource group first.
New-AzResourceGroup -Name CloudBuildPSRG -Location UKSouth
7) If I visit the resource group area within the Azure Portal, here is my newly created resource group
8) We don’t want to only view the new resource group via the portal, let’s take a look at the resource group via PowerShell. Here is the code to display your resource groups
And here is the resource group
9) Let’s move onto creating a VM within this new resource group
Before running the below commands, i’ll explain what each line of code will do
-ResourceGroupName "CloudBuildPSRG" `
-Name "CloudBuildPSVM" `
-Location "UK South" `
-VirtualNetworkName "CloudBuild-PSVNET" `
-SubnetName "subnet1" `
-SecurityGroupName "CBNetworkSecurityGroup" `
-PublicIpAddressName "GBPublicIpAddress" `
-ResourceGroupName “CloudBuildPSRG” – I will use an existing Resource Group that I created in this blog post earlier. In the event the resource group does not exist, a new resource group will be created.
-Name “CloudBuildPSVM” – This is the name of the VM
-Location “UK South” – The VM will be built in region UK South
-VirtualNetworkName “CloudBuild-PSVNET” – I am creating a new VNET but you could also use an existing VNET name if you have already created one
-SubnetName “subnet1” – A new subnet will be created named subnet1. Again you could use an existing by specifying the name.
-SecurityGroupName – NSG name for the VM (Network Security Group)
-PublicIpAddressName “GBPublicIpAddress” – For the purpose of this lab, I will be creating a public IP address. This is something you don’t want to do for a production server. You could use Azure Bastion to connect to a VM from the portal, or connect to the VM from your internal network over VPN.
-OpenPorts 80,3389 – Opening ports within the NSG (Network Security Group) to allow access to the web service and Remote Desktop access. My next blog post will include the installation of IIS via powershell and testing access externally.
10) Let’s continue with running the script. After triggering the script, you’re prompted to create a new local admin username and password for the VM.
and the machine build is in progress
VM build successful
11) Let’s check the status of the VM
get-azvm -name CloudBuildPSVM
12) Let’s check the Azure Portal. There it is. The VM has been deployed in my existing resource group CloudBuildPSRG
13) I’ll now obtain the Public IP address of the VM so I can connect to it. (Note that this is a demo. In a production environment you don’t want to allow RDP access externally). The Public IP could also be obtained from the Azure Portal, but as we’re doing everything within PowerShell, let’s continue with Powershell.
Here is the command I will run to obtain the public IP address of my newly created VM
Get-AzPublicIpAddress -Name GBPublicIpAddress -ResourceGroupName CloudBuildPSRG | Select IPAddress
14) You can now connect to your server
This process creates a Windows 2016 Datacenter server, but what if you want to use a different image available within the Microsoft Azure Marketplace?
Let’s continue with building another VM but this time specifying what image we want to use.
15) Type Get-AzVMImageOffer -Location “UK South” -PublisherName “MicrosoftWindowsServer”
A Marketplace image in Azure has the following attributes:
- Publisher: The organisation that created the image. Examples: Canonical, MicrosoftWindowsServer
- Offer: The name of a group of related images created by a publisher. Examples: UbuntuServer, WindowsServer
- SKU: An instance of an offer, such as a major release of a distribution. Examples: 18.04-LTS, 2019-Datacenter
- Version: The version number of an image SKU.
MicrosoftWindowsServer is a VM publisher name. If you want to view all VM image publishers available within the market place in the UK South region, the command is as follows: Get-AzVMImagePublisher -location “UK South”
16) Here are the results from step 15. The below results show that I have a number of Microsoft Server authors available in the UK South region. I will be using WindowsServer
17) We now dig deeper and find out what images are available within the WindowsServer Publisher selection
Get-AzVMImageSku -Location “UK South” -PublisherName “MicrosoftWindowsServer” -Offer “WindowsServer”
and after running the command below, we have a selection to choose from:
18) Let’s deploy a 2012 R2 Datacenter server
19) Here is what the script look like this time.
-ResourceGroupName “CloudBuildPSRG” `
-Name “CloudBuildPSVM3” `
-Location “uksouth” `
-VirtualNetworkName “CloudBuild-PSVNET” `
-SubnetName “subnet1” `
-SecurityGroupName “CBNetworkSecurityGroup3” `
-PublicIpAddressName “GBPublicIpAddress3” `
-ImageName “MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:latest” `
-OpenPorts 80,3389 `
AsJob allows the command to run in the background allowing you to use PowerShell for other tasks and not have to wait for the script to complete, as you’ll see from the results below.
latest – is a command which requests for the latest image available
After running the script above, as you can see from the screenshot below the output is different because of the additional command -AsJob. The job is now running in the background which means I don’t have to wait for PowerShell to complete the process.
20) And we have successfully deployed a Windows 2012 R2 Datacenter server
|
OPCFW_CODE
|
Archive for the ‘Tech’ Category
Working on an Android application and aiming to support a variety of different hardware is an interesting challenge, not least of which being that the Android emulator lies to you about its capabilities.
I created a new AVD (android virtual device) and specifically set the parameter (which the documentation says defaults to “no” anyhow) so that there would be no camera:
Later on, just to check what my code was doing I added a log statement after I queried what the package manager thought would be the capability:
boolean hasCamera = packageManager.hasSystemFeature( PackageManager.FEATURE_CAMERA); Log.e("DeviceSettings", "Package manager said about camera: "+hasCamera);
And the output?
The emulator lies! How am I meant to do my job? *sigh*
Going to the Strangeloop conference today and need some way to capture my notes and practice typing on my iPad. Guess it’s time for ‘live blogging’ or something.
I am not a touch typist for sure but this on-screen keyboard isn’t too bad. Just strange the way that it is lagging so horribly behind my paltry typing speed. This simply won’t do! I have fifty thousand words to write in November, how will I manage that with this level of lag?
The wildly popular mySQL database has a nice installer for OS X. Mine installed to
/usr/local. I did the install ages ago and forgot what the “root” database user password was. Ooops! No matter, there are some great resources online talking about how to go about resetting permissions. I opted for the incredibly easy, but utterly insecure method (from inside the terminal) :
- Stop the existing server
- Restart telling it to turn off permissions:
mysqld_safe --skip-grant-tables --user=_mysql &
- Jump into the mysql shell application:
mysql -u root mysql
- Change the “root” password:
UPDATE mysql.user SET Password=PASSWORD('#########') where user='root';
- Persist privilages:
Next on the list was exposing my mySQL database to the Tomcat hosted PHP code. Step 1 was making sure I have mySQL JDBC drivers installed, Step 2 was creating a JNDI datasource in the context, and step 3 was getting the PHP to use it.
The officially supported mySQL Connector/J was a quick download. The JAR file for it was dropped into the
$CATALINA_HOME/lib directly. Good to go. The context file was more of a challenge, under the
$CATALINA_HOME/conf/Catalina/localhost I created an XML config file that matched the name of my webapp –
php.xml – where I declared the JNDI datasource. It took some tweaking and reading around but finally ended up coming together:
<Context reloadable="true"> <Resource name="jdbc/mydata" auth="Container" type="javax.sql.DataSource" username="root" password="#########" driverClassName="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/test" /> </Context>
The documentation for Quercus states
Scripts can use the jndi name directly:<?php // standard PHP //mysql_connect($host, $username, $password, $dbname); // using JNDI lookup mysql_connect("java:comp/env/jdbc/myDatabaseName"); ?>
But that doesnt help me to run WordPress as I’m not about to go modifying their code. Never fear, there’s another option. The documentation for Quercus also says that you can put an entry into the
WEB-INF/web.xml file that forces all database connections to go through the same underlying JNDI datasource:
. . . <!-- Tells Quercus to use the following JDBC database and to ignore the arguments of mysql_connect(). --> <init-param> <param-name>database</param-name> <param-value>jdbc/test</param-value> </init-param> . . .
So, a quick update to point at my own datasource and wall was ready to go. I downloaded and unzipped wordpress into the PHP webapp directory, pointed a browser at
http://localhost:8080/php/wordpress and found myself stepping through the famous 5 minute install. Less than 5 minutes later I was looking at the main page of a fully-functioning WordPress install.
I am generally a Java programmer (if my day-job is to be believed) and hacking my way through Apache configuration isnt my idea of a fun evening. For some odd reason my trusty old black MacBook hasnt had the best of times when I try to enable “web sharing” and then try executing PHP scripts – all I get is a dump of the source. Time for a creative solution to the problem!
The folks that created the Resin App Server have a 100% Java implementation of PHP 5 released under the Open Source GPL license, known as Quercus. So step 1 was to download the latest release that will run on non-Resin app servers: quercus-4.0.1.war.
This was the point where I also realized that I dont have an up-to-date install of Apache Tomcat on my system either. Step 2 was to go download Tomcat 6.0.20 and install it.
Many applications need to know where Tomcat lives, and that means setting the
CATALINA_HOME environment variable to point to the install. In a terminal I edited the global startup file for the terminal:
sudo vi /etc/bashrc to add the line:
Step 3 was to
$CATALINA_HOME/bin/startup.sh to start tomcat and then point a browser at
http://localhost:8080/quercus-4.0.1/ where I saw confirmation that all was well:
Testing for Quercus…
Congratulations! Quercus™ Open Source 4.0.1 is interpreting PHP pages. Have fun!
I didnt like the URL though, but that’s easily changed. I simply nuked the
quercus-4.0.1 directory under
$CATALINA_HOME/webapps and renamed the Quercus WAR file “php.war”. I bounced Tomcat and bingo – a “php” directory I could drop files into. As a final test, I created
<?php phpinfo(); ?>
and hit it on the pretty URL:
http://localhost:8080/php/info.php. All displayed correctly. I was off and rolling running PHP on top of Apache Tomcat! Stay tuned for Part II – getting WordPress running on top of PHP running on Tomcat!
How do I loathe thee? Let me count the ways.
I hate thee to the depth and breadth and height
My soul can reach, when feeling out of sight
For the ends of Being and ideal Grace.
I loathe thee to the level of everyday’s
Most quiet need, by sun and candle-light.
I despise thee freely, as men strive for Right;
I loathe thee purely, as they turn from Praise.
I loathe thee with the passion put to use
In my old griefs, and with my childhood’s faith.
I loathe thee with a hatred I seemed to lose
With my lost saints,—I loathe thee with the breath,
Smiles, tears, of all my life!—and, if God choose,
I shall but loathe thee better after death.
With apologies to the original author. How is it possible for a simple piece of computer software to draw out such ire? Grrr.
|Never before have so many people
with so little to say
said so much
to so few.
What an amazingly frustrating weekend. <sigh />
The annoying IBM Thinkpad refused to connect … neither wired, or wireless (on any one of three different wireless networks, no less). On the other hand my trusty MacBook hopped online in every given scenario. Oh, and before this turns into a Windows vs. Apple cage match, the Dell Inspiron laptop also connected flawlessly in every case.
Can I have those hours of my life back, maybe trade them in against 5 minutes of something more productive … like fly-fishing in an empty pond or something?
Oh, and running the build-tool (“maven”) offline proved to be interesting – looks like when I get back to the corporate network the central code repository has been “blacklisted” by the tool because it was unreachable in multiple builds. Just perfect!
The gaming site Gamasutra recently posted an article about EA building “Battlefield Heroes”, and how their use of an agile SCRUM-based development methodology allowed them to release in the aggressive timescales that the executives asked for. Very cool!
Every year during the National Novel Writing Month they publish a weekly podcast called “WrimoRadio“. I have been consistently impressed with the quality and have thought to send in a small editorial piece to them but was too busy with other things (like writing a 50,000 word novel in a month). I am very tempted to send in the “plot bunnies” piece this year though.
Podcast production is something I enjoy, I have the hardware investment and the previous experience. Should I ping him and offer? With everything else I am doing that would be one more reason I might not make 50,000 words. On the other hand I could put it on my resume – “podcast production for the National Novel Writing Month” – so it has its benefits too. Maybe I am oversimplifying what it would take … it wouldn’t be the first (or the last) time I’d done that!
|
OPCFW_CODE
|
Last post I talked about the high-level architecture of our Office Business Application for the new Northwind Traders. There are a lot of different architecture options to consider when building an OBA. OBA is all about using Microsoft Office with your Line of Business (LOB) data. Whether that involves using SharePoint as well depends on the application. Since we wanted to store the unstructured data (the Northwind customer P.O.) SharePoint is a good fit here.
There are a lot of options when thinking about how to expose your LOB data. For instance, you may already have a service oriented architecture at the enterprise that exposes data contracts and processes that you can consume from Office clients. Or maybe you have a small business and have decided to expose a simple service that returns and consumes n-tier DataSets directly. Or you already have a custom LOB data entry system using custom business objects and you want to reuse the business layer in the Office client. OBA doesn’t dictate how you expose this data. Because you can consume data in Office clients the same way you do in Windows apps the same types of decisions need to be made.
When we sat down to write the new Northwind Traders application we thought about how our data would need to behave and what would be the best way for all the pieces to easily update and query the Northwind database. Because there was only going to be simple validations needed on the data and mostly CRUD operations we opted to expose an Entity Data Model via ADO.NET Data Services like I showed before. This allowed us to get a secure service up and running in minutes.
We did make some minor changes to our old friend, the Northwind database. First, since we wanted to be able to look up order history for a customer when they emailed the sales reps, we needed to add an EmailAddress field to the Customers table (amazing that we didn’t have that field before!). We also added it to the Employees table.
ALTER TABLE dbo.Customers ADD EmailAddress varchar(50) NULL GO ALTER TABLE dbo.Employees ADD EmailAddress varchar(50) NULL GO
Then we populated the data with some customers and employees that were actually folks on our team because we need real email addresses to work with :-)
Next I created a new ASP.NET Web Application and added an ADO.NET Data Service and an Entity Data Model just like how I showed in this post. (You will need Visual Studio 2008 Service Pack 1 in order to get these new item templates.) For testing we set the service to allow full access to all the entities in the model — we’ll lock it down later. I also am passing detailed errors which we won’t want to do once we’re in production:
Public Class Northwind Inherits DataService(Of NorthwindEntities) ' This method is called only once to initialize service-wide policies. Public Shared Sub InitializeService(ByVal config As IDataServiceConfiguration) config.SetEntitySetAccessRule("*", EntitySetRights.All) config.UseVerboseErrors = True End Sub
One thing we did want to do is set up our data model so that it enforced constraints (i.e. there cannot be an Order without a Customer) but since some of our legacy data didn’t specify all of these constraints we made the changes to the model instead, so that the integrity on all new data would be enforced through the service. This is often the case in projects, you cannot change the legacy databases but you still need to work with proper data models. So we changed the EDM so that all the entities were singular and not plural (Customer instead of Customers, Order instead of Orders, etc). We also changed the associations so that they were enforced and so that one to many collections were plural and the one-to-one were singular (i.e. Order has Order_Details collection and Order_Detail has Product reference). You can modify these from the Properties window of the Entity Data Model Designer.
Once I have the model and the data service code set up we can hit F5 and navigate our browser to the Northwind.svc and test the call to pull up all the customers (i.e. http://localhost:1234/Northwind.svc/Customers) just like how I showed in this post.
Now that we have our data exposed as a data service we can build the Office clients to interact with it just like I showed before here when we built a simple Excel client. Next post I’ll show how we can use WPF controls in an Outlook Add-In in order to display the customer order history by querying the data through the data service.
Until next time…
|
OPCFW_CODE
|
I am applying the first patch 9239089
But i got error below:
collect2: ld returned 1 exit status
make: *** [/u02/apptest4/TEST4/apps/apps_st/appl/ad/12.0.0/bin/adwrknew] Error 1
Done with link of ad executable 'adwrknew' on Sat Jul 1 07:20:01 PHT 2017
Relink of module "adwrknew" failed.
See error messages above (also recorded in log file) for possible
reasons for the failure. Also, please check that the Unix userid
running adrelink has read, write, and execute permissions
on the directory /u02/apptest4/TEST4/apps/apps_st/appl/ad/12.0.0/bin,
and that there is sufficient space remaining on the disk partition
containing your Oracle Applications installation.
Done with link of product 'ad' on Sat Jul 1 07:20:01 PHT 2017
adrelink is exiting with status 1
End of adrelink session
Date/time is Sat Jul 1 07:20:01 PHT 2017
Line-wrapping log file for readability ...
Done line-wrapping log file.
Original copy is /u02/apptest4/TEST4/apps/apps_st/appl/admin/TEST4/log/adrelink.lsv
New copy is /u02/apptest4/TEST4/apps/apps_st/appl/admin/TEST4/log/adrelink.log
An error occurred while relinking application programs.
Continue as if it were successful [No] :
How to fix relinking error?
Can I ignore it as Yes? and proceed to other patches?
As your applmgr user for the TEST instance, try to touch a file in /u02/apptest4/TEST4/apps/apps_st/appl/ad/12.0.0/bin. It is saying it cannot write files there. Check ownership of $APPL_TOP and $AD_TOP/bin.
You cannot ignore this.
To answer your question about merging, you must not merge an AD minipack or AD.B.Delta patch with any other patch. You could possibly merge the last 3 but I have never done it that way. These last 3 patches are pretty small and only take a few minutes to apply.
But I am teaching merge patch to students, I need a working good example
But now even my sample test apply single patch have lots of errors
I do not know what is happening to this server.
I am embarrassed to my students, I can not resolve the error.
You are looking at the wrong way!
First, if you want to upgrade 12.1.1 to 12.1.3, follow the 12.1.3 README and apply all prerequisities 1st.
Remember you cannot merge ATG and Non-ATG patches such as AD and AP together.
Oracle E-Business Suite Release 12.1.3 Readme (Doc ID 1080973.1)
Apply all the prequisites like Product Patches together using admrgpch which will not be a problem.
Let me know if you have any doubts.
I will recheck my test environment why I got to many errors ...brb
|
OPCFW_CODE
|
In Looker Studio, there are two main entities :
A report allows you to display charts and tables with data contained in a data source.
We will see today how to create a data source in Looker Studio, and then, in the next course, we'll see how to create a report and add your data source.
To create a data source, you need to use a connector :
You can create a data source by using a connector. There is 2 type of connector :
These are connectors for mostly Google tools, such as Google Ads, Google Sheets, Google Analytics, Youtube Analytics… but you can also find Amazon Redshift, Microsoft SQL Server, PostgreSQL, BigQuery…
There is 24 Google connector today, and these connectors are free to use.
These connectors are for other tools like Facebook Ads, Microsoft Ads, LinkedIn Ads, TikTok Ads, TikTok Profile, and many more. They are developed by the community (like Catchr), and Google checks and validates them before making them visible in their store.
As of today (January 2023), there are 688 connectors developed by the community. The complete list of Catchr connectors is available here.
Data Sources are the gateway to your data. It is the connection between Looker Studio and the tools you need for the data.
To create one, click on Create → Data Source on the homepage, select a connector, pass through the different steps of authorization, and choose your account. There can be a bunch of options, don't worry, you can modify it later.
Congrats, your data source has been created! You can find it in your data source list on the homepage.
You can start creating your report right now with this tutorial.
If you want to learn more about this page, here's more information :
This is your data source page configuration :
You can configure a lot of things here:
You can change the name just by clicking on it. It will be changed in every report this data source is used and in the data source list. You should always rename your data source in order to find them easily.
Data Credential: you can check which identifiers are used to access the data. You can also change this configuration to use the viewer's credential. That means every report viewer using this data source must have access to the tools linked to the data source.
This indicates the rate of data refreshes. It is set by default to 12 hours, but a lot of connectors (Google and Partner) used a live connection. For example, all of Catchr's connectors have a live link; every time you refresh a report, you get new data.
Community Visualisations Access: you can disallow the editor of the report to use community visualization (special charts for your report). These are charts created by the community. If used when this option is not allowed, they will only show an error message. You must do that to prevent data transit through the community chart creator.
Available by default, you can disable this options to prevent report editor to make any modification in type or aggregation method for this data source's fields.
You can create an exact copy of your data source at any time. Any modification will be copied as well (new custom field, new name or type for existing field)
Create a report: create a blank report containing these data sources.
Explore: create an explorer with this data source. It can be useful if you only need to search information through data and do not want to create a full report.
Add a field: you can create custom fields easily with a custom formula. To know more about this, follow this link.
You can create a parameter to allow the viewer to play with your data (usually combined with a custom field). More information is available here.
Now that your data source is created, learn how to use it in a report in this tutorial !
|
OPCFW_CODE
|
NavCoin Core 4.5.0
NavCoin Core 4.5.0 has been released today which marks the last significant update to NavCoin Core of 2018. This release includes two important consensus soft forks which will bring greater security to the network and to individuals funds; Cold Staking and Static Rewards.
Described in NPIP0002, Cold Staking will allow transactions to be associated with two private keys instead of one. One key is used for spending while the other can only be used for staking, therefore offering increased security of user funds by allowing them to keep the spending key offline and not in their hot staking wallet.
Described in NPIP0004, Static Rewards will change the block reward from being calculated as an annual percentage of the staking input, to always being 2 NAV per block. Total annual inflation will remain around 4% but individual rewards will increase to around 10% per annum depending on the total weight of the network. To claim the maximum staking rewards, wallets need to remain constantly online and securing the blockchain with their network weight.
Why and How to Vote
It is hoped that both of these soft forks attract new stakers to the NavCoin network, therefore increasing network security even further.
Both soft forks are signalling YES by default in version 4.5.0 so all you need to do is upgrade and you will be signalling for them automatically. If you wish to vote against them, it’s recommended that you still upgrade to 4.5.0 but then reject version bits 3 and/or 15 in your config file.
Dynamic CFund Quorum
With the Community Fund in operation for about a month now, we’ve had some time to see what kind of participation levels in proposal voting the network will engage in. Most of the proposals have got significantly more than 75% yes votes, but the quorum is often just falling short of the 50% required threshold.
NavCoin Core have implemented an opt-in soft fork which would keep the quorum at 50% for the first 3 voting periods, then reduce the quorum to 40% for the last 3 voting periods if consensus has not been reached to help proposals pass.
This soft fork is signalling NO by default in version 4.5.0, so if you want to vote for it you need to accept version bit 17 in your config file.
CFund Voting Interface
NavCoin Core now comes fully equipped with a graphical interface to view and vote on community fund proposals and payment requests. There is also a notification which will appear when a new proposal or payment request is found on the blockchain.
Block Header Spam Protection
The wallet will now rate-limit the amount of block headers received from a single peer before banning them for misbehaving. This is an anti-spam measure to prevent malicious nodes from attempting to flood the network.
Full Release Notes
Beyond these keynote features, there has been some additional CFund RPC commands added and a stack of GitHub issues resolved. To see the entire list of changes, please visit the 4.5.0 release notes on GitHub.
You can download NavCoin Core 4.5.0 from the downloads section or directly from the Github release.
The Windows Installers are now available again for anyone who has been having issues with the zipped exe released as part of 4.4.0. Please note that the checksum for the windows installers is valid for the published files. However due to the cross platform build being non deterministic if you build the installer yourself from source, you will most likely end up with a different checksum.
|
OPCFW_CODE
|
import getLogger from '@whitetrefoil/log-utils'
interface InStorage {
t?: string
h?: Record<string, Record<string, number>>
}
const { debug, warn } = getLogger(`/src/${__filename.split('?')[0]}`)
export const getInStorage = (): InStorage => {
const inStorage: InStorage = {}
const inStorageJson: string|null = window.localStorage.getItem('whitetrefoil-checkin-temp')
if (inStorageJson != null && inStorageJson !== '') {
try {
const parsed = JSON.parse(atob(inStorageJson)) as InStorage|null|undefined
inStorage.t = parsed?.t
inStorage.h = parsed?.h
} catch (e: unknown) {
warn('Failed to parse cache in localStorage, will reset. Reason:', e)
window.localStorage.removeItem('whitetrefoil-checkin-temp')
}
}
return inStorage
}
export const updateStorage = (updater: (val: InStorage) => InStorage) => {
const prev = getInStorage()
const next = updater(prev)
debug('Update Storage:', next)
window.localStorage.setItem('whitetrefoil-checkin-temp', btoa(JSON.stringify(next)))
}
|
STACK_EDU
|
The VMware Knowledge Base provides support solutions, error messages and troubleshooting guides
Installing VMware Tools in a Solaris virtual machine (1023956)
Note: For an overview of installing VMware Tools, see Overview of VMware Tools (340).
- Within the vSphere Client, ensure that your Solaris virtual machine is powered on.
- If you are running a GUI interface on the Solaris virtual machine, open a command shell.
Note: Log in as a root user, or use the sudo command to complete each of these steps.
- In the vSphere Client, click VM in the virtual machine menu.
- Click Guest > Install/Upgrade VMware Tools and click OK.
- In the Solaris virtual machine, copy vmware-solaris-tools.tar.gz from /cdrom/vmwaretools to a temporary directory (/tmp/).
- The /cdrom folder may need to be created prior to continuing.
- If the CD-ROM device is not mounted, run these commands and then attempt to copy the file:
sudo svcadm enable -r volfs
- Decompress the file using gunzip command. For example:
# gunzip vmware-solaris-tools.tar.gz
- Extract the contents of the tar file with the command:
# tar xvf vmware-solaris-tools.tar
- Change directory using the command:
# cd vmware-tools-distrib
- To install the VMware Tools, run this command from the directory that you changed in step 7:
- Press Enter to accept all of the default values.
- Reboot the virtual machine for the changes to take effect.
- Check if VMware tools service is running with the command:
# /etc/init.d/vmware-tools status
You see the output similar to:
vmtoolsd is running
- Add vmware-toolbox to the list of startup commands of your desktop.
If you are using Java Desktop System, Release 3:
- Go to Launch > Preferences > Desktop Preferences > Sessions.
- Click Startup Programs tab and add these entries:
Note: If your problem still exists after trying the steps in this article, please file a support request with VMware Support and note this Knowledge Base article ID (1023956) in the problem description. For more information, see How to Submit a Support Request.
For more information on:
- VMware Tools versions, see Determine the version of VMware Tools running in a Solaris virtual machine (2030805).
- The versions of Solaris that are supported on which versions of ESX or ESXi, see the VMware Compatibility Guide.
- VMware Tools installation information, see General VMware Tools installation instructions (1014294).
- Ejecting the CD-ROM, see Solaris 10 guest cannot eject ISO image mounted as CD-ROM (1012986).
Request a Product Feature
To request a new product feature or to provide feedback on a VMware product, please visit the Request a Product Feature page.
|
OPCFW_CODE
|
Students don't really need to deal an added instructional selling price tutors who may manage their assignments endeavors; their complete assignments difficulty will be tackled by our glorious Computer Science Assignment Help solutions within a short timeframe.
First time I got discovered through the lecturers in the class of one hundred pupils that much too in a good way. Sure, every time a twisted problem was set up with the instructors for all The scholars, no one arrived forward to solve the given concern. But following some minutes gathering all my energy and self confidence, I move ahead and solved the issue.
entrepreneurship organization administration asset management company interaction behaviour administration Global organization leadership business enterprise management marketing and advertising marketing investigate risk administration MBA assignment e marketing and advertising Small business worldwide marketing Intercontinental banking industrial relations operations management organizational habits overall high-quality administration project management hr circumstance examine Accounting and Finance Australian taxation process managerial accounting statistics econometrics economics Corporate Accounting monetary accounting accounting finance Auditing Assignment Help
All our assignments are organized by verified computer science industry experts, who have understanding in numerous fields relevant to computer programing, networking and much more. This ensures that the quality of our assignment is leading and matches your standard of expectation.
The Product papers furnished by Bestassignmentexperts.com for school students for reference only and advised never to be submitted in initial as these papers are alleged to be employed for Assessment and reference capabilities only.
Even the hardest and difficult Computer Science assignment questions might be solved by our on line authorities within handful of hrs.
Java is usually a programming language, which is characterised by a class-based, item-oriented and concurrent implementation. The programming is meant to build applications with a WORA (create-once operate any place) foundation which means that a compiled Java code can website link successfully operate on all platforms with no currently being recompiled. It truly is the preferred programming language and essentially the most-fitted to Android programming. However, establishing Java applications is often an intimidating expertise for many learners.
Expert services Assignment Help Assignment Services low-priced assignment help case examine assignment help my assignment help do my assignment eviews assignment help address my assignment literature assignment help buy my assignment literature evaluation make my assignment enhancing providers tafe assignment help minitab assignment help m additionally assignment help media microeconomics mass conversation assignment author Assignment Help
Why On the net assignments help company from AllAssignmentHelp effective? Allassignmenthelp incorporates a crew with abilities and practical experience in tutorial projects. Our staff has industry experts with related field expertise, who're focused on helping students with their homework. We work on the elemental of ASAP, which means Affordability, Plagiarism free of charge Option, Availability, and Professionalism. We're a staff of experts who tries to help you with every single educational Examine. 1. Our Expert tutors often get the job done in sync with the necessities specified to us, and this makes our assignment Alternative a great just one.
In organizations, You will find a higher demand from customers of firm computers. Thus, the desire of proficient Computer science is current in every single place.
We can offer satisfactory guidance in building an Android app. Our workforce of programming industry experts is kind of effective in providing programming web link help on Android app designing according to your want.
We also give the stories and in try this site addition ensure Remarkable assignments which happen to be Remarkable in mother nature. It also can contain referencing as a way to gratify the prerequisites of all understudies.
Computer science coursework help: We offer outstanding support on your computer science program to be able to see an experienced presentation of queries and answers for the whole syllabus of computer science.
The writing firms are viewed as a responsible Good friend of the students because it will come for a savior Anytime The scholars are in need to have.
|
OPCFW_CODE
|
Click the Text Documents drop-down menu at the bottom-right corner of the window, then click All Files. Because of all the available options for each Linux variants we’ve given this section a page of its own in the following link. In her daily life, Vega enjoys hanging out with friends, shopping online and listening to music. To disable hyperlinks, go to the Preferences option under the Settings tab.
- We are harder to find because we don’t do search engine optimization so make sure you save this address if you wish to come back later.
- Transfer the save folder to your http://www.sup-garage.de/exploring-the-limitations-of-notepad-for-folder PC/Android device save folder via FTP or USB with Vita Shell.
- If it has the 〉 right-arrow chevron, clicking that will “unfold” that level, so that it will show the files and directories under that directory.
- If you want to convert all items of a list to a string when writing then use the generator expression.
WordPad is another built-in word processing application in Windows with many additional formatting functions. Using Process.Start is a potential command injection, whereby an attacker could substitute MyTextFile.txt for MyMalicious.bat or fdisk …. Better to use Process.Start(«notepad.exe», filename). In this section, we will learn how to save, compile, and run a Java program in Command Prompt using notepad.
Open file using CMD in Notepad++
I’d like to do this in a tool that generates text which is supposed to be copied and pasted into another text file. The command line can be used to open a text file in Linux. My File can be accessed by typing cat myFile into the Desktop. You can print the contents of the file using the command line txt. There is another reason why this application gets into the problem when opening large files. The large the file is, the more is the required memory.
Search for Password hash and change it to no password. Click on the newly saved file and open it with Notepad. PS – Looking to remove a password from a Word document? Check out our guide to removing a password from MS word with examples.
You can use Node’s process object to access them. The Node.js interpreter read the file and executed console.log(«Hello World»); by calling the log method of the global console object. The string «Hello World» was passed as an argument to the log function. In the context of Node.js, streams are objects that can either receive data, like the stdout stream, or objects that can output data, like a network socket or a file. In the case of the stdout and stderr streams, any data sent to them will then be shown in the console. One of the great things about streams is that they’re easily redirected, in which case you can redirect the output of your program to a file, for example.
|
OPCFW_CODE
|
National Groundwater Conditions Web Application
Data and Resources
National Groundwater Conditions HTML
This application provides a snapshot of current groundwater levels across the...
|USGS Computational Tools
|Data Collection Frequency
|Current and historical groundwater data are fetched from the National Water Information System (NWIS) web service, and the National Ground-Water Monitoring Network (NGWMN) web service. Non-USGS NGWMN sites are assumed to be regularly monitoring groundwater levels, while only NWIS sites within the Active Groundwater Level Network (see 'aw' outputDataTypeCd), which includes sites with a water level measurement taken at least once within the past 13 months, are displayed in this application. Some exceptions apply. For example, sites part of an occasional data collection program, or sites for which there is pending funding for future measurements may be shown. Similarly, sites for which there was formerly funding for data collection may still be shown in the Active Groundwater Level Network. NWIS data is gathered using the dataRetrieval R package. 'Discrete' water level measurements are collected using the dataRetrieval::readNWISgwl function, which queries the NWIS USGS Groundwater Levels Web Service , a service that provides historical manually-recorded groundwater levels. For this query, the USGS parameter code '72019' is used, which corresponds to 'Depth to water level, feet below land surface' per the Groundwater Levels Web Service Documentation. 'Continuous' water level measurements are obtained from NWIS using the dataRetrieval::readNWISdv function, which queries the USGS Daily Values Site Web Service. When collecting these data, the '72019' parameter code is also used. For these data, both the 00001 (maximum) and 00003 (mean) Statistics Codes are collected. These data are compared, and the statistic code with more data is used for the subsequent percentile calculations. This step is taken as different regions measure and record their water depth data differently, with some using the maximum statistic code, and others making use of the mean statistic code. In the event of a tie (both data records have the same number of entries), the mean values are used. National-scale NWIS site percentile values are computed daily using the precompute package. Statistics computations, tables, and plots are computed using the HASP package. Data is fetched from web services using the dataRetrieval package. For the non-USGS NGWMN sites, the precompute package is used to request a shapefile with the latest water-level percentile information from the NGWMN itself, per instructions in this tip sheet. As a consequence, the percentile values for non-USGS NGWMN sites are calculated by NGWMN, per their statistics methods documentation. Similar to the NGWMN criteria, the NWIS data at a site must have at least 10 years of data for the given month in order to be given a percentile rank. Sites with a historical record shorter than 10 years for a particular month are not ranked at this time. Note: Data shown on the national map are refreshed daily, and may not reflect the most recently collected values. In contrast, the individual site pages (accessed via link in the site pop-up, or from the summary table below the map) do access and display the latest and most up-to-date information for a given site.
|Data Publishing Method
|Data Publishing Frequency
|National Water Information System (NWIS) web service and the National Ground-Water Monitoring Network (NGWMN) web service
|27 November 2023
|27 November 2023
|
OPCFW_CODE
|
That's very much intentional. We try to limit everything we do in Stash so that we don't start running into performance problems when user data is unexpectedly large.
In this particular case there is a global setting that you can set in the $STASH_HOME/stash-config.properties file.
EDIT: Apologies I think the limit you want is the following, although the default should be 500, not 100.
I would be a very careful about making this too large.
This is a problem because the service receiving the POST must know to reach back to Stash and make potentially multiple requests. There are many race conditions that could occur not to mention network partition-related problems. The data should be provided in one payload period for webhooks. For the web UI, this is fine -- I would want it paged. But not for webhooks.
It's unclear why you would have performance-related problems. 100 changes in a single commit is not that much to process (now a person pushing that many commits at once should re-think what they're doing -- but during a large merge it could be feasible). For the webhook I'd prefer to disable the limit.
As for the performance problem, why not stream the data out? You can use a constant amount of memory if architected correctly.
That's not a feasible solution for many webhook consumers. Yes you could have a network partitioning problem whenever you have 2 machines communicating over an unreliable medium (the network), but I was referring to the fact that the issue is exacerbated when multiple calls are required. That also requires the consumer to know something about the producer (my service needs to know how to formulate a query to get the next page of info. -- speaking of that, how is that done?).
Following your suggestion requires much more work, plus additional storage for each repository, plus having to issue git commands and/or using additional dependencies in order to properly discover and process the necessary information. And all of that still suffers from network partitioning problems.
At the very least, can I specify "plugin.com.atlassian.stash.plugin.hook.changesLimit=-1" to indicate I want "no limits whatsoever"?
Honestly, I wouldn't be relying on the webhook data like that. As you mention you might get network partitioning problems, and then lose the data. If it were me I would use the webhook to see what refs are changing, and then actually fetch the latest data with Git. Then you can guarantee you never miss anything. You could also poll to handle missed hook calls.
Otherwise just make that limit really large...
To page more results you could use the Stash REST API:
This endpoint can take since/until for the commit range, as well as start/limit query parameters for paging.
Unfortunately it doesn't look like our common paging API will allow -1 as a limit, it will default to a minimum of 1.
PS. See my edit in the 'answer' above, I may have given you the wrong property before.
So I would need to call back into the REST API? All of this is starting to sound extremely silly to me. None of what I'm hearing seems consistent with what I know and am familiar with regards to webhooks (certainly happy to be educated, though, if this is what goes on elsewhere). :D
To get around the API limitation you could always loop through the pages and stream the data a page at a time.
Perhaps I should simply fork the webhook plugin and make the changes I need? Would you be able to point me to where the source lives for the plugin?
Hello! My name is Mark Askew and I am a Premier Support Engineer for products Bitbucket Server/Data Center, Fisheye & Crucible. Today, I want to bring the discussion that Jennifer, Matt, and ...
Connect with like-minded Atlassian users at free events near you!Find a group
Connect with like-minded Atlassian users at free events near you!
Unfortunately there are no AUG chapters near you at the moment.Start an AUG
You're one step closer to meeting fellow Atlassian users at your local meet up. Learn more about AUGs
|
OPCFW_CODE
|
using Selenium.Core;
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Runtime.InteropServices;
using System.Text;
namespace Selenium {
/// <summary>
/// The user-facing API for emulating complex user gestures. Use this class rather than using the Keyboard or Mouse directly. Implements the builder pattern: Builds a CompositeAction containing all actions specified by the method calls.
/// </summary>
/// <example>
/// <code lang="vbs">
/// Set ele = driver.FindElementById("id")
/// driver.Actions.ClickDouble(ele).SendKeys("abcd").Perform
/// </code>
/// </example>
[ProgId("Selenium.Actions")]
[Guid("0277FC34-FD1B-4616-BB19-FB8601D6B166")]
[Description("User-facing API for emulating complex user gestures.")]
[ComVisible(true), ClassInterface(ClassInterfaceType.None)]
public class Actions : ComInterfaces._Actions {
const char KEY_SHIFT = '\xE008';
const char KEY_CTRL = '\xE009';
const char KEY_ALT = '\xE00A';
delegate void Action();
private RemoteSession _session;
private Mouse _mouse;
private Keyboard _keyboard;
private List<Action> _actions;
private bool _isMouseDown = false;
private bool _isKeyShiftDown = false;
private bool _isKeyCtrlDown = false;
private bool _isKeyAltDown = false;
internal Actions(RemoteSession session) {
_session = session;
_mouse = session.mouse;
_keyboard = session.keyboard;
_actions = new List<Action>();
}
/// <summary>
/// Performs all stored Actions.
/// </summary>
public void Perform() {
//perform actions
foreach (var action in _actions)
action();
//release the mouse if the state is down
if (_isMouseDown)
_mouse.Release();
_isMouseDown = false;
//release the keyboard if the modifiers keys are pressed
var modifiers = new StringBuilder(10);
if (_isKeyShiftDown)
modifiers.Append(KEY_SHIFT);
if (_isKeyCtrlDown)
modifiers.Append(KEY_CTRL);
if (_isKeyAltDown)
modifiers.Append(KEY_ALT);
if (modifiers.Length != 0) {
_keyboard.SendKeys(modifiers.ToString());
_isKeyShiftDown = false;
_isKeyCtrlDown = false;
_isKeyAltDown = false;
}
}
/// <summary>
/// Waits the given time in millisecond.
/// </summary>
/// <param name="timems">Time to wait in millisecond.</param>
/// <returns>Self</returns>
public Actions Wait(int timems) {
_actions.Add(() => SysWaiter.Wait(timems));
return this;
}
/// <summary>
/// Clicks an element.
/// </summary>
/// <param name="element">The element to click. If None, clicks on current mouse position.</param>
/// <returns>Self</returns>
public Actions Click(WebElement element = null) {
_actions.Add(() => act_click(element));
return this;
}
/// <summary>
/// Holds down the left mouse button on an element.
/// </summary>
/// <param name="element">The element to mouse down. If None, clicks on current mouse position.</param>
/// <returns>Self</returns>
public Actions ClickAndHold(WebElement element = null) {
_actions.Add(() => act_mouse_press(element));
return this;
}
/// <summary>
/// Performs a context-click (right click) on an element.
/// </summary>
/// <param name="element"> The element to context-click. If None, clicks on current mouse position.</param>
/// <returns>Self</returns>
public Actions ClickContext(WebElement element = null) {
_actions.Add(() => act_click_context(element));
return this;
}
/// <summary>
/// Double-clicks an element.
/// </summary>
/// <param name="element">The element to double-click. If None, clicks on current mouse position.</param>
/// <returns>Self</returns>
public Actions ClickDouble(WebElement element = null) {
_actions.Add(() => act_click_double(element));
return this;
}
/// <summary>
/// Holds down the left mouse button on the source element, then moves to the target element and releases the mouse button.
/// </summary>
/// <param name="elementSource">The element to mouse down.</param>
/// <param name="elementTarget">The element to mouse up.</param>
/// <returns>Self</returns>
public Actions DragAndDrop(WebElement elementSource, WebElement elementTarget) {
_actions.Add(() => act_drag_drop(elementSource, elementTarget));
return this;
}
/// <summary>
/// Holds down the left mouse button on the source element, then moves to the target element and releases the mouse button.
/// </summary>
/// <param name="element">The element to mouse down.</param>
/// <param name="offset_x">X offset to move to.</param>
/// <param name="offset_y">Y offset to move to.</param>
/// <returns>Self</returns>
public Actions DragAndDropByOffset(WebElement element, int offset_x, int offset_y) {
_actions.Add(() => act_drag_drop_offset(element, offset_x, offset_y));
return this;
}
/// <summary>
/// Sends a key press only, without releasing it. Should only be used with modifier keys (Control, Alt and Shift).
/// </summary>
/// <param name="modifierKey">The modifier key to Send. Values are defined in Keys class.</param>
/// <param name="element">The element to Send keys. If None, sends a key to current focused element.</param>
/// <returns>Self</returns>
public Actions KeyDown(string modifierKey, WebElement element = null) {
_actions.Add(() => act_key_down(modifierKey, element));
return this;
}
/// <summary>
/// Releases a modifier key.
/// </summary>
/// <param name="modifierKey">The modifier key to Send. Values are defined in Keys class.</param>
/// <returns>Self</returns>
public Actions KeyUp(string modifierKey) {
_actions.Add(() => act_send_modifier_key(modifierKey));
return this;
}
/// <summary>
/// Moving the mouse to an offset from current mouse position.
/// </summary>
/// <param name="offset_x">X offset to move to.</param>
/// <param name="offset_y">Y offset to move to.</param>
/// <returns>Self</returns>
public Actions MoveByOffset(int offset_x, int offset_y) {
_actions.Add(() => act_mouse_mouve(offset_x, offset_y));
return this;
}
/// <summary>
/// Moving the mouse to the middle of an element.
/// </summary>
/// <param name="element">The element to move to.</param>
/// <returns>Self</returns>
public Actions MoveToElement(WebElement element) {
_actions.Add(() => act_mouse_mouve(element));
return this;
}
/// <summary>
/// Releasing a held mouse button.
/// </summary>
/// <returns>Self</returns>
public Actions Release([MarshalAs(UnmanagedType.Struct)]WebElement element = null) {
_actions.Add(() => act_mouse_release(element));
return this;
}
/// <summary>
/// Sends keys to an element.
/// </summary>
/// <param name="keys">Keys to send</param>
/// <param name="element">Element to Send keys. If None, Send keys to the current mouse position.</param>
/// <returns>Self</returns>
public Actions SendKeys(string keys, WebElement element = null) {
_actions.Add(() => act_send_keys(keys, element));
return this;
}
#region private support methods
private void act_click(WebElement element) {
act_mouse_mouve(element);
act_mouse_click();
}
private void act_click_context(WebElement element) {
act_mouse_mouve(element);
act_mouse_click_context();
}
private void act_click_double(WebElement element) {
act_mouse_mouve(element);
act_mouse_click_double();
}
private void act_drag_drop(WebElement elementSource, WebElement elementTarget) {
act_mouse_press(elementSource);
act_mouse_release(elementTarget);
}
private void act_drag_drop_offset(WebElement element, int offset_x, int offset_y) {
act_mouse_press(element);
act_mouse_mouve(offset_x, offset_y);
act_mouse_release(null);
}
private void act_key_down(string modifierKey, WebElement element) {
if (element != null) {
act_mouse_mouve(element);
act_mouse_click();
}
act_send_modifier_key(modifierKey);
}
private void act_send_keys(string keys, WebElement element) {
if (element != null) {
act_mouse_mouve(element);
act_mouse_click();
}
act_send_keys(keys);
}
private void act_send_modifier_key(string modifierKey) {
if (modifierKey.Length == 0)
throw new Errors.InvalideModifierKeyError();
char c = modifierKey[0];
if (c != KEY_ALT && c != KEY_CTRL && c != KEY_SHIFT)
throw new Errors.InvalideModifierKeyError();
act_send_keys(modifierKey);
}
private void act_send_keys(string keys) {
foreach (char c in keys) {
if (c == KEY_SHIFT)
_isKeyShiftDown ^= true;
if (c == KEY_CTRL)
_isKeyCtrlDown ^= true;
if (c == KEY_ALT)
_isKeyAltDown ^= true;
}
_keyboard.SendKeys(keys);
}
private void act_mouse_mouve(WebElement element) {
if (element != null)
_mouse.moveTo(element);
}
private void act_mouse_mouve(int offsetX, int offsetY) {
_mouse.MoveTo(null, offsetX, offsetY);
}
private void act_mouse_click() {
_mouse.Click();
}
private void act_mouse_press(WebElement element) {
if (element != null)
_mouse.moveTo(element);
_mouse.ClickAndHold();
_isMouseDown = true;
}
private void act_mouse_release(WebElement element) {
if (element != null)
_mouse.moveTo(element);
_mouse.Release();
_isMouseDown = false;
}
private void act_mouse_click_double() {
_mouse.ClickDouble();
}
private void act_mouse_click_context() {
_mouse.Click(MouseButton.Right);
}
#endregion
}
}
|
STACK_EDU
|
Setting Administrator password(The easy way)
Forgot Windows 10 Local Administrator Password? Remove with Command Prompt
Even if you’re new to text commands on Windows, changing the user password with the net user command is simple.
We’ll show you how to change a Windows password using the command line with this handy method. To change another Windows user’s password via the command line, you need administrator privileges. See how to get admin rights on Windows if you aren’t using an admin account already.
Also, keep in mind that this method only works for local accounts in Windows 10 and It won’t work if you use a Microsoft account to sign in to Windows ; you’ll need to change the password using Microsoft’s web account management page instead. See our guide to securing your Microsoft account for help with this and other security measures. If you see an Access denied message when you try this, make sure that you started the Command Prompt or other command line window as an Administrator.
If you are a system administrator and responsible to manage Windows operating system then you must know about how to reset Windows administrator account password. In some cases, you may need to change or reset the administrator password on Windows 10 operating system.
There are several ways you can reset or change the administrator password on Windows In this tutorial, we will explain how to reset or change the administrator password using command line interface. Windows 10 allows you to change the password of any account using the command-line interface. If you want to change the old password of administrator account follow the below steps:. Open Start menu, search for Command Prompt, right-click on the search result and select the Run as administrator option to open the command-line interface.
Microsoft Windows [Version All rights reserved. Once you are completed, sign out and from your system and sign back in to start using the new password.
However, you must have a Microsoft account for logging into Windows 10 and have access to the alternate email account or phone number you provided when signing up the Microsoft account.
However, this tool is not free. Did this summary help you? Yes No. Log in Social login does not work in incognito and private browsers. Please log in with your username or email to continue. No account yet? Create an account. Popular Categories. Arts and Entertainment Artwork Books Movies.
Cookie Settings. Learn why people trust wikiHow. Download Article Explore this Article methods. Related Articles. Article Summary. Method 1. Understand the different types of administrator accounts. Windows creates a disabled Administrator account automatically in all versions of Windows after XP. This account is disabled for security reasons, as the first personal account you create is an administrator by default. The following method will detail enabling the disabled Administrator account and then setting a password for it.
If you want to change your personal administrator account’s password, open the Control Panel and select the “User Accounts” option. Select your personal administrator account and then click “Create a password” or “Change your password”. Press the. You should see “Command Prompt” appear in the list of search results. Right-click on “Command Prompt” and select “Run as administrator”.
This will enable the Administrator account on the computer. The most common reason for activating the Administrator account is to perform automation work without having to deal with the User Access Control message appearing every time a system setting is changed.
This will allow you to change the Administrator password. Type the password you want to use. Characters will not appear as you type. Retype the password to confirm it. If the passwords do not match, you’ll have to try again. This will disable the Administrator account.
Windows 10 set administrator password command line free. Local admin account password – Windows 10
Change Old Administrator Password on Windows 10 Open Start menu, search for Command Prompt, right-click on the search result and select. Part 1: How to Reset Lost Password on Windows 10 with Command Prompt when Computer Is. Open up Command Prompt. #2. type in “net user Administrator (Password goes here)” (Without the quotes, And put in a password in place of the “(Password goes.
Windows 10 set administrator password command line free
Upgrade to Microsoft Edge to take advantage of the latest features, security updates, and technical support. AdministratorPassword specifies the administrator password ckmmand whether it is hidden in the unattended installation answer file. Xdministrator enables the built-in administrator account, even if a продолжение здесь is not specified in the Windows 10 set administrator password command line free setting. If no values are set for the administrator password and Username is not set to Administratorthe administrator account is disabled.
Both of these settings should be added to the auditSystem configuration pass. For Windows Serverif you run the sysprep command with the generalize option, the passaord administrator account can no longer access any Encrypting File System EFS windows 10 set administrator password command line free files, personal certificates, and stored passwords for websites or network resources.
The built-in Administrator must have a страница and that password must be changed at first logon. Administgator will prevent the built-in Administrator account from having a blank password by default. The default password policy requires the creation of a complex password for all user accounts.
During installation of Windows Перейти на страницуSetup prompts you to configure a complex password. Attempting to configure a non-complex password, either manually or by using a script, such as the net command, will fail. The next time the commane starts, you, or the end user, are prompted for a password. You can automate configuration of the password by creating an answer file to use with Sysprep that co,mand a value for the Microsoft-Windows-Shell-Setup UserAccounts AdministratorPassword windows 10 set administrator password command line free Setup setting.
OEMs and system builders are required to retain the default password policy of their Windows Server подробнее на этой странице. However, corporate customers are permitted to change посетить страницу default password policy.
A corporate customer can configure a non-complex password for the built-in administrator account during an unattended installation by specifying the desired value for Microsoft-Windows-Shell-Setup UserAccounts AdministratorPassword.
For a list of the supported Windows editions and architectures that this component supports, see Microsoft-Windows-Shell-Setup. Skip to main content. This browser is no longer supported. Download Microsoft Edge More info. Table of contents Exit focus mode.
Table of contents. Warning Creating a blank administrator password is a security risk. Note For Windows Serverif you run the sysprep command with the generalize option, the built-in administrator account can no longer access any Encrypting File System EFS -encrypted files, personal certificates, lie stored passwords for websites or network resources.
In this cimmand. Specifies whether the AdministratorPassword is hidden in the unattended installation answer file.
|
OPCFW_CODE
|
Git is a distributed source code control system. It keeps track of modifications added to the source code, and it has functions such as version control function which restores the code of any given time, merge function which incorporates modifications added to the source code by other developers to your own source code, clone functions which copies the source code released by other developers to your working environment, and many other functions that are useful to software development.
Git is a source code control system for development, created by Linus Torvalds who developed Linux. They have a huge amount of source codes, and in order to make itself available to the large-scale software development like Linux where many developers are involved, it is designed to be speedy in operation, easy to coordinate cooperation among developers, and easy in doing development.
At Git, in order to help development with multiple developers involved go smoothly, they adopt architecture which is called “distributed type”. In a distributed source code control system, repository (local repository) gets created within the directory where developers do their work, and synchronizes with other repository as required. Because a repository is provided for each developer/machine, it allows developer to work on the development with minimum interference from other developers.
Changes that are added to the source code by a developer can be sent to other repository (remote repository) through a structure called “push” or “pull”. Push is used to send incremental information between remote repository and local repository to the remote repository, and by using this, you can synchronize between the remote repository and the local repository. Pull is used to bring incremental information between remote repository and local repository. You can use this when you want to apply the changes added to the source code by other developers to the source code which you’re working on.
Also, creating local repository in order to take out a file from a certain remote repository and add changes to the file and the directory in your own space is called clone. The most general development style is that when you want to add a new change to the source code made by other developer, you use clone to obtain the file, then commit the change and push or pull to send and receive the change.
At OSDN, we provide the following functions to support development which uses Git.
All these functions can be used free of charge once you have an OSDN account. There are no restrictions to the size and number of repositories you can create.
At Development support system PersonalForge, you can use Chamber which provides functions such as personal repository at Git, file up loader, and Wiki. You can make infinite Chambers and upload any files to release.
At PersonalForge, there’s a repository browser originally developed by OSDN. A list of files of any branch, a history of commits, and a list of branches and tags can all be browsed on web.
It is also provided with a function that allows forking from other Git repository, therefore you can create a new repository by forking not only from repositories within OSDN, but also from any released Git repository.
Project function designed for open source software development provides source code control systems. Source code control systems such as Git, Subversin, Mercurial, Bazaar, CVS are available. With Git, Subversion, and Mercurial, we also provide repository browser which we developed originally.
There are also other functions designed for projects, such as Wiki and file release (up loader), ticket, news, forum (bulletin board), and mailing list.
Each development project is provided with a free of charge shell server which can be used for testing softwares and structuring Web sites. You can log into the shell server with SSH or by using MySQL data base. Shell server is installed with commands such as git, svn, and hg, and by using them you can copy files to the shell server.
|
OPCFW_CODE
|
PHP action form is not working. root of the action form shows 404 even when the file is there
It worked perfectly when the same .PHP files (the one with the form and the action one) were inside the some folder. But for security reasons I need the 'action' file outside the folder with another path.
I'm writing the code correctly (I guess):
action="../connect/dashboard/admin/register_dw_user.php"
It doesn't make sense to me, it's clearly that I'm missing something. But I don't know what.
And the connect.php file to link the MYSQL DB is working perfectly, and it's also inside the same 'connect' folder. I'm using:
include '../connect/lib/connect.php';
That made clear that the 'connect' folder is just one above the 'public' one. But for some reason isn't working with the action form.
This is what I tried and didn't worked:
action="/connect/dashboard/admin/register_dw_user.php"
action="../connect/dashboard/admin/register_dw_user.php"
action="../../connect/dashboard/admin/register_dw_user.php"
action="../../../connect/dashboard/admin/register_dw_user.php"
action="connect/dashboard/admin/register_dw_user.php"
Any idea? PLEASE! I need Help!
You should always use absolute pathes...
But in this case what would be the absolute pathes.. is not the: ../../ making it absolute path?
Are you setting any rules in a htaccess file?
For files (include) use DIR constant like include __DIR__ . '/../../some/path'; . Background: relative pathes are relative to the current working directory, which can be different from the directory in which the actual script resides.
@imposterSyndrome yes :
RewriteEngine On
RewriteBase /
ErrorDocument 404 /404.php
You might want to consider using __DIR__
@Remy but how can i use the DIR as an action in a html form?
@LarsStegelitz Ok, thank you! I have a better point of this. But how can I use the DIR as an action in a html form?
Relative URL are relative to the base URL (please search this term on the internet and read it there)
You should always use absolute paths especially when you're accessing files from different folders
For offline usage
This is if you are not even trying to render php,just testing if the submit button calls the right file
For example say your form.htm is located at [Drive]/myname/admin/data/form.htm
and your processor.php is at [SameDrive]/myname/admin/data/anotherdata/processor.php
to call that php file from htm you set your action to action="/myname/admin/data/anotherdata/processor.php"
For localhost or server[if you are on server]
usually the path is either "www/" or "www/root/"
say your form.htm is located at www/root/admin/data/form.htm
and your processor.php is at www/root/admin/data/anotherdata/processor.php
you can now use action="/admin/data/anotherdata/processor.php"
Alternatively
You could add url before the path like:
action="https://myurl.com/admin/data/anotherdata/processor.php"
I tried but it didn't worked..I don't know what im doing wrong..
The path can be found in properties check there....also why not move the files to the same folder so you don't have to worry about the path
Perhaps try adding the URL before the path
|
STACK_EXCHANGE
|
Here’s the scenario: A user closes a subview by clicking on the “X” button its window, messing up your order of rule execution AND the demo of your work to the client.
Not that that really happened (oh yes it did).
Well, I’m not angry about it or anything (oh yes I am).
I spent several hours running through the entire chain of events and possible actions. And all of them worked correctly in EVERY instance — before I thought to test what would happen if the user closed the subview using that blasted “X” button in its window.
And what I found were three things: (1) clicking on that “X” control will abort normal rule execution, which (2) reproduced the behavior we saw during the demo, and (3) the window controls’ events are not captured by K2 SmartForms.
Well. Not without a little help.
Scenario: Deeper Dive
Given a form containing two views — an item view with some controls for actions, and a list view that takes orders from the action view. The item view has rules which call up a third view as a subview, which closes after some work is done.
The subview has no actionable controls, with exception of the standard border controls that appear around any subform or subview. Those controls are a maximize button (resembling a box) and a close button (resembling an “X”) which mimic standard Windows controls.
The rules were essentially that a click of a button on the item view would start some operations which would eventually invoke the subview and terminate in re-initializing the item view, causing the list view to refresh (I’m sort of over-simplifying).
The user closing the subview with that “X” control caused the rule execution to abort, leaving the views in their current state, and giving the client the impression that the operations didn’t work.
So I needed a way for the form to reinitialize the item view when the subview closes.
Now, you can create a rule in the form that listens for the “When a subview is closing” event, but it won’t fire. That’s because the close event, when triggered by the border control, doesn’t have a handler that K2 SmartForms knows about…
…so it’s up to us to make one.
- On the subview, create a hidden button, and create a rule for that button’s click event and assign to it the “Close a subview or subform” action (it’s listed under “Subview Interaction” in the Rule Designer). Save it and check the subview back in.
Now your form has a close event to listen for.
- All that’s left is to create a rule on your form for the event “When a subview is closing,” and assign to it whatever action is supposed to happen in your normal ruleset. In my case, it was to execute the Initialize() method on the action view. Save, check it in, and test.
You should find that the form will now detect the close event triggered by the “X” button and will fire the actions you’ve specified.*
What Have We Done?
Having followed these instructions, I feel it’s important to review what we have done and what we haven’t.
In this post, we’ve discovered that controls on the border of a SmartForms window are NOT monitored by SmartForms event listeners. This is a really significant discovery.
In the case of the “X” (close window) click, we’ve learned we’re able to get around the inability to monitor the click event by creating a control with an event that mimics it. We’re able to handle that event in the form to produce a desired result.
*We’re not able to directly handle the “X” click; we’re able to handle the action of the subview closing. The net effect is the same, but we’re addressing the result, not the actual cause.
It’s indistinguishable to the user, but an important distinction behind the scenes.
X Gon Give it to Ya © Sony/ATV Music Publishing LLC, Kobalt Music Publishing Ltd., Warner/Chappell Music, Inc
You must be logged in to post a comment.
|
OPCFW_CODE
|
When website builders came into the scene, they completely changed the game in website creation. Earlier on, the only way to own a website was to build one from the ground up. This meant that web developers had to be engaged; alternatively, one had to learn to code.
Today you can bring a website to live without hiring a single Web Development Agency. You do not have to code even a single line either. All this is possible because of the miracle that is website builders. These web hosting solutions come with pre-built templates and a drag-and-drop functionality that allows one to build a professional looking website in a day or two.
What Is A Website Builder?
To some extent, website builders are quite impressive. They totally substitute coding and design knowledge such that even someone who doesn’t possess any web building knowledge can still create a professional website. Therefore, website builders take CSS, HTML and all other coding languages and throw them out of the window. In their place, they bring in a set of templates, pre-set code, and predetermined functions.
Weebly, WIX, Squarespace, and WYSIWYG are some of the readily available website builders. Through them, anyone can follow a step-by-step wizard to build a website easily, cheaply and the site will still look pretty decent.
What Does A Web Designer Do and How Is to Different?
Professional web developers do not use website builders. They instead do the hard coding work that builds your site from scratch. The site is built offline and then uploaded when complete.
Here’s an analogy that delineate between website builders and professional designers. During a wedding, people want to capture every beautiful aspect of their momentous occasion. To achieve they can simply have one of their friends a DSLR camera and ask them to click their heart away. This would be tantamount to engaging a website builder. On the other hand, the wedding couple may decide to hire a professional photographer who would artistically and beautifully capture the wedding. This is similar to hiring a professional web designer to build your website. It’s obvious that the results here would be outstanding and unique.
Advantages and Disadvantages of Website Builders
For obvious reasons website, builders are appealing to many because:
- They are easy. The step-by-step wizard and the drag-and-drop approach used by website builders do away with the daunting task of website construction.
- They are user-friendly. You do not have to know or understand the code for you to get your website on a website builder. All you do is combine the very best of the prebuilt templates available to come up with a visually appealing website.
- They are relatively cheap. The entire premise upon which website builders exist is that website creation should be easy and cheaper. Therefore, the rates are competitively priced across all website builders.
If you closely examine any one of the superior websites, you’ll realize that none of them relies on website builders. This is because they are obvious advantages of using a website builder.
Some of the downsides include:
- The numerous limitations in coding. For instance, it is impossible to export code. One can’t even edit that CSS or HTML files in the builder. This makes it impossible to switch to other platforms.
- The storage is also limited. One can’t exceed the allotted bandwidth and storage for their site. One has to pay for more storage as the site grows.
- Some features are restricted depending on the package you get. As such, a website may be limited to personal use and only have a certain number of pages
- The prebuild templates are also used by thousands of other users hence this makes your website lose its uniqueness.
Advantages and Disadvantages of Professional Website Designer
Hiring a professional designer has the following advantages:
- You get a professionally designed website. Everything on the website will work perfectly. Such aspects as user interface and experience will also be on point. Intrinsically, the efficiency and effectiveness of your website will be guaranteed.
- Your website will be unique. Remember everyone else will be using the same templates from website builders. Therefore, every other site will look the same. However, when you hire professional Web Development Services, your site is built from scratch using its own code. The design too will be original and inspired by your goals and business strategy.
- You enjoy optimal SEO benefits. While most website builders have minimal tools for SEO, professional Web Design, and Development Services come packed with the finest keyword research and planning tools to fully optimize your site for search engines.
- The site will be fully optimized for visitor conversion so as to generate leads and create more sales.
- One of the prime benefits of hiring a web development agency is that you enjoy continued tech support. Therefore, when problems arise you know exactly who to call.
Professional designers do have their disadvantages too which include:
- They are expensive, to begin with. The upfront cost for engaging a Web Development Agency may be high especially for a firm running on a tight budget.
- Launching a hard-coded site may take some time hence professional web designers are not ideal for a short project with a fast turnaround time.
While there many benefits to using website builders, it is evident you stand to benefit more from hiring an expert web designer with the proper technical skills to give you an outstanding website.
|
OPCFW_CODE
|
The Agency for Digital Italy (AgID) has published the Italian INSPIRE Registry, created using the Re3gistry software.
Please check the news related to the HTTP/HTTPS URIs for the INSPIRE registry.
The INSPIRE registry service 6.4 and the Re3gistry software 1.3 have been published.
The JRC Re3gistry team
Dear MIWP-6 members,
We are gathering feedbacks, comments and suggestion for the future versions of the Re3gistry software.
We would like to ask for your help in order to proceed with the right choices for the feature to be included in the next releases.
Whether you are experienced on the Re3gistry software or not, you can give your helpful suggestions.
Please fill our quick survey at http://europa.eu/!Bn84Ct
It would be great if you could also help us in spreading this survey by passing on the information to other people in your networks, working (or planning to work) with registers.
Thank you in advance,
The JRC Re3gistry team
We are happy to announce that release 6.3 of the INSPIRE registry service has been published at http://inspire.ec.europa.eu/registry
The changelog related to this version is available here: https://ies-svn.jrc.ec.europa.eu/projects/registry-development/wiki/service_R6-3_documentation#2-Version-details
We are happy to announce release 1.2 of the Re3gistry software!
This new version contains several improvements and new features, some of which are based on feedback received from users of earlier versions of the software. The most important improvements and new features are:
- We have updated the documentation to make the Re3gistry easier to install, configure and customise. This release now also contains two complete sets of examples: One that allows to replicate a "registry service" similar to the INSPIRE registry service (with some example data) and an additional example that contains more generic data and a neutral web interface that can be easily customized.
- Following user feedback, the Re3gistry now also supports a new authentication method in addition to ECAS, which was suitable to be installed only on servers trusted by the European Commission. The new authentication method (based on Apache SHIRO) allows the system to be installed on any server.
- The Re3gistry now includes a new web application that provides a simple method (based on the JSON format) to set up and customise the service's web user interface (HTML representation).
- It is now possible to manually start the data export of the full content of the database from the Re3gistry administration interface
Questions, issues for discussions, bugs and suggestions for new features can be submitted in the issue tracker (https://ies-svn.jrc.ec.europa.eu/projects/registry-development/issues).
Thank you in advance for helping us improve the Re3gistry software.
The JRC Re3gistry Team
We are happy to announce that release 6.2 of the INSPIRE registry service has been published at http://inspire.ec.europa.eu/registry
The changelog related to this version is available here: https://ies-svn.jrc.ec.europa.eu/projects/registry-development/wiki/service_R6-2_documentation#2-Version-details
We are happy to announce that release 6.1 of the INSPIRE registry service has been published at http://inspire.ec.europa.eu/registry
The changelog related to this version is available here: https://ies-svn.jrc.ec.europa.eu/projects/registry-development/wiki/service_R6-1_documentation#2-Version-details
Dear discovery service contact points, MIG representatives and NCP’s,
we would like to inform you about a number of changes in how we develop / maintain the INSPIRE geoportal (including its harvesting and validation components) and how we communicate with Member States about any questions or issues related to the harvesting and validation of metadata form the discovery services in the Member States.
In the past, the INSPIRE validator, which is operated as part of the Geoportal harvester, was updated whenever new issues were identified or when the time schedule of the legal obligations required this.
In order to guarantee a more stable behaviour over longer time periods, we decided to switch to a managed release cycle, accompanied by a clear list of the changes and by a timely communication before the release. Relevant communications will be published through the MIG collaboration space, and contact points will receive an automatic update e-mail (see below).
In order to streamline and keep better track of the communication between the geoportal team and the contact points for the discovery services in the Member States, we will start using a dedicated project on the MIG collaboration space (including a wiki, issue tracker and news section) as the main communication channel instead of e-mail. The project will be private and will be open to all registered discovery service contact points as well as interested INSPIRE NCPs or MIG representatives.
In order to get access to this Geoportal project on the MIG collaboration space, please send an e-mail to firstname.lastname@example.org. If you have never used the MIG collaboration space for other projects, please also send us your ECAS login.
If you have any questions, please let us know.
The JRC INSPIRE team.
We are happy to announce a new candidate release of the INSPIRE registry and registers. The new candidate release contains two additional registers with the layers and enumerations defined in the INSPIRE implementing rules on interoperability of spatial data sets and services as well as an updated HTML user interface.
Before officially publishing these, we would like to invite you to participate in the testing of both the content and new user interface of the service.
Instructions for testing as well as the draft documentation for the new INSPIRE registry service are available in a specific sub-project in the MIG collaboration space.
The testing will run until Monday, 30 November 2015.
If you are interested in testing new release candidates of the software or service, please register in the MIG collaboration space and send an e-mail to email@example.com.
Questions, issues for discussions, bugs and suggestions for new features can be submitted in the issue tracker. Note that in order to submit new issues, you have to be signed into the collaboration space and have been assigned to the testing project. Please also check the existing issues before submitting a new issue.
Thanks in advance for helping us improve the INSPIRE registry and registers as well as the Re3gistry software.
The JRC Registry Team
Also available in: Atom
|
OPCFW_CODE
|
The BCS Appathon Challenge at Greenwich
The BCS Appathon set out to engage as many people as possible in the UK during one hour in programming an app for their mobile phones. Just over 50 people took part at Greenwich: students, staff, families and members of the general public.
The Appathon's aim was to get participants to develop their own simple app during a one hour highly interactive workshop. The workshop continued on allowing Appathon attendees to work developing apps of their own design. In the concluding session, participants had an opportunity to present their apps in a “show and tell” activity.The success of the Appathon has encouraged staff in the department to think of how it could be employed with first year students who are active smart phone users but find programming difficult and lack confident when starting to learn programming. By putting first year students through the Appathon experience, we hope to create a large pool of student ambassadors who can work with us in taking the Appathon to local schools and using it as a taster event at our open days.
Bateman, K. (2015) ‘BCSWomen hosts Guinness World Record attempt with marathon app-building event.’
Computer Weekly, 16th June 2015. Available at: http://www.computerweekly.com/news/4500248183/BCSWomen-hosts-Guinness-World-Record-attempt-with-marathon-app-building-event (Accessed: 22 April 2016).
BCS Royal Charter (1984) Available at: http://www.bcs.org/upload/pdf/royalcharter.pdf (Accessed: 22 April 2016).
BCS Student Chapters webpage. Available at: http://www.bcs.org/category/18176 (Accessed: 22 April 2016).
Dee, H. (2015a) ‘BCSWomen Appinentor Family Fun day workshop.’ Available at: http://www.hannahdee.eu/appinventor/ (Accessed: 22 April 2016).
Dee, H.( 2015b) ‘Train the trainers.’ [exercises and videos] Available at: http://users.aber.ac.uk/hmd1/appinventor/ttt.html (Accessed: 22 April 2016).
Dee, H. (2015c) ‘Android Programming “Family Fun Day” using AppInventor.’ Available at: http://www.hannahdee.eu/appinventor/appinventor_handout2.0.pdf (Accessed: 22 April 2016).
Greenwich Connect Blog. Available at: http://blogs.gre.ac.uk/greenwichconnect/ (Accessed: 22 April 2016)
MIT App Inventor “About Us” webpage. (2012-2015) Available at: http://appinventor.mit.edu/explore/about-us.html (Accessed: 22 April 2016).
Scratch webpage. [Scratch is a project of the Lifelong Kindergarten Group at the MIT Media Lab.] Available at: https://scratch.mit.edu/ (Accessed: 22 April 2016).
Video Record of the BCSWomen Appathon at the University of Greenwich on 13 June 2015 (2015) Recorded by Taylor, Conrad. Available at: https://www.youtube.com/watch?v=aJGJ-ofaRxc&feature=youtu.be (Accessed: 22 April 2016).
Wicks, A. (2014) Why is programming hard? [Series of videos on YouTube.] Part 1 available at: https://www.youtube.com/watch?v=BRaBWIcAqlQ (Accessed: 22 April 2016).
- There are currently no refbacks.
|
OPCFW_CODE
|
In our first blog describing our microservices journey, Victor went over our reasons to move to a microservices architecture. In this article I’ll describe how we started development on our first microservices and making some upfront decisions on technology.
What to build first
The first thing we had to do was to decide what we wanted to build as our first microservice. We went looking for a microservice that can be used read only, consumers could easily implement without overhauling production software and is isolated from other processes.
We’ve ended up with building a catalog service as our first microservice. That catalog service provides consumers of the microservice information of our catalog and its most essential information about items in the catalog.
By starting with building the catalog service the team could focus on building the microservice without any time pressure. The initial functionalities of the catalog service were being created to replace existing functionality which were working fine.
Because we choose such an isolated functionality we were able to introduce the new catalog service into production step by step. Instead of replacing the search functionality of the webshops using a big-bang approach, we choose A/B split testing to measure our changes and gradually increase the load of the microservice.
Choosing a datastore
The search engine that was in production when we started this project was making user of Solr. Due to the use of Lucene it was performing very well as a search engine, but from engineering perspective it lacked some functionalities. It came short if you wanted to run it in a cluster environment, configuring it was hard and not user friendly and last but not least, development of Solr seemed to be grinded to a halt.
Elasticsearch started entering the scene as a competitor for Solr and brought interesting features. Still using Lucene, which we were happy with, it was build with clustering in mind and being provided out of the box. Managing Elasticsearch was easy since there are REST APIs for configuration and as a fallback there are YAML configurations available.
We decided to use Elasticsearch since it provides us the strengths and capabilities of Lucene with the added joy of easy configuration, clustering and a lively community driving the project.
Which programming language to use
What we’ve noticed during researching various languages is that almost all actions done by the catalog service will boil down to the following paradigm:
- Execute a HTTP call to fetch some JSON
- Transform JSON to a desired output
- Respond with the transformed JSON
Actions that easily can be done in a parallel and asynchronous manner and mainly consists out of transforming JSON from the source to a desired output. The programming language used for the catalog service should hold strong qualifications for those kind of actions.
Another thing to notice is that some functionalities that will be built using the catalog service will result into a high level of concurrent requests. For example the type-ahead functionality will trigger several requests to the catalog service per usage of a user.
To us, PHP and .NET at that time weren’t sufficient enough to us for building the catalog service based on the requirements we’ve set. Eventually we’ve decided to use Node.js which is better suited for the things we are looking for as described earlier. Node.js provides a non-blocking I/O model and being event driven helps us developing a high performance microservice.
Microservice A <> Microservice B
The beauty of microservices and the isolation it provides, is that you can choose the best tool for that particular microservice. Not all microservices at Coolblue will be developed using Node.js and Elasticsearch. All kinds of combinations might arise and this is what makes the microservices architecture so flexible.
Even when Node.js or Elasticsearch turns out to be a bad choice for the catalog service it is relatively easy to switch that choice for magic ‘X’ or component ‘Z’. By focussing on creating a solid API the components that are driving that API don’t matter that much. It should do what you ask of it and when it is lacking you just replace it.
With these fundamental decisions in place we’ve ended up with a pretty big challenge ahead. Not only we’ve had to built the first microservice within Coolblue, but also gain knowledge in a new programming language and a new datastore. If that is not enough, all elements needed to be deployed onto a whole new environment, an environment which we wanted to have in a configuration management system.
In our upcoming articles we will elaborate on all these challenges. Describing our continuous deployment, API decisions and many more. Sign up for the newsletter to stay up to date.
|
OPCFW_CODE
|
In English we use the verb “to be” in three main ways.
I) The “is” of existence.
Sometimes we assert the existence of a certain kind of thing.
Cats exist, at least one cat exists, some cats exist, there is a cat (x)(Cx)
Unicorns do not exist ~(x)(Ux)
II) The Is of predication.
Sometimes we attribute properties to a thing or assign an individual or group of individuals to a class. For this kind of use of “to be” we don’t need a special sign since it is built into the predicate letter.
Alex is smart. Sa S – is smart
Moab is a whale. Wm W – is a whale
III) The “is” of identity
But we also use the verb “to be” to represent the relationship of identity. For example, we can say “Mark Twain is Samuel Clemens” meaning that the person named by the name “Mark Twain” is identical to the person named by “Samuel Clemens.” This is the easiest type of identity statement to represent since it contains only proper names and the “is” of identity. There is some debate about exactly what information is contained in such a sentence. But in the logical system we are studying here we have an identity predicate (represented by “=”), that takes two objects, the two objects about which identity is being predicated (Mark Twain and Samuel Clemens).
In our system we would translate each proper name using a constant and use the identity predicate (=) to claim that the individuals named by each constant is identical. m = c Identity statements containing only proper names are the most particular kind of sentences, since they refer to individuals. However, sometimes we want to abstract. For example, we might want to say that every individual is identical to itself. To do this we would use the identity predicate (=) combined with a variable and a quantifier.
PROPERTIES OF IDENTITY The identity relation has several properties. It is
To state these rules we need to write more abstract formulas, using variables and quantifiers.
Notice that every individual is identical to itself. If there were a finite number of individuals in the universe we could assert these truths one by one?
m = m
s = s
However, since this relation holds between everything in our universe of discourse, we can use the variable x and a quantifier to state the general rule:
(x)(x = x) Everything is identical to itself
II) SYMMETRY (Or Commutativity)
Note that if Mark Twain is identical to Samuel Clemens, Samuel Clemens is also identical to Mark Twain (i.e. m = s s = m). We can state this abstractly as follows:
|
OPCFW_CODE
|
Are you an energetic and passionate analytical engineer? Do you have an interest in working with a global company in an industry that impacts everyday lives? Are you interested in the opportunity to work with multiple technologies including Diesel, Natural Gas, and Battery Electric Trucks?
If so, we are currently seeking an Associate Engineer, 1D X-Domain Analyst to join our Chassis Analysis and Advanced Calculation team in Greensboro, NC.
Read on for more details!Who We Are
Our team, Advanced Calculation, is an important part of the Vehicle Engineering function for Volvo Group Trucks North America. Advanced Calculation is a dynamic team of highly competent engineers who specialize in various CAE disciplines. We are responsible for ensuring functional requirements are met for durability & reliability, thermal management, cooling performance and other mechanical systems for heavy duty trucks developed for North America.
Together with us, you will be part of a global and diverse team of highly skilled professionals. We have a strong culture based on our company values, which are central our work. We believe in a work environment where:
What You Will be Doing
- We constantly strive for outstanding Performance.
- We are obsessed with Customer Success.
- We initiate Change to stay ahead.
- We willingly place our Trust in each other.
- We have a huge Passion for what we do.
As a 1D X-Domain Analyst, you will be responsible to compile and present 0D/1D complete vehicle simulation results to system responsible engineers, project management and testing teams. You will work closely with various engineering teams i.e. chassis, cab, electrical, powertrain, software and testing teams to perform simulation tasks to meet North American project requirements.
The position entails a high level of collaboration with cross-functional engineering teams working on future products, analyzing complex thermo-mechanical systems, electrical, controls and ensuring functional requirements are met by providing design recommendations.
You will be expected to provide professional expertise as a 1D analyst and be able to work closely with various engineering stakeholders and perform 0D/1D systems analysis for the North American Truck projects. In addition, you are expected to support developing and maintaining vehicle model libraries.
Who You Are
- Build 1D system models to support complete vehicle cross-domain simulations
- Help define System/Sub-System functional requirements to meet complete vehicle targets
- Work with system responsible engineers in Chassis, Cab, Electrical, Powertrain and SW to perform complete vehicle system simulations
- Provide complete vehicle models to sub-system engineers to support individual system development
- Support engineering teams with integrating sub-system models into the complete vehicle simulation models
- Work with testing teams to perform DAQ testing
- Analyze field, test & simulation data to help calibrate 1D models
- Help develop and maintain model libraries and repositories
- Work with global teams to improve methods and model archives
You are a talented and customer-focused individual who can effectively communicate with component stakeholders across the vehicle engineering organization, and ensure that the components and systems meet various design targets (e.g. temperature, flow, pressure, energy consumption etc.) in terms of performance, functionality and life expectancy.
- Bachelor’s Degree in Mechanical Engineering
- 2+ years of experience in thermo-fluid and/or mechanical systems
- Strong background in fundamentals of fluid dynamics, heat transfer, thermodynamics, and mechanics
- 1D simulation tools such as GT-SUITE, Kuli or Amesim, Matlab/Simulink, etc.
- Experience with validation and experimental measurements using Design of Experiments
- Experience with PDM tools (Windchill, KOLA, PROTOM etc.) for building variant combinations and BOM’s
Compensation & Benefits
- Experience with modeling and simulating automotive cooling systems, thermal management, and mechanical/control systems
- Experience with Python or other similar data analysis tools
- Controls and software experience
- Experience with Git and version control methods
Are you ready to join our team and shape the future of the transportation industry together with us?
- Competitive medical, dental and vision insurance.
- Generous paid time off including paid caregiver and parental leave policies.
- Competitive matching retirement savings plans.
- Working environment where your safety, health and wellbeing come first.
- Focus on professional and personal development through Volvo Group University
- Programs that make today’s challenging reality of combining work and personal life easier.
- Want to learn more about these programs? Continue your exploratory journey with us here.
|
OPCFW_CODE
|
How to restore GRUB2 using an Ubuntu Live CD or Thumb Drive 09/02/2010Posted by muyiscoi in Guides.
Tags: grub, installaton, tips, tutorial
If you are a tinkerer like me, you will no doubt run into some problems with grub at one time or another.
Even though you do not tinker, you might still have a problem with grub especially if dual-boot your system with windows. In that case, the windows boot-loader overwrites GRUB in the Master boot record (MBR) (if you install Ubuntu first) thereby rendering your Linux partition not bootable. It has been a major bottleneck for a lot of Ubuntu users when they can no longer boot into their desktop and most times are forced to reinstall.
It is however, relatively easy to restore GRUB on your computer irrespective of how you lost it in the first place.
The only requirements are that you still have a healthy installation of Ubuntu on your machine and a ready live CD or a USB thumb drive with Ubuntu loaded on it.
I will assume that the readers of this post have some level of knowledge on how to get certain things done on Ubuntu so i will not be too specific in some areas. if anything is unclear, you can ask it in the comments and i will be glad to clarify.
First, you have to boot the live CD. If you are using a thumb drive, also boot from it.
After booting, you have to determine which of your hard-disk partitions is the root (/) partition. You can do this by typing
sudo fdisk -l
in the terminal.
Note: If you only have one partition or you already know the address of your root partition, you can skip this step.
The output of the above command on my computer is as shown below
Disk /dev/sda: 250.1 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xb21e563c Device Boot Start End Blocks Id System /dev/sda1 * 1 15147 121664576 7 HPFS/NTFS /dev/sda2 15148 15427 2249100 82 Linux swap / Solaris /dev/sda3 15428 30401 120277697+ f W95 Ext'd (LBA) /dev/sda5 15428 17251 14649344 83 Linux /dev/sda6 17252 30401 105627343+ 83 Linux
Admittedly, this can be a little confusing and might not really tell you which of your partitions is the root partition. Most of the time, what i do is to open up the file manager (nautilus) and check for the partition that resembles my root partition either as a result of the size or maybe a label (if you’re d kinda person that does that). i will then mount that partition from nautilus. After that, i go to the terminal and type the following
$ cat /etc/mtab
Amongst all the text that will be printed, the one containing the address of your root partition will be there. This is the output from my own system. Note that this will not show up if the partition is not mounted so ensure that only the root partiton is mounted so u dont get confused. alternatively, you can use gparted to also know the partiton address of the root partition.
/dev/sda5 /media/a2455o3f539io43pofp342 ext4 rw,errors=remount-ro 0 0 proc /proc proc rw,noexec,nosuid,nodev 0 0 none /sys sysfs rw,noexec,nosuid,nodev 0 0 none /sys/fs/fuse/connections fusectl rw 0 0 none /sys/kernel/debug debugfs rw 0 0 none /sys/kernel/security securityfs rw 0 0 none /dev devtmpfs rw,mode=0755 0 0 none /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0 none /dev/shm tmpfs rw,nosuid,nodev 0 0 none /var/run tmpfs rw,nosuid,mode=0755 0 0 none /var/lock tmpfs rw,noexec,nosuid,nodev 0 0 none /lib/init/rw tmpfs rw,nosuid,mode=0755 0
The line showing my root partition is highlighted in bold on d text above. From this, you can deduce the address of your root partition. I can see that mine is /dev/sda5. Once this has been done, the next step is to mount that partiton in a more easily accessible place. Ubuntu will by default mount the partition according to its UUID (Universally Unique Identifier) in the /media folder as you can see above. However, this makes for a very long file path and makes it very easy to make a mistake.
Go to nautilus again and this time, unmount the previously mounted partition.
Then, use the terminal to mount that partition in the /mnt folder.
In my case, i will use the following command
$ sudo mount /dev/sda5 /mnt
Replace /dev/sda5 with a appropriate address for your root partition gotten above.
after this, it is time to install grub2. To do this, use the following command
$ sudo grub-install --root-directory=/mnt /dev/sda
This command should not be edited if you followed all the steps above. this installs grub2 in the MBR of the Harddisk. This will take a short while after which a confirmatory message will come up.
That is it. To confirm your settings, type in
$ sudo update-grub
in the terminal. However, even if this gives an error message, most of the time, the installation is already done. If you receive an error message though, you might not see other operating systems on your Computer especially if they were installed after the Ubuntu installation. To remedy this, boot into your freshly repaired Ubuntu and run the above command from there. That should sort it out.
So, that’s it. I hope this helps someone out there get out of a bind. Any further questions can be posted on the comments.
Source: Ubuntu Wiki
|
OPCFW_CODE
|
For which partial orders is the Axiom of Choice necessary to prove the existence of ultrafilters?
This question is about ultrafilters on partial orders. Using the Axiom of Choice, one may prove that every partial order has an ultrafilter on it. For certain partial orders, some form of AC is known to be necessary, but for other partial orders this is not so:
Example 1: (ZF) If $P$ is atomic, then it has an ultrafilter.
Example 2: (ZF) If $P = \omega^{<\omega}$ is the set of all finite sequences of integers, ordered by extension, then it has an ultrafilter. Any $r \in \omega^\omega$ can be used to define an ultrafilter on $P$, namely the set of all finite initial subsequences of $r$.
My question is whether we can determine that the statement "there exists an ultrafilter on $P$'' is independent of ZF just by looking at the combinatorial properties of $P$ (without needing to go through a forcing argument to construct a symmetric model).
I don't expect an exact characterization. But I would love to know a reasonably general sufficient condition for implying that ZF is not strong enough to prove the existence of an ultrafilter on $P$.
My motivating example:
Let $P$ be the set of all dense subsets of $\mathbb R$, ordered by inclusion. I came across this partial order recently while thinking about a problem in topology, and I'd like to know whether ZF alone proves that this partial order admits an ultrafilter.
I strongly suspect that the answer is no (though I have not yet tried to work out the details). Right now, the only way I know to approach the problem is to build a symmetric model of ZF. It seems to me that it should be possible, somehow, to know beforehand, just by looking at $P$, that something beyond ZF is needed to get an ultrafilter.
A caveat:
In my question above, we should think of $P$ as the unique partial order satisfying some definition. In particular, I want to allow that $P^M$ and $P^N$ can be two different sets if $M$ and $N$ are two different models of set theory. (For example, the phrase "the dense subsets of $\mathbb R$, ordered by inclusion" defines a partial order, but the set satisfying this definition can be made larger by forcing.)
This point is perhaps pedantic, but also pertinent. To see why, suppose $P$ is a partial order in some model $M$ of ZFC, and suppose $N \supseteq M$ is a model of ZF. Then $P \in N$, $P$ is a partial order in $N$, and, no matter what, $P$ has an ultrafilter in $N$. The proof is easy: fix $U \in M$ such that $M \models$ "$U$ is an ultrafilter on $P$'' and then observe that the statement "$U$ is an ultrafilter on $P$'' is absolute to larger models of set theory.
Example 3: (ZF) If $P$ is a constructible partial order (i.e., if $P \in L$) then $P$ has an ultrafilter. In fact, any $U \in L$ such that $L \models$ "$U$ is an ultrafilter on $P$" is an ultrafilter on $P$.
Have you looked at Herrlich's book, The Axiom of Choice?
@AsafKaragila: Yes, a bit, but I can't seem to find any theorems of this kind in there. He mentions at the top of page 58 that some partial orders have ultrafilters on them in ZF, but there seems to be no attempt to characterize those partial orders.
A sufficient criterion: if the set of nodes of $P$ is well-orderable, then it has an ultrafilter. This generalizes your example 3.
|
STACK_EXCHANGE
|
Nomad nodes remove each other services from Consul
I got two nodes Nomad cluster along with a Consul instance so that the jobs can register services to connect to.
However, the services keep getting synced and deregistered. Here is what I have from the Consul logs:
2021-01-26T14:49:59.174Z [INFO] agent: Synced check: check=_nomad-check-dc23801467b8a65a4fd82311c2606724a180065c
2021-01-26T14:50:00.072Z [INFO] agent: Synced check: check=_nomad-check-1783c554d9ee0a25d52532f4178c392e931e4bb1
2021-01-26T14:50:04.511Z [INFO] agent: Synced service: service=_nomad-task-e8d2b77b-3bf5-96c1-8323-63b6151e2cf3-lb0-lb0-admin-admin
2021-01-26T14:50:09.962Z [INFO] agent: Deregistered service: service=_nomad-task-e8d2b77b-3bf5-96c1-8323-63b6151e2cf3-lb0-lb0-admin-admin
2021-01-26T14:50:34.554Z [INFO] agent: Synced service: service=_nomad-task-e8d2b77b-3bf5-96c1-8323-63b6151e2cf3-lb0-lb0-admin-admin
2021-01-26T14:50:39.984Z [INFO] agent: Deregistered service: service=_nomad-task-e8d2b77b-3bf5-96c1-8323-63b6151e2cf3-lb0-lb0-admin-admin
2021-01-26T14:51:04.589Z [INFO] agent: Synced service: service=_nomad-task-e8d2b77b-3bf5-96c1-8323-63b6151e2cf3-lb0-lb0-admin-admin
2021-01-26T14:51:10.009Z [INFO] agent: Deregistered service: service=_nomad-task-e8d2b77b-3bf5-96c1-8323-63b6151e2cf3-lb0-lb0-admin-admin
Both nodes are started with the same configuration. However when I look at the logs at TRACE level, I have the following:
NodeA:
2021-01-28T15:58:55.519+0100 [DEBUG] consul.sync: sync complete: registered_services=3 deregistered_services=1 registered_checks=0 deregistered_checks=0
NodeB:
2021-01-28T15:58:59.037+0100 [DEBUG] consul.sync: sync complete: registered_services=1 deregistered_services=3 registered_checks=0 deregistered_checks=0
Indeed, NodeA has got 3 jobs running while NodeB got 1. It seems both nodes is reverting the changes made by the other one.
Name Address Port Status Leader Protocol Build Datacenter Region
NodeA <IP_ADDRESS> 4648 alive false 2 1.0.2 us1 us
NodeB <IP_ADDRESS> 4648 alive true 2 1.0.2 us1 us
Did I miss something in my configuration? How to prevent this?
This behavior is actually displayed in the documentation. I just overlooked it:
An important requirement is that each Nomad agent talks to a unique Consul agent. Nomad agents should be configured to talk to Consul agents and not Consul servers. If you are observing flapping services, you may have multiple Nomad agents talking to the same Consul agent. As such avoid configuring Nomad to talk to Consul via DNS such as consul.service.consul
|
STACK_EXCHANGE
|
Debian Security Advisory DSA-4550-1 email@example.com
https://www.debian.org/security/ Moritz Muehlenhoff
October 25, 2019 https://www.debian.org/security/faq
Package : file
CVE ID : CVE-2019-18218
A buffer overflow was found in file, a file type classification tool,
which may result in denial of service or potentially the execution of
arbitrary code if a malformed CDF (Composite Document File) file is
For the oldstable distribution (stretch), this problem has been fixed
in version 1:5.30-1+deb9u3.
For the stable distribution (buster), this problem has been fixed in
We recommend that you upgrade your file packages.
For the detailed security status of file please refer to
its security tracker page at:
Further information about Debian Security Advisories, how to apply
these updates to your system and frequently asked questions can be
found at: https://www.debian.org/security/
Mailing list: firstname.lastname@example.org
RPM Packages =>
Thank you for the notification.
This SRPM has no registered maintainer, so assigning the bug globally.
CC'ing DavidW both for security, & previous committer (I think); also Thierry for the latter.
Did not notice:
> this problem has been fixed in version 5.35-4
We have 5.37-1 . So this bug may possibly be outdated.
(In reply to Lewis Smith from comment #2)
> Did not notice:
> > this problem has been fixed in version 5.35-4
Thats the version / release that debian added the fix in...
> We have 5.37-1 . So this bug may possibly be outdated.
fix added in file-5.37-1.2.mga7 currently building
Zombie, please provide a link to the advisory and don't copy and paste the text.
Lewis, I am the security group, so I already get the e-mails. You don't need to CC me.
Advisory link from October 25:
Upstream commit that fixed it:
No new upstream release with the fix yet.
Updated file packages fix security vulnerability:
A buffer overflow was found in file which may result in denial of service or
potentially the execution of arbitrary code if a malformed CDF (Composite
Document File) file is processed (CVE-2019-18218).
file security vulnerability (CVE-2019-18218) =>
file new security issue CVE-2019-18218
Mageia 7, x86_64
Heap buffer overflow test case is available for the clusterfuzz framework, not generally available to the public.
Updated file and the referenced packages.
$ file -C
generated a magic.mgc file.
$ file magic.mgc
magic.mgc: magic binary file for file(1) cmd (version 14) (little endian)
Exclude ASCII text files:
$ file -e ascii *
1mbg1sqo.default-release.tar: POSIX tar archive (GNU)
binbag.tar: POSIX tar archive (GNU)
bin.tar: POSIX tar archive (GNU)
Calibre Library: directory
$ cd text
$ file * | grep ASCII
amazon: ASCII text
areca: ASCII text, with very long lines
emails: ASCII text
faad.txt: ASCII text
$ file -e ascii * | grep ASCII
$ file -d *
produces a lot of internal debugging information.
Show valid extensions for file types:
$ file --extension * | egrep "jpg|png"
$ ls ruby > rubylist
$ cd ruby
$ file -f ../rubylist
widgetlist.rb: Ruby script, ASCII text
wrap.rb: Ruby script, ASCII text
yieldself: Ruby script, UTF-8 Unicode text
This was unexpected:
$ file -e elf /usr/bin/file
/usr/bin/file: ELF 64-bit LSB executable, x86-64, version 1 (SYSV)
$ file --mime Downloads/* > mime
$ cat mime
Downloads/092019_67P2.jpg: image/jpeg; charset=binary
Downloads/astro: inode/symlink; charset=binary
Downloads/Astronomy_Now_Newsalert.vcf: text/vcard; charset=us-ascii
Downloads/big.png: image/png; charset=binary
Downloads/blender_manual.zip: application/zip; charset=binary
Downloads/Buxtehude_NetherlandsBachSociety.mkv: video/x-matroska; charset=binary
Downloads/HelloLucene.java: text/x-c; charset=us-ascii
Downloads/load-unicode-data.tex: text/x-tex; charset=us-ascii
Downloads/nearstars: text/html; charset=utf-8
Downloads/periodic.html: text/html; charset=us-ascii
$ file -b ThePlanets_1_1.ts
$ file PJFB_HR_2m.mov
PJFB_HR_2m.mov: ISO Media, Apple QuickTime movie, Apple QuickTime (.MOV/QT)
$ file --apple PJFB_HR_2m.mov
$ sudo file -s /dev/sda*
/dev/sda: DOS/MBR boot sector; partition 1 : ID=0xee, start-CHS (0x0,0,1), end-CHS (0x3ff,254,63), startsector 1, 468862127 sectors, extended partition table (last)
/dev/sda1: Linux rev 1.0 ext4 filesystem data, UUID=d78f09de-9c0e-40b5-96ec-bc1d3883c0b6 (needs journal recovery) (extents) (64bit) (large files) (huge files)
Just a sample of the options. They work.
An update for this issue has been pushed to the Mageia Updates repository.
|
OPCFW_CODE
|
I have had to set up Merge replication on a small custom database located in the US to a server in China. Our company’s WAN and uplinks at the branch offices leaves a LOT to be desired. Anyways, here’s the steps I took.
Publication database is on a Windows 2008 R2 Highly Available Cluster.
Databases are both on SQL Server 2008 R2 instances.
Create a clustered file share for the replication snapshot files to be located.
Create an AD service account to run the Merge Agent and Snapshot Agent.
Grant the appropriate permissions for the service account to the cluster share.
Run the create merge publication script (I’ll share later).
Log onto the remote server.
Create the database that will be the subscriber.
Run the subscription script (I’ll share later).
Back on the publisher DB, execute sp_addmergesubscription with the details needed.
On the subscriber, add the users and logins as needed and enjoy.
Since this is a small database, and there aren’t a lot of heavy hitting databases on either server, I decided to allow the publisher also act as the distributor. I can always break and change that if needed later on.
Wow, is this released, and are people using it? I figured I’d install this at home on a VM in my HyperV server to pre-test before doing the same at work. We’re entertaining the idea of dropping NetApp’s Snapdrive for Windows and Snapmanager for SQL on our primary datacenter filerheads, and I wanted to find a replacement or possibly an upgrade.
After 3 failed attempts to install, all I can say is “Sloppy”. The failures have been during DPM “Reporting Services configuration”. All the reports are deployed, and I can view the reports via the URL. But I get the “dreaded” 812 error saying some generic reporting services error ocurred. Advice on the web has to do with SSL and RS being configured to use HTTPS. Not in my case. Digging in the logs, I see this error pop up “Mojito error was: PasswordTooShort”. References on the web suggest that the password does not match the domain GPO policy. However, the password is the same that I’m using for my domain account, so forget that idea. No resolutions that I have found, one attempt from some MS guy was to try to do a net user /ADD. Guess what, that works without error. Great one MS.
Trying to install on Windows Server 2008 R2. Oh, and here’s another problem, probably a Server 2008 R2 problem. Try to reinstall, get an error that the database DPMDB on instance MSDPM2010 exists, and to delete prior to reinstall. Guess what, if you open a command window and try to bring up the DAC via sqlcmd -S localhost\MSDPM2010 -A you get “Login Failed for user that did the install”. Oh but forget it – you, open a command window “As Administrator” and it works fine. I absolutely hate running SQL Server on Windows Server 2008 R2.
Our company already has an infrastructure monitoring tool, and it plays very nicely with SQL Server versions 2000 through 2008. It is primarily a monitoring tool, and so is somewhat limited on historical reporting or trending.
I’m currently working on an SSIS package that I can run on a schedule from a central SQL Server that will go and gather data from my other servers. One function that I’ve been testing is the sp_readerrorlog procedure to capture and then merge the SQL Log files into a single table.
Other options I’m looking to start include an hourly file sizing report by database. This would allow me to make some fancy trending reports to the management types so they can watch the growth of their data.
Another is performance information like Plan Cache and Buffer Cache, User connections, Agent Job history… etc.
Here’s how I query to get sizing info for each table in a database
BEGIN TRAN GETTABLESIZES
DECLARE @TempSIZINGTable TABLE
( [Table_Name] varchar(50),
INSERT INTO @TempSIZINGTable EXEC('sp_msforeachtable ''sp_spaceused "?"''')
sp_spaceused adds a KB to the end of the sizing fields
we use substring and a Pattern Index search for a space,
so that 1024 KB turns into 1024 which we convert to an int
CAST(substring( [TABLE_SIZE],1,PATINDEX('%[ ]%', [TABLE_SIZE])-1) AS INT) AS [TABLE_SIZE in KB],
CAST(substring( [DATA_SPACE_USED],1,PATINDEX('%[ ]%', [DATA_SPACE_USED])-1) AS INT) AS [DATA_SPACE_USED in KB],
CAST(substring( [INDEX_SPACE_USED],1,PATINDEX('%[ ]%', [INDEX_SPACE_USED])-1) AS INT) AS [INDEX_SPACE_USED in KB],
CAST(substring( [UNUSED_SPACE],1,PATINDEX('%[ ]%', [UNUSED_SPACE])-1) AS INT) AS [UNUSED_SPACE in KB]
ORDER BY [TABLE_SIZE in KB] desc
COMMIT TRAN GETTABLESIZES
|
OPCFW_CODE
|
There are many games that follow a kind of system where a period of time goes by, and a wave of enemies enters a game world area that must be completely destroyed. Many of them are a little fun, and addictive, so having a system like this worked out is a good first step for making a few games that make use of this.
1.3 - An exp point system
This is my first attempt at making a module, reusable experience point system that I can take with me from one project to the next. I have used this system in a few of my canvas examples, but I am not happy with it. Still I would say that I have managed to get a few things solid with this system to say the least. I think that a good experience point system should provide at least two pure functions that will both return a kind of standard level object. One pure function where the experience points are know, but the level is not, and another that will work where the level is known, but the experience point values are not known.
1.4 - Fizz buzz
1.5 - Grid Game Unit Movement
In this example I have a grid and I am working out some basic logic when it comes to moving those units around in the grid. There is a bit to it actually when it comes to making a system for this sort of thing from the ground up. However what is also great about it is that it is not so hard to get something working, and once I have a basic system for the kinds of games that I have in mind I can use it to not just make one game but a few taking this system with me to each new project. However as of this writing this one is still a work in progress that I have not put as much time into as I would have liked to. I do have a lot of other things going on that get in the way of me working much of this stuff out.
1.6 - A long once method
One thing that I would like to have as part of a basic utility library that I take with me from one project to another is a log once, or call once type method. When first starting out with the basic of debugging there is using the console.log method to log things out to the javaScriot console as they happen. I do not think that this is such a bad way to go about debugging, and I still find myself doing it, however there are some things to gain from starting to use my own system for logging things that are going on.
So there is having a simple expression like 3 \/ 4 that will result in a value between 0 and 1 that will be \0.25. In other words there is having a numerator and denominator value and getting a fraction between the two. However if a numerator value starts at 0 and approaches the denominator value as a fixed static rate, then the change happens along a straight line when graphed.
1.9 - Skill Point System
This is a skill point system that I put together to make use of in some canvas examples that might call for such a system. The general idea here is that in a game where there is an experience point system on each level some skill points will be given to the player. These skill points can then be invested into upgrades that have various effects on a main game state object.
1.10 - Sort planets
A simple sort of planets objects example that I might use if a future game if I ever get around to it. The idea of this example is that I just wanted to make a simple fun little example that makes use of the array sort method to which I wrote a quick blog post on. I wanted to go at least one step beyond just having a simple copy and paste hello world style example of array sort, and with that goal in mind I guess this example is more or less just that. I am not sure if I will every get around to expanding on this by making a real game based off of it, but in any case I all ready have an interesting starting point for something here to say the least.
1.11 - tax brackets
A tax brackets example that helps me to get a general idea of how a progressive tax system works when it comes to things like income tax.
1.12 - Zig Zag Arc
Another basic example that makes use of some methods I work out in my percent module example.
|
OPCFW_CODE
|
Pete linked to this article about RoR turning 2 years old. Now, I have no opinion on Ruby of any sort. I've never used it, so...how could I have an opinion about it? If I have time I may take a look at it , but I have little motivation to do so. My love of programming is infused by the CLR and I see no reason to switch my technology base (yet again). And, personally, given all the hype over Ruby I'm sitting on the fence until it plays itself out and we see where it works well and where it doesn't.
See, Pete got one view of David's post, and his was positive. Me? David actually turned me off from RoR because of his arrogance:
One of my guilty pleasures is proving people wrong. Few things get me more fired up to achieve than hearing how inappropriate or idealistic or unrealistic the idea that I'm pushing is.
I get fired up solving problems. I don't get fired up or take pleasure in proving people wrong. That seems counterproductive.
There are many others, though, who do not share that allegiance of priorities. The kind of people who’ve labeled Ruby and Rails merely buzzwords of a transient hype, soon to be forgotten, soon to be extinct. When I started pitching the idea of a new framework that would ship a picture rather than a puzzle and used a niche language to boot, these people gave it anywhere from next week to six months. Then the starry-eyed teenagers would discover the next thing shiny and move on. Such is fashion, fickle at its core. Easy come, easy go.
Again, it's a viewpoint. I call it "experience". I've seen this come up with PowerBuilder, Java, XML, SOA - they were all buzzwords that permeated everything I was reading about at the time. They were better, faster, and if you weren't supporting it or learning about it you were a curmugeon who didn't want to open your mind to the latest and greatest thing. For a while, I would try and jump on the new speeding train, but I've learned that's not always the most productive use of my time.
Let’s share a brief moment of guilty pleasure for proving them wrong, then move on to the longer lasting pleasure of simply sticking to it for our own sake. And have understanding for those conditioned by past disappointments to classify all that is new and ripe with passion to be uninteresting, to be all hype, no calories. We’re past the point of infatuation, this is love, and love is inclusive. Happy birthday Rails, happy birthday Railers.
This is where I want to turn off the Ruby switch. When someone tells me that they're in love with their technology, I roll my eyes and move on. I don't need a language or a framework to be "passionate" about what I do. I'm not "conditioned" by past disappointments - I'm old enough to know "fool me once..."
Remember, not once have I put down Ruby in this post. I have absolutely no opinion about it. I'm interested in the efforts to put Ruby on (or around) the CLR and the JVM. I'm interested in learning the language. But, really, it all comes down to 1s and 0s and I'm much more interested in using technologies that allow me to create things I'm excited about and to solve my client's problems. I'm sure Ruby does that for some people. The CLR does it for me, and believe me, that makes me a very happy camper. And if you love RoR, then talk about how you love it; don't bash others who don't use it and assume they're all just a bunch of old farts who have been beaten up so much they don't ever want to move out of VB6 land. That screams of politics and I could care less about politicians.
Actually, for my talk at the Twin Cities Code Camp, I looked at Ruby.NET, but my talk is really geared towards language interoperability. Scripting languages are great as glue languages, but they're really not meant to create frameworks that other languages can consume (that last part of the sentence is key). Believe me, I tried - I made a simple
Customer class, and I liked the terseness of Ruby to pull it off. But...trying to use that class in C#, or F#, or Spec#, or VB...well, I gave up after I saw something called an
ActivationFrame in the initialization method. Being a Ruby novice, though, I may have missed things that would make its interoperability story better, and I have 2 months to dig into it anyway :).
* Posted at 09.26.2006 08:39:05 AM CST | Link *
|
OPCFW_CODE
|
// Output warning message to console based on specified configuration.
describe('outputWarning', (): void => {
// prettier-ignore
beforeEach(async (): Promise<void> => {
// Mock: helper to output error to console.
jest.spyOn(console, 'error').mockReturnValue();
// Mock: helper to output warning to console.
jest.spyOn(console, 'warn').mockReturnValue();
});
// Case::
test('should output warning based on configuration', async (): Promise<void> => {
// Testing target.
const { Global } = await import('~global');
// Opr: output warning via console.
Global.outputWarning({ type: null, name: null, message: 'anyTxt' });
// Exp: console.warn have been called.
expect(console.warn).toHaveBeenCalledWith('anyTxt');
});
// Case::
test('should output warning based on configuration (error)', async (): Promise<void> => {
// Testing target.
const { Global } = await import('~global');
// Opr: enable output of error.
Global.error = true;
// Opr: output warning via console.
Global.outputWarning({ type: null, name: null, message: 'anyTxt' });
// Exp: console.warn have been called.
expect(console.error).toHaveBeenCalledWith('anyTxt');
});
// Case::
test('should output warning based on configuration (no message)', async (): Promise<void> => {
// Testing target.
const { Global } = await import('~global');
// Opr: output warning via console.
Global.outputWarning({ type: null, name: ['class', 'property'], message: null });
// Exp: console.warn have been called.
expect(console.warn).toHaveBeenCalledWith('[deprecated] class::property');
});
// Case::
test('should output warning based on configuration (multi message)', async (): Promise<void> => {
// Testing target.
const { Global } = await import('~global');
// Opr: output warning via console.
Global.outputWarning({ type: null, name: null, message: ['two', 'chunks'] });
// Exp: console.warn have been called.
expect(console.warn).toHaveBeenCalledWith('two chunks');
});
});
|
STACK_EDU
|
Update for latest versions of Rive and ThorVG
I would love to see this updated for the latest versions of Rive and ThorVG :)
@projectitis Hi, rive_tizen hasn't been updated for a while though you can use it since it's working. I will try to rebase the rive-tizen with the latest rive and let you know the result. (I might take some days.)
It looks like the rive renderer has changed quite a lot. Rive have added a new gradient type (SweepGradient) and also a RenderImage class (previously only RenderPath and RenderPaint).
@hermet I have started work on this. I may not be able to complete all of it, but I can make a start.
@projectitis cool, thanks for your effort!
We're not actually using the sweep gradient, it's mostly there as an example of the kind of abstraction the renderer would support. We should probably remove it for now from the renderer definition. @mikerreed
Ok, thank you @luigi-rosso! Please let me know if there are other methods that are similar and also don't need to be implemented :)
Update: I've been working on this on and off for the past week or so. Rendering curves and fills is working, but gradients (using the new 'shader' approach) are causing exceptions. I haven't started implementing the image methods/shaders yet.
I'll keep working on it when I can, but feel free to look at and/or contribute to my fork if anyone is keen: https://github.com/DeriveSDK/rive-tizen/tree/feature/update-to-latest-rive
There are also some areas where I think the ThorVG renderer can be optimized, which I'll look at later. For example I see a lot of calls to paint->duplicate() for Rive to work on copies of the ThorVG paints instead of directly on them, and I'm not entirely sure if that is required or just adding overhead?
Gradients working now. It was a pointer ownership issue.
Now only images to go (plus testing).
I now have a buggy drawImage implementation, but it's a start!
This is what it is supposed to look like:
This is what it renders as:
I have some theories for what is going wrong:
Color channels are different between tvg and rive
Images are referenced by center maybe?
Maybe translation is also happening after rotation. Not sure.
@hermet I think I have almost reached as far as I can with my current knowledge of ThorVG.
Could you create a branch on the rive-tizen repo for this? - for example update-to-v7-runtime - then I can submit a PR to that branch.
There is still the following work to be done:
Fix drawImage
Implement drawImageMesh
Implement fill using image
I have attached some files I was using for testing.
test-riv-files.zip
@luigi-rosso image fills are supported by the renderer (i.e. image shader) but I can't see how to create these in Rive. Is this also a currently unsupported feature?
@projectitis great work,
ok, so my opinion.
about duplicate(),
I believe it won't affect performance a lot, but yes we can optimize it by avoiding it in rive-tizen side?. need to check it's really neceesary behavior...
about color channel
Do you use static png loader in thorvg? it has a known channel issue, if those images are png type. If the condition is correct, you can test with this temporary patch, https://github.com/Samsung/thorvg/pull/1112.
about Rotation
Currently the pivot is fixed at the top-left corner of the tvg::paint geomtery. Though it's a bit annoying, I think you can make it properly working. For that, you can try to understand with https://github.com/Samsung/thorvg/blob/master/src/examples/SceneTransform.cpp,
Obivously, tvg need a function to set the pivot of the nodes...
about meshed Images
The texture mapping logic is prepared in thorvg (for transformed images) but it's just fixed at 2 triangle meshes,
https://github.com/Samsung/thorvg/blob/master/src/lib/sw_engine/tvgSwRasterTexmap.h
For meshed images for rive, It needs improvement, guess it may require a bunch of code to fill up to tvg apis...
about contribution.
I will prepare the branch later today.
But I think @luigi-rosso could help you the access to this repo. @luigi-rosso could you please help him?
Thesedays I'm totally into a writing a book, it's my personal task, so I'm fully working at my free time for it (due to deadline). So frankly, lack of time to code thesedays, Just hope it's finished soon then come back to opensource work, sorry for that @projectitis
Wow @hermet - that is great news about your book! I wish you good luck, and I totally understand. Please share it once you have published it! Thank you for the information :)
Good progress in some spare time after work this evening!
Color channel was fixed by the patch - at least in the case of this particular PNG. But I will need to check with all sorts of PNG images to check that the patch is ok in all cases (you are also dubious, @hermet, judging by your comments on the PR).
The image transforms were fixed by nesting the picture in a scene and offsetting it by half the width and height to move the registration point to the center.
I also fixed another potential issue with save/restore. It is now saving and restoring the transform AND the clip path (previously only the transform). This was discovered by reading the Skia docs on save/restore, but I haven't seen an error due to this yet in any rive files I have tested.
Outstanding questions for @luigi-rosso when you are able :)
Could you please create a branch for update-to-v7-runtime that I can contribute to?
Does Rive currently support image meshes or can this be ignored for now?
Any ideas on this? : The current ThorVG implementation has a strange design where the very first clipPath that is ever provided is saved as the "BG clip path" (i.e. the clip path of the art board) and every single draw call is subsequently clipped to that path, as well as any other clip path that is set. However:
The Skia implementation does not do this, but still clips correctly to the artboard
If this code is removed, the objects outside the artboard are visible, even with the save/restore fix as above
I thought I read that Rive will soon (or mavbe already does) have the option not to draw a background. If such a rive file is rendered, will the first clipPath provided still be the background/artboard clip path?
@hermet I fixed the clipping - no longer need to store the first clipping path as the "BG" clipping path!
There were a few things that needed to be done:
Do not delete clipping path after every draw (keep it)
New clipping path should intersect with existing one, not replace it
And finally, save/restore should save a duplicate of the clipping path. It was saving a pointer only, so of course the saved clipping path continued to be modified
@projectitis good job. thanks. we can incrementally improve the feature, do you think your patch is ready to go in?
@hermet The new branch is stable and works with all .riv files I have tested so far.
The only problem is that I have not updated the examples yet. I am trying to get Elementary/EFL working on Windows :(
If you prefer, the examples can be fixed later.
Ok, thank you @luigi-rosso! Please let me know if there are other methods that are similar and also don't need to be implemented :)
I see Rive has a "quad" (quadTo) verb/command. Is this used? ThorVG currently only has cubic.
No, not yet although we may use it when we add text!
Sorry for my delay! I've been on paternity leave getting screamed at by both my new baby and the toddler and I've been letting a lot of these notifications accumulate. It really is the best of times, nonetheless :) 👶🏼 🍼
Could you please create a branch for update-to-v7-runtime that I can contribute to?
I made you a write contributor, I think you can do this yourself now. Let me know if not!
Does Rive currently support image meshes or can this be ignored for now?
It doesn't in production, but a lot of our example files are already using it. The feature is slotted for launch very very soon.
Any ideas on this? : The current ThorVG implementation has a strange design where the very first clipPath that is ever provided is saved as the "BG clip path" (i.e. the clip path of the art board) and every single draw call is subsequently clipped to that path, as well as any other clip path that is set. However:
The Skia implementation does not do this, but still clips correctly to the artboard
If this code is removed, the objects outside the artboard are visible, even with the save/restore fix as above
I thought I read that Rive will soon (or mavbe already does) have the option not to draw a background. If such a rive file is rendered, will the first clipPath provided still be the background/artboard clip path?
We have the option (in the editor) to disable clipping of the artboard (so no first rect clip). Transparency is also supported (no background) but it's separate from clipping. Rive will call something like if(clip) renderer->clip(artboardPath); before calling if(background) renderer->drawPath(artboardPath);. It's a little more nuanced than that as there can be multiple backgrounds, but general idea is the same. Our Skia based viewer is a good test/known-good environment to run files in.
@luigi-rosso image fills are supported by the renderer (i.e. image shader) but I can't see how to create these in Rive. Is this also a currently unsupported feature?
Image fills are currently only supported on image draw operations. Not on vector geometry, and there's no current plan to enable that soon. I think some "design ideas" are leaking into the runtime. @mikerreed we may want to put some comments in about that or save them in a branch
Wow @luigi-rosso - huge congratulations! That is awesome news 😄 🎉
Thanks for all the info. As in the thread above, I believe the latest version of the renderer is ready to be merged - except for the examples, which I haven't updated. I don't run linux, so I'm currently trying to get EFL working on windows in order to tackle those!
@luigi-rosso congratulations :)
@projectitis I think launching the examples with EFL on windows would be a bit annoying by you. I fixed the example build breaks.
Thanks @hermet, very much appreciated, especially since you are so busy with your book :)
I have not been able to get Elementary working on windows because of some unsupported dependencies it has. But I am recreating the examples using GLFW3 and will add them to the repo later in a 'win' folder.
I am happy to close this issue.
|
GITHUB_ARCHIVE
|
We are using VBA(Excel) to pull in record sets from our ERP database. There are values from a column being returned by our query that consist of invalid characters (squares) as shown in the VB code and displayed as blank in Excel. We did verify that this value is correct in the database and returns fine in SQL.
This same VBA code works in SQL 2008 R2 using the SQL Native Client 10.0 driver.
This code does not work against SQL 2014 using 10.0, 11.0, 13.0/1.
The data type of the column in SQL is nvarchar(max) on both SQL installations.
The collation is the same on both SQL servers.
We are able to use the cast function to display a correct value instead of the invalid characters. I hoping to find some difference between SQL 2008 R2 and SQL 2014 rather than using the cast() work-around. Has anyone encountered anything similar to this?
Thanks in advance for your time.
Robert for Microsoft is the man with SQL.. but IDK if he can see this forum.
The little square, as you called it, means that there is a Unicode character that it cannot convert to ASCII. Somewhere it is being converted to varchar.
This is not a SQL Server issue. Can you post the VBA code?
The code attached can be used for two Epicor databases. The first is version 9. The second is version 10.
We see the character translation issues when running the code against the version 10 database.
So far we have not been able to identify many differences between the two, but here is one key difference. I am not certain why it would change our results.
The column character01 (nvarchar(max)) in the version 9 database is in the dbo.resource_ table.
In 10, the column is in a view called dbo.resource. The table in 10 containing character01 (same data type) is in erp.resource_ud. We tried pulling the value from both the view and the source table with the same character issue.
dbo.resource.character01 = VS8 (can also be blank)
I will add additional notes as I think of them.Edited May 24, 2017 at 17:06 UTC
The query isn't converting it. The code stops before it gets to the part where it displays the value.
Any thoughts on why using a different SQL driver (SQL Server 10.00.10586.00 sqlsrv32.dll), which I believe to be built-in to the OS, seems to work just fine with these same queries?
Again, the database is on SQL 2014 x64.
I did stumble across https://docs.microsoft.com/en-us/sql/relational-databases/native-client/when-to-use-sql-server-nativ... which does provide a level of explanation.
No, using a different version of SQL Server will not cause this. Something somewhere else is different. Most likely in Excel.
|
OPCFW_CODE
|
# frozen_string_literal: true
require 'parslet'
module PhraseParser
# This parser adds quoted phrases (using matched double quotes) in addition to
# terms. This is done creating multiple types of clauses instead of just one.
# A phrase clause generates an Elasticsearch match_phrase query.
class QueryParser < Parslet::Parser
rule(:term) { match('[^\s"]').repeat(1).as(:term) }
rule(:quote) { str('"') }
rule(:operator) { (str('+') | str('-')).as(:operator) }
rule(:phrase) do
(quote >> (term >> space.maybe).repeat >> quote).as(:phrase)
end
rule(:clause) { (operator.maybe >> (phrase | term)).as(:clause) }
rule(:space) { match('\s').repeat(1) }
rule(:query) { (clause >> space.maybe).repeat.as(:query) }
root(:query)
end
class QueryTransformer < Parslet::Transform
rule(:clause => subtree(:clause)) do
if clause[:term]
TermClause.new(clause[:operator]&.to_s, clause[:term].to_s)
elsif clause[:phrase]
phrase = clause[:phrase].map { |p| p[:term].to_s }.join(' ')
PhraseClause.new(clause[:operator]&.to_s, phrase)
else
raise "Unexpected clause type: '#{clause}'"
end
end
rule(query: sequence(:clauses)) { Query.new(clauses) }
end
class Operator
def self.symbol(str)
case str
when '+'
:must
when '-'
:must_not
when nil
:should
else
raise "Unknown operator: #{str}"
end
end
end
class TermClause
attr_accessor :operator, :term
def initialize(operator, term)
self.operator = Operator.symbol(operator)
self.term = term
end
end
# Phrase
class PhraseClause
attr_accessor :operator, :phrase
def initialize(operator, phrase)
self.operator = Operator.symbol(operator)
self.phrase = phrase
end
end
## Query object
class Query
attr_accessor :should_clauses, :must_not_clauses, :must_clauses
def initialize(clauses)
grouped = clauses.chunk(&:operator).to_h
self.should_clauses = grouped.fetch(:should, [])
self.must_not_clauses = grouped.fetch(:must_not, [])
self.must_clauses = grouped.fetch(:must, [])
end
def to_elasticsearch
query = {}
if should_clauses.any?
query[:should] = should_clauses.map do |clause|
clause_to_query(clause)
end
end
if must_clauses.any?
query[:must] = must_clauses.map do |clause|
clause_to_query(clause)
end
end
if must_not_clauses.any?
query[:must_not] = must_not_clauses.map do |clause|
clause_to_query(clause)
end
end
query
end
def clause_to_query(clause)
case clause
when TermClause
match(clause.term)
when PhraseClause
match_phrase(clause.phrase)
else
raise "Unknown clause type: #{clause}"
end
end
def match(term)
term
end
def match_phrase(phrase)
phrase
end
end
end
|
STACK_EDU
|
Does my PCB require a 50 ohm impedance trace even if I am using an external antenna?
I am using the Quectel M95 GPRS Modem on my PCB. The hardware reference guide says
"pin 39 is the RF antenna pad. The RF interface has an impedance of 50Ω."
I am using a u.efl connector and connecting a QuadBand PCB antenna to the connector. The antenna spec says impedance: 50 ohm.
Now I am confused. My layout designer is saying the manufacturer has to put a 50 ohm trace for the u.efl connector while manufacturing.
There is a Pi circuit on the PCB for RF tuning. How do I use that?
For now only a 0R is put there, but if tuning is required the values were to be put in accordingly.
If the antenna already has 50 Ohm impedance, why does the PCB trace also have be to 50 ohm?
When you send a signal along a trace, if the length of the trace is of the same order as the wavelength of the signal, then you have to match impedances or, you will get a "reflection". What this could mean in extremes is that the power actually emitted by the antenna is only a fraction of the power being pumped out by the chip - in other words, you are not efficiently coupling the chip to the antenna and you get "reflections" - this can cause your chip to overheat and maybe it might damage it.
So, if you are transmitting 3 GHz (say), it has a wavelength of 100mm and, a rule of thumb is that if your PCB track is longer than 10mm (one tenth the wavelength), you need to ensure its characteristic impedance matches the antenna's impedance.
Further reading here and here
Thanks Andy,
In other words I need that 50-Ohm impedance matched trace from the Pin's output to Connector P1 irrespective to the Antenna having the 50 Ohm Impedance.
Correct?
Dimensionally (looking at your picture and noting that you might operate at close to 2GHz), it's a close call but it barely costs a penny in copper to thicken the track to make its impedance right - there are online calculators that can be used making life easy! If this design was a 433 MHz circuit I wouldn't bother but you have to consider your max frequency.
I am connecting a PCB Antenna to the u.efl connector of the pcb
The cable that feeds the actual antenna will have a characteristic impedance of 50 ohms just like the actual antenna so it makes sense to continue this philosophy onto the PCB but having a short length that isn't 50 ohms isn't a big deal if it's short. For instance the track will have to nip-down in width to feed a chip at some point and this could never be 50 ohms but it doesn't matter.
@user2967920 It's not "irrespective" of the fact that the antenna has 50 ohm input impedance. The output amplifier impedance, trace / feedline characteristic impedance and antenna input impedance all need to be the same as each other, or if they are not then they need to be matched with a suitable matching network or balun.
Andy is completely right that over very short distances this can usually be disregarded.
I know this is a very old question, but whomever is interested, I actually think there is one important thing wrong in the previous answer. It was assumed that the speed of light within the PCB track is the same as in vacuum, which is not the case. If we take FR4 (7628), its relative permittivity is ~4.5, thus refractive index will be 2.12 (how many times slower the speed is in another medium, which is sqrt of relative permittivity). This changes the wavelength of e.g. 2.45GHz signal to 57.7mm (wavelength = speed / freq). 1/10 of that is around 5.8mm.
Meaning, if the transmission line is longer than 5.8 mm (or better, longer than 5 mm, since different thickness boards will give slightly different relative permittivities), then a matched network needs to be added, otherwise a matching network will only add loses.
UnexpectedMaker has fun series to watch about how he tried matching his TinyPicos on Youtube, which might give a bit more insight into how to do the matching and what happens when you make a poor design...
|
STACK_EXCHANGE
|
To learn how to use the "base" ServerTemplate as a starting point to develop custom ServerTemplates.
This tutorial describes how you can quickly develop custom ServerTemplates by starting with a "base" ServerTemplate that contains the minimum set of scripts that are required to support monitoring, alerts, logging, and audit entries. Instead of building a ServerTemplate from scratch, it's recommended that you start your development with the "base" ServerTemplate if you cannot find an existing ServerTemplate in the MultiCloud Marketplace (MCM) that more closely meets your needs.
Add a Repository
- Go to Design > Repositories.
- Select Add Repository.
- A screen will display with the fields that define your repository information.
- Select the Type to choose the repository you would like to use. Select just one of the following depending on the repository you are using:
- Once the information has been entered, click OK.
- Once added, RightScale will fetch the information from the repository and the cookbooks from that repository will appear when you refresh the page.
Note: If you notice the cookbooks are not appearing after a few minutes, you can click on the repository and go to the Info tab. The Last fetch output section will notify you if there were errors occurred during the fetch.
Import a Cookbook from a Repository
- Go to Design > Repositories.
- Click on a repository.
- Select the cookbooks in the repository. From the Action drop-down, select Import.
- From here you can choose to:
Import to a Primary or Alternate namespace - We recommend that if you use your own cookbooks that you import them to the primary namespace, but if you want to use an alternate namespace you can. When you select an alternate namespace, you can choose to follow the cookbook. When you follow a cookbook, the new or updated versions of that cookbook that are refetched into the repositories section will automatically get imported. For more information about following a cookbook, see Follow a Cookbook. For more information about namespaces, see Primary and Alternate Namespace.
Import Dependencies - If the cookbook relies on other cookbooks to run recipes, they will have dependencies. You can select this option so the cookbooks that the other cookbooks rely on will be imported. If the dependencies are missing, you will not be able to import them.
- When your options are selected, click Import.
Import and Clone the ServerTemplate
- Create a new deployment. See Create a New Deployment.
- Go to Design > MultiCloud Marketplace > ServerTemplates. Import the Base ServerTemplate for Linux ServerTemplate.
- Clone the imported ServerTemplate to create an editable HEAD version. Change the name of the ServerTemplate so that it more accurately describes the type of ServerTemplate you are going to build.
- Before you make changes to the ServerTemplate, it's recommended that you first commit the ServerTemplate so that the first committed revision (Rev 1) of the ServerTemplate matches the original version that you imported from the MCM without any changes. You can use a simple commit message such as "Matches original imported version. No changes." This way, you'll be able to easily perform a differential between the current HEAD version and the "original" version to see an exhaustive list of all the changes.
Attach a Cookbook to a ServerTemplate
- Go the Scripts tab of the editable HEAD version of the ServerTemplate.
- Click Modify.
- Click Attach Cookbooks.
- Find and select the cookbook(s), which you imported into the RightScale account in a previous step. If more than one version of the cookbook is listed, select the version that is in the "primary" namespace and click Attach Selected.
Add a Recipe to the ServerTemplate
You are now ready to add recipes from any of the newly attached cookbooks into one of the boot/operational/decommission sections of the ServerTemplate.
Add a Server to a Deployment
- While viewing the HEAD version of the ServerTemplate, click the Add Server button to create a new server in the deployment. Since you are going to make modifications to the ServerTemplate for customization purposes, you will want to launch a server with the HEAD (editable) version of the ServerTemplate so that you can easily add scripts to the ServerTemplate and test them on a running server without having to relaunch the server each time you make a change. For more information, see Add a Server to a Deployment.
Launch the Server
The "base" ServerTemplate only contains the bare minimum set of scripts that are common to all ServerTemplates published by RightScale, which support monitoring, alerts, logging, and detailed audit entries.
- Launch the server that you just created.
- Because there are no boot scripts that are missing any required inputs, scroll to the bottom of the page and click Launch. In a few minutes you will have an operational server. You can now click the server's Monitoring tab to view real-time monitoring graphs, set up alerts, and use the audit entries for troubleshooting information as you continue your ServerTemplate development.
Customize the ServerTemplate
Now that you have a running server the next step is to add new scripts to the ServerTemplate that you want to test. Since a server's scripts are defined by its ServerTemplate, you can easily add a new script to the HEAD version of the ServerTemplate and then execute it on the running server.
|
OPCFW_CODE
|
Manage application users and roles in Azure App
I am really new to Azure and honestly not so familiar with Active Directory since majority of my works were relying on different users and their logins completely saved in DB.
But I have got a new requirement for a little big application and its going to work in Azure.
Its an ecommerce application, but other than being a common ecom application, its little different. Each sellers can sell their goods from their own shop page. Customers never get a feeling they are buying from a common store like Amazon.
My concern is how to handle the users and their logins and transaction. I have got confused about Azure AD because many pages says Azure AD can be use for handling the users and roles.
So does it means I dont need to store user ID and roles separately in my table and Azure AD can handle those part??
Expecting kind help because I am a beginner..
Look into Azure AD B2C, the extra effort is worth it. You get logging, security, well-known login interface, multi-factor if you want it and a lot more. I promise you, the code you write and it's iterations, will take a lot longer time than using B2C, and the endresult is way better.
Thank you friend. So we dont need to think about storing the login information or managing the logins, isnt it? But the user registration through Microsoft then?? Or are we registering the user through our portal itself and storing the data in our DB, then enabling azure authentication for that user??
All aspects of maintaining user credentials is "taken care of" by B2C, you reference them by keys from claims. If I list up a lifecycle of user credentials: Creation, forgotten password, password change, multifactor, to many sign-in attempts, changing email address, and so on... You will quickly realize that B2C is a good option.
Other options are Identityserver or relaying on twitter/facebook/Microsoft/etc. credentials.
@RaymondA. THanks a lot.. Sorry one last question.. Wont steal your time anymore.. :)
So user registration also handles by b2c, isnt it? But that registration details I found I can customize as well. But how can I get those details stored in my DB. Not any login part. The data user entered in registration details, is that possible..
What you are looking for is Azure AD B2C.
It's service to support user credentials and authentication flows. Users can use the authentication flows to sign up, sign in, and reset their password. Azure AD B2C stores a user's sensitive authentication information, such as the user name and password. The user record is unique to each B2C tenant, and it uses either user name (email address) credentials or social identity provider credentials
you can follow this on how to setup
Thanks a lot friend.. Sorry one question, is there is any limitation on number of users or do we need to pay for each user to Azure? There wont be, but a question, if there are 100k users so the above B2C will work, isnt it?
Also missed to ask one more thing, if that way (b2C), we dont need to think about storing user credentials in DB or no need to worry about managing the login as well, isnt it?
You can find the pricing details here https://learn.microsoft.com/en-us/azure/active-directory-b2c/billing
@sanforall No you do not have to worry about storing user credentials. Navigate to the documentation link and you can get all the info
|
STACK_EXCHANGE
|
Solid Queue Integration
This adds an adapter to integrate with SolidQueue, reporting job queue time and busy metrics (if enabled) to Judoscale for autoscaling.
SolidQueue is currently on v0.3, still pretty early on, and there's still some things being figured out, but there's early adoption and we expect more as it becomes a Rails recommendation / default in the future. It only works with Rails v7.1+ and Ruby 2.7+, so that's what this adapter will support initially.
We'll be collecting queue time / latency via the "ready executions" table, and busy via the "claimed executions" table.
SolidQueue moves jobs between different tables as they change "status", in other words, while all jobs have a representation on the main "jobs" table, they also get a record on an associated table that may represent what's happening to them: when they're ready to be picked up for work, they go to "ready executions", when they're claimed by a process worker to be performed, they go to "claimed executions", and if there's a failure (that's not retired by Active Job), they go to "failed executions"; if they're scheduled to run in the future, they go to "scheduled executions" (or if they're being retried by AJ, which is essentially re-scheduling them in the future, until it succeeds or gives up retrying and blows up back to SolidQueue.)
When jobs are finished successfully, they are flagged with a "finished_at" column on the main "jobs" table. As the jobs moves from one to the other "execution" status in the workflow, their previous record is destroyed, so there should be really only one of those "execution" representations at one point in time. (i.e. a job is either scheduled, ready, claimed, failed)
There's also the concept of recurring executions, which are created via config (a cron-like setup), and eventually get added to "ready executions" for every recur.
And finally, there's one thing I have to look a bit more: blocked executions. It seems you can add concurrency limit to jobs, which may lock certain jobs from running (if they're concurrency limited by a certain condition) and will move them to a separate "blocked executions" table. I would like to test this more, because I'm wondering if we need to check this table for jobs in order to calculate queue time as well.
Todo / Questions
Investigate "blocked executions" / concurrency limits, to determine whether they should be added to the queue time / latency.
I've been playing with this some, and it works as you'd expect: you can setup a job with a concurrency limit, i.e. run only one job at a time, or one job with this set of arguments, or up to X jobs concurrently, etc., and if more jobs are enqueued, instead of going to "ready", they go to "blocked". When jobs are finished, they check for blocked jobs to unblock them, and there's also an additional dispatcher that checks for blocked jobs on a schedule.
While initially I thought it'd make sense to consider these for the latency calculation, the more I thought about and played with it, the more it came to mind that having a big list of blocked jobs doesn't mean a need to autoscale: you might simply be limiting the concurrency of those jobs to a point where many are getting enqueued at certain points, but just a few get processed due to the limits imposed. This could cause the blocked execution table to grow temporarily, causing those blocked jobs to have "increased latency", but autoscaling up might be wrong in this case, since more processing power won't make those jobs complete any faster -- they're still limited by their concurrency setup. In other words, I'm thinking that autoscaling should only look for jobs in the "ready execution" table initially.
Sample query I was playing with, for reference
::SolidQueue::Job
.left_joins(:blocked_execution, :ready_execution)
.merge(::SolidQueue::BlockedExecution.where.not({ id: nil }))
.or(::SolidQueue::ReadyExecution.where.not({ id: nil }))
.group(:queue_name)
.minimum("coalesce(#{::SolidQueue::BlockedExecution.table_name}.created_at, #{::SolidQueue::ReadyExecution.table_name}.created_at)")
While initially I thought it'd make sense to consider these for the latency calculation, the more I thought about and played with it, the more it came to mind that having a big list of blocked jobs doesn't mean a need to
I agree with your analysis of the scheduled executions table.
I guess one thing we'll need to consider is that if someone scales their workers down to zero, their scheduled/blocked executions will never run. That's more of an app UX consideration... just thinking aloud here.
I guess one thing we'll need to consider is that if someone scales their workers down to zero, their scheduled/blocked executions will never run. That's more of an app UX consideration... just thinking aloud here.
Makes sense, but I think that's a general consideration to have with job/workers that's not specific to SolidQueue, one could argue the same is true for Sidekiq for example, and it's unique enterprise feature.
|
GITHUB_ARCHIVE
|
Adding style sheets to html body. I' m adding trying to add an image html called lg. Adding CSS Style Sheets to HTML Open your text editor to write HTML code html adding for basic HTML structure. In this case distributing style rules throughout html the document will actually html lead to worse html performance than using a linked style sheet, since for most documents the style sheet adding will already be present in the local cache. These bookmarklets let you see how a sheets web page is coded without adding digging through html the body source experiment with CSS , , sheets debug problems in web pages quickly JS without editing the actual page. 2 Adding style to HTML. 01 specs, but I believe it is true for all versions of HTML) : “ This element defines a link. – Nicolas Barbulesco Jun 23 ' 16.
ods adding html body= ' temp. sheets Every time we’ ve specified adding style= sheets " ( something) ", we’ ve actually created a CSS rule. , to an HTML file. load body external css file in body tag [ duplicate]. To validate HTML when inside body tag . CSS is the language that tells web browsers how to render the different parts of a web page. I am using Sublime sheets Text here so I recommend you to use same text editor for similarity in work. Removes java flash, , background music third- party iframes.
This technique allows you to define a style sheet as a separate document and import it into your web pages. Adding Styles to HTML Elements. The advantages of this may not be immediately clear ( since the adding second form is actually more verbose) but the power of CSS becomes more apparent when the style properties are placed in html an internal style element , even adding better html an external CSS file. No, it is not okay to put a link element in the body tag. Although CSS ( Cascading Style Sheets) is a different language from. Every item adding or element on a web page is part of a document written in a markup language.
Web Development Bookmarklets. Embedded style — Using the < style> element in the head section of the document. CSS stands for Cascading Style Sheets and it is the language used to style the visual presentation of web pages. Any style sheet language may be used with HTML. sheets When you look at the code adding for externalStyle. CSS is body the language that tells body html web. body CSS3 supports external body style sheets. png is located in the same folder as the index.
See the specification sheets ( links to the HTML4. To see why this might be attractive, take a look at the example. css Tried looking online for this answer but can' t seem to get any luck. These are the three methods of implementing styling information to the HTML document ( from highest to lowest priority).
This tag is essentially an empty HTML container tag that does nothing until you tell it to do something by applying an attribute to it, such as adding the class, id, or style attribute to style content. So I' m working on a project that accepts HTMLs as inputs and returns them as outputs. All of the HTMLs I get as inputs have all of their text in divs and style sheets that dictate the style. HTML Styles - CSS Styling HTML with CSS.
adding style sheets to html body
CSS stands for C ascading S tyle S heets. An inline CSS is used to apply a unique style to a single HTML element. An internal CSS is used to define a style for a single HTML page.
|
OPCFW_CODE
|
from helpers.common_imports import *
import clean.schuster as sch
import clean.matrix_builder as mb
class Iterator(object):
"""iterates over the dirty spectrum and extracts clean one"""
def __init__(self, detection_threshold, harmonic_share, number_of_freq_estimations, time_grid, values, max_freq):
self.__harmonic_share = harmonic_share
self.__number_of_freq_estimations = number_of_freq_estimations
self.__time_grid = time_grid
self.__values = values
self.__max_freq = max_freq
self.__dirty_vector = mb.calculate_dirty_vector(
self.__time_grid, self.__values, self.__number_of_freq_estimations, self.__max_freq
)
dirty_subvector = self.__dirty_vector[number_of_freq_estimations:]
# eq 152 in ref 2
self.__detection_threshold = detection_threshold
self.__window_vector = mb.calculate_window_vector(
self.__time_grid, self.__number_of_freq_estimations, self.__max_freq
)
def iterate(self, max_iterations):
"""iterator: steps 7 to 17 pp 51-52 ref 2"""
super_resultion_vector = self.__build_super_resultion_vector()
current_step = 0
dirty_vector = self.__dirty_vector
while current_step < max_iterations:
result = self.__one_step(super_resultion_vector, dirty_vector)
if not result:
break
else:
dirty_vector = result['dirty_vector']
super_resultion_vector = result['super_resultion_vector']
current_step += 1
result = {
'super_resultion_vector': super_resultion_vector,
'iterations': current_step
}
return result
def __calculate_complex_amplitude(self, dirty_vector, max_count_index, max_count_value):
"""eq 154 ref 2"""
window_value = self.__window_vector[2*self.__number_of_freq_estimations:][2*max_count_index][0]
nominator = max_count_value + np.conj(max_count_value)*window_value
denominator = 1 - sch.squared_abs(window_value)
return nominator/denominator
def __extract_data_from_dirty_vector(self, dirty_vector, max_count_index, complex_amplitude):
"""eq 155 ref 2"""
# min_index corresponds to -m-th index in eq 155 ref 2 for W
min_index = self.__number_of_freq_estimations
# +1 since last value is not included, when the subarray is extracted
# max_index corresponds to m-th index in eq 155 ref 2 for W
max_index = 3*self.__number_of_freq_estimations + 1
window_vector_left_shift = self.__window_vector[
min_index - max_count_index:max_index - max_count_index
]
window_vector_right_shift = self.__window_vector[
min_index + max_count_index:max_index + max_count_index
]
difference = complex_amplitude*window_vector_left_shift + np.conj(complex_amplitude)*window_vector_right_shift
result = dirty_vector - self.__harmonic_share*difference
return result
def __add_data_to_super_resultion_vector(self, super_resultion_vector, max_count_index, complex_amplitude):
"""eq 156 ref 2"""
#self.__number_of_freq_estimations index corresponds the 0th index in eq 156 ref 2 for vector C
vector_to_add = self.__build_super_resultion_vector()
vector_to_add[self.__number_of_freq_estimations + max_count_index] = self.__harmonic_share*complex_amplitude
vector_to_add[self.__number_of_freq_estimations - max_count_index] = self.__harmonic_share*np.conj(complex_amplitude)
result = vector_to_add + super_resultion_vector
return result
def __get_max_count_index_and_value(self, old_dirty_vector):
"""gets max count index and value"""
dirty_subvector_wo_zero = old_dirty_vector[self.__number_of_freq_estimations+1:]
# we need to add 1 to the index, because our dirty_vector index has different indexing:
# from -number_of_freq_estimations to number_of_freq_estimations
max_count_index = sch.calc_schuster_counts(
dirty_subvector_wo_zero, method_flag='argmax'
)[0] + 1
max_count_value = dirty_subvector_wo_zero[max_count_index - 1][0]
result = {
'index': max_count_index,
'value': max_count_value
}
return result
def __one_step(self, old_super_resultion_vector, old_dirty_vector):
"""one step of the iteration process"""
max_count_index_and_value = self.__get_max_count_index_and_value(old_dirty_vector)
if sch.squared_abs(max_count_index_and_value['value']) >= self.__detection_threshold:
# eq 154 ref 2
complex_amplitude = self.__calculate_complex_amplitude(
old_dirty_vector,
max_count_index_and_value['index'],
max_count_index_and_value['value']
)
dirty_vector = self.__extract_data_from_dirty_vector(
old_dirty_vector,
max_count_index_and_value['index'],
complex_amplitude
)
super_resultion_vector = self.__add_data_to_super_resultion_vector(
old_super_resultion_vector,
max_count_index_and_value['index'],
complex_amplitude
)
result = {
'dirty_vector': dirty_vector,
'super_resultion_vector': super_resultion_vector
}
return result
else:
return None
def __build_super_resultion_vector(self):
"""eq 151 in ref 2"""
vector_size = mb.size_of_spectrum_vector(self.__number_of_freq_estimations)
return np.zeros((vector_size,1), dtype=complex)
|
STACK_EDU
|
Uber-geek [James McLurkin] was in Austin recently demoing his robot swarm. He’s on tour with EDA Tech Forum. [McLurkin] has multiple degrees from the MIT AI lab and worked at iRobot for a couple of years. Lately, he has been working on distributed robot computing: robot swarms.
[McLurkin] was an entertaining speaker and had an interesting view of robotics. He is optimistic that robot parts will become more modular, so it will be easier to build them, and more importantly, faster to design them.
- “There’s more sensors in a cockroach’s butt than any robot”
- “12 engineer years to design, 45 minutes to build”
- “If it can break your ankle, it’s a real [rc] car.”
His swarm (pictured above) is made up of over a hundred small identical bots, but he only brought about a dozen with him. The demo was still quite impressive. He had the robots spread out, clump together, play follow the leader and circle the wagons. Each behavior had a very simple rule behind it. To spread out, for example, each robot tries to move away from it’s nearest neighbor. The really fun part was when he had the robots perform a physical bubble sort. The rule for this was that each bot tried to put a higher-id bot on one side and a lower-id bot on the other. After a minute or so of bumping around the bots all lined up in id order.
I was interested in the details of the robot itself. Here’s a picture with the parts labeled.
Each robot has a unique ID number. They communicate with each other via IR and have sensors so that they can tell which direction and how far away the other bots are. The lights on top are just indicators so you can tell what the bots are doing. A mesh network is rebuilt several times a second, creating a directed graph from the ‘leader’ (which can be any arbitrary bot) that connects to each bot in the swarm. Any bot can act as a repeater, relaying instructions to bots that can’t talk to the leader directly.
Robot swarms are not a new idea: they’ve been floating around as concepts for many years. However, [McLurkin] was one of the first to actually build and program a large swarm (at one time he held the record for the largest robot swarm in the world). The idea caught on with researchers and today there’s even an open source robot swarm project. If you’re not up to building a whole bunch of robots, there are also simulators.
After the demo, we asked [McLurkin] about the cost of the robots. He said he didn’t know for sure, but estimated at least $2000 per bot. When we commented that “that’s a lot of money for 100 bots”, he pointed out that compared to the $20K+ that research robots can go for, it’s a bargain. He also said “This whole new world of hobby robotics just didn’t exist in the 90’s”. For robots to be deployed in swarms of hundreds or even thousands, in situations where they can get damaged or lost (search and rescue, military exercises) the cost will need to drop dramatically.
Here he is packing up his robot swarm. After the demo, we half expected them to pack themselves – no, they don’t.
For more info on robot swarms, their inspiration and possible uses take a look at [McLurkin]’s web site.
4 thoughts on “Swarm Robotics”
Any hope of seeing video of the physical bubble sort? I am certain that it would be quite entertaining…
Joey: yep, on the website — http://people.csail.mit.edu/jamesm/swarm.php#videos
He’s a great speaker. I saw him at one of the SD West 2008 keynotes, and he had about 15 of the robots with him. He gave an impressive little demo (likely the same speech you saw,) and it was probably the most entertaining lecture of the week.
I shot a few minutes of crappy video with my phone, but it proved to be useless.
If any of you get the chance to see him live, it’s worth it. A hacker with a personality!
Thank you allot, for sharing the great post!
Here, I found a youtube video about xbox live hacks- that I would like to share: xbox Live hacks…
but seriously, great post and thanks so much !!
I look forward to your next article !!
Please be kind and respectful to help make the comments section excellent. (Comment Policy)
|
OPCFW_CODE
|
Is this the best response we can come up with?
I understand why questions seeking pastoral advice are off topic, and on the whole I support the position. But I'm not satisfied by the way that some people who have come with questions that are out of scope for this reason get answered the way they do. Personally, I'd be much more comfortable if we had a question on the site in the class, "Where can I go for pastoral help with my problem?" and an answer which could be amended, which would provide links to pastoral advice resources, in a general form, "If you're Catholic...", followed by a list of resources, and similar constructs for other flavors of Christians--Lutherans, Calvinists, Orthodox, &c. People who raise pastoral advice questions, then, could be directed to this question and perhaps find a place for answers and support consistent with their current beliefs.
OK I'll admit it, a very small part of proposing this is to ease my own sense of guilt; but I do worry about that some of the people who are in need of this kind of help are being uncharitably turned away. Even if they should have a better source of assistance than some anonymous people on the internet, couldn't we do a better job of directing them to that help?
For the most part, the lists for X denomination would consist of "clergy from X denomination, or people you trust," would it not? Would there be any significant exceptions to that? Am I misunderstanding your question?
Well, the lists might include of clergy, or not. I know that there are forums and email lists on line for different communities of Christians, which might or might not include clergy, I don't think you misunderstood my question. I'm not totally opposed to telling people "there's no room at the inn for that", I just think it would be a bit more charitable to people who are hurting not to stop with that response, and be able to say, "try there."
If the problem with pastoral advice questions is that such advice from strangers on the internet is deficient, wouldn't we be doing a disservice by directing people to other strangers on the internet? I share your concern, however, when it comes to people without access to people who can really help them. I'm thinking we might want to put together resource lists or guides specifically for 1) people being abused, 2) un-churched and socially isolated people, and 3) people with a major problem within their church that needs outside help. What do you think?
If we do anything like that, we'd better be willing and able to scrupulously vet any places we send people to, and keep an eagle eye on it to make sure it doesn't get edited to include links to questionable sites and organizations. Once we start directing people to specific sites and organizations, we take on a certain amount of responsibility for what happens when they get there.
@LeeWoofenden does bring up a good point. But I think it could be addressed by a disclaimer on the order of "we don't know, or endorse the people at these links, but here's a possibility they might be able to help you. And as fredsbend suggests, I too would rather encourage people seek help from real people. But is the choice really between help from real people and no help? I suspect they are asking the questions here because they don't know any real people they trust enough that they can ask for an answer to their question.
If advice from strangers on the internet is deficient, is no answer better? Remember the man who fell among robbers in Jesus' parable? His help ultimately came from a stranger?
Yes, that is the best response we can come up with.
The problem with allowing "those of us who know the truth" (and we all think it's us) to give pastoral advice is that we then have to allow heretics with incorrect doctrines and false beliefs can give horrible advice that leads the new user astray.
As I said in Another reason this is not a Christian site
Would you really want this to be the place for a potential converts or new believers to learn about Truth?
I wouldn't. Assuming that there is one Truth, this isn't the place to
find it. We have many active users, all with different backgrounds
and beliefs. Since this site is meant as a place for sharing ideas,
any question a seeker asks will likely have many different answers,
all of which the answerer thinks is the Truth. A seeker will find
nothing but conflicting answers and confusion here. To avoid
confusing people on topics that may have, as Christianity believes,
eternal consequences, it's best to clarify that this site is not meant
to be a place to find that kind of Truth.
At least in person, there's the chance for the seeker to look the person giving them advice in the eye, ask follow-up questions, and read body language. It's hard for a BS detector to work with no visual cues to work on. See also How are “Strangers on the Internet” less legitmate than people you meet in real life?
I upvoted your answer, but that doesn't mean I agree with it. Remember that for the people hearing the story at it's first telling, to the man who fell among thieves, the "Good" samaritan was a heretic or a pervert! And I'm less concerned in my question with people who are seeking "the Truth", than those who are tormented by hurt, grief or other assaults on the body or the soul. My soul is disquieted by turning away those people with only a "we don't do that here". It seems to me too close to the reaction of the Priest and the Levite in the parable cited.
|
STACK_EXCHANGE
|
"He who loves practice without theory is like the sailor who boards ship without a rudder and compass and never knows where he may be cast." -- Leonardo da Vinci, 1452-1519
"The secret of the demagogue is to appear as dumb as his audience so that these people can believe themselves as smart as he is." -- Karl Kraus, 1874-1936
"[R]eality must take precedence over
public relations, for Nature cannot be fooled." -- Richard
P. Feynman. 1918-1988
"Ignorance more frequently begets
confidence than does knowledge." -- Charles Darwin.
"Certainement qui est en droit de vous
rendre absurde est en droit de vous rendre injuste." --
Voltaire (Francois-Marie Arouet) 1694-1778
Regression Graphics: Added-variable and
component+residual plots, as implemented in the car and
effects packages for R (November 2022, CANSSI Statistical Software
Conference): slides, R
Regression Diagnostics (May 2022, SORA/TABA Workshop)
Introduction to the R Statistical Computing Environment Lecture Series (July 2021, cancelled, ICPSR Summer Program)
Webinar: Using the R Commander in Basic Statistics Courses (March 2018, American Statistical Association/Teaching of Statistics in the Health Sciences Section)
Visualizing Simultaneous Linear Equations, Geometric Vectors, and Least-Squares Regression with the matlib Package for R (June 2016, useR! Conference, Stanford)
Structural Equation Modeling with the sem package for R, Writing R packages, Teaching Statistics Using R and the R Commander (January 2016, IQS Barcelona, Spain)
Linear Models, Logit and Probit Models, Generalized Linear Models (June 2010, York Summer Programme in Data Analysis)
to Nonparametric Regression (May 2005, ESRC Oxford Spring
Note: To reduce duplication, I've listed the most recent occasion on which I addressed a particular topic.
Information on John Fox, A Mathematical Primer for Social Statistics, Second Edition (Sage, 2021).
Information on John Fox, Regression Diagnostics: An Introduction, Second Edition (Sage, 2020)
Information on John Fox and Sanford Weisberg, An R Companion to Applied Regression, Third Edition (Sage, 2019), including access to on-line appendices, data files, R scripts, errata, updates, and more.
Information on John Fox, Using the R Commander: A Point-and-Click Interface for R (Chapman and Hall/CRC, 2017), including access to data files, errata and updates,
Information on John Fox, Applied Regression Analysis and Generalized Linear Models, Third Edition (Sage, 2016), including access to appendices, datasets, data-analysis exercises, errata, answers to odd-numbered exercises in the text, and bonus Chapter 25 on Bayesian estimation of regression models and Chapter 26 on causal analysis of observation data.
Information on John Fox, Nonparametric Simple Regression: Smoothing Scatterplots (Sage, 2000), and Multiple and Generalized Nonparametric Regression (Sage,
Information on Robert Stine and John Fox, eds. Statistical Computing Environments for Social Research (Sage, 1996).
candisc (R package for canonical discriminant analysis), by Michael Friendly and John Fox.
car (Companion to Applied Regression) package for R. Software associated with Fox and Weisberg, An R Companion to Applied Regression, Second Edition.
effects (R package for effect displays), by John Fox, Sanford Weisberg, and Jangman Hong.
heplots (R package for visualizing hypothesis tests in multivariate linear models), by John Fox, Michael Friendly, and Georges Monette.
ivreg (R package for instrumental variables regression), by John Fox, Christian Kleiber, and Achim Zeileis.
matlib (R package for learning linear algebra), by Michael Friendly and John Fox.
polycor (R package for polychoric and polyserial correlations).
Rcmdr (R package, a basic-statistics graphical-user-interface for R).
RcmdrPlugin.survival (Rcmdr plug-inpackage for survival analysis).
sem (R package for structural equation modeling), by John Fox, Zhenghua Nie, and Jarrett Byrnes.
Some additions to Cook and Weisberg's Arc software.
Some of the information on this web site is in the form of Portable Document Format (PDF) files. A free viewer for PDF files, Adobe Reader, is available from the Adobe web site.
Contact me by email: jfox AT mcmaster.ca
Last Modified: 2023-07-04 John Fox, jfox AT mcmaster.ca.
|
OPCFW_CODE
|
I can't imagine John is driving a car
Are the following sentences correct? and what is the difference in meaning?
I can't imagine John drives a car.
I can't imagine John driving a car.
I can't imagine John is driving a car.
I think the middle one is correct, but I don't understand its part of speech, tense and clause elements, and I think the other two are wrong, in the same I can understand it grammar and tense
Would you please tell us what you think about these sentences yourself?
Actuality, I think the middle one is correct, but I don't understand its part of speech, tense and clause elements, and I think the other two are wrong, in the same I can understand it grammar and tense
Fine, you can add this to your question. BTW, some verbs take both infinitives and present participles and in my opinion both "drive" (without s) and "driving" can be used with the verb "imagine". However, I am a learner like yourself. Let's wait for the native friends.
@Cardinal it has to be "drives"; "drive" doesn't work.
@Kat <strikeout>I don't think "drives" is right in this context; the only ones that seem right to me (as a native speaker) are "driving" and "that John is driving" (the latter of which isn't in the question), both of which mean slightly different things.</strikeout> Edit: Considering them as different tenses, they all work.
@wizzwizz4 "drives" sounds kind of British to me, and I probably wouldn't use it personally. My main point was "drive" is not grammatical.
@Kat what about this: "I watched john climb the wall"?
@Cardinal that's fine. It doesn't work if you swap "watched" with 'can't imagine' though. Even "I imagined John climb the wall" is strange to me. I think it's because imagining is hypothetical, but I'm not sure. You could ask it as a new question to get a more technical response.
Well, what else would John drive?
All of them are grammatically correct, and I can imagine using all of them in different situations.
I can't imagine John drives a car. The use of the simple present tense implies something that is factual or habitual, so this means "I can't believe that John regularly or habitually drives a car. It might be used in a context like this:"I need someone to drive me to the train station tomorrow. Do you think John could drive me?"
"I can't imagine John drives a car. He lives downtown where there's no parking, he's always talking about how awful cars are, and he's as poor as a church mouse anyway."
I can't imagine John driving a car. The use of the present participle implies the action of driving a car. I would use it like this:
"Who will drive the car tomorrow? John?"
"Ha! I can't imagine John driving a car. He gets confused by anything more technologically complicated than a toaster."
I can't imagine John is driving a car. The use of "is driving" implies that John is driving the car right now as we speak. I would use it like this:
"I heard that John is going to Bakersville today. Is he driving there?"
"I can't imagine John is driving a car. It's a long way and he doesn't know the roads, so he's probably taking a bus."
Isn't John driving.. an example of a reduced relative clause?
@user178049 No: John driving a car can't be expanded to relative clause because there's no 'gap' (missing constituent) which a relative's referent could fill and yield an independent clause. It's a gerund clause (or gerund-participle clause, if you follow CGEL).
Shouldn't it be "I can't imagine that John drives a car" and "I can't imagine that John is driving a car" ...? And I thought a gerund clause would always have ing in it somewhere.
@JohnWu, "that" in that meaning can usually be omitted without changing the meaning of the sentence (although in some cases omitting it can make the meaning ambiguous).
Is it right to write "I can't imagine John drives a car."? I am talking about the difference between writing and speaking. Also, I think we can say "I can't imagine John's driving a car.", in which the 's is a genitive apostrophe.
@user178049 No, John driving a car is a gerund.
@Cardinal All three sentences are valid in both speaking and writing and stangdon's explanation of the differences holds in both modalities. I think there are people who would use "I can't imagine John's driving a car." with a genitive apostrophe. Their meaning would likely be like the 2nd example. But note that the apostrophe could also indicate a contraction of "is driving" so then this is the same as the third example.
in "...John is driving a car" I think the emphasis changes to the remarkability of driving a "car," as opposed to bicycle, or motorcycle, etc. In my version, John usually would not be driving a car, for whatever reason, normally, but now is doing so.
I read “…John driving…” with a participle, not a gerund. I can imagine John bald, I can imagine John in a star-spangled top hat and clown shoes, but I cannot imagine John (a-)driving a car: these two phrases are parallel, and their head is John.
|
STACK_EXCHANGE
|
From: Matthias Hoys <matthias.h...@gmail.com>
Date: Tue, 15 May 2012 07:30:17 -0700 (PDT)
Local: Tues, May 15 2012 10:30 am
Subject: Re: Slower performance after enabling async io on Oracle Linux
On Tuesday, May 15, 2012 4:07:47 PM UTC+2, Matthias Hoys wrote:Problem solved: I saw the following in the listener.log file: "WARNING: Subscription for node down event still pending". The solution is to add "SUBSCRIBE_FOR_NODE_DOWN_EVENT_LISTENER=OFF" to the listener.ora file, and restart the listener. Apparently, the listener service constantly tries to contact a RAC service, but this is a non-RAC installation. It was already a known issue with Oracle 10g. Way to go, Oracle ;-) Let's see if this improves the async io performance.
> On Tuesday, May 15, 2012 3:16:46 PM UTC+2, Matthias Hoys wrote:
> > On Tuesday, May 15, 2012 2:52:33 AM UTC+2, Mladen Gogala wrote:
> > > Matthias, I the problem may as well lie on the network side. Check netstat
> > > -s for timeouts, retransmits or packets being dropped. Depending on the
> > > hardware, you may want to use jumbo packets, increase rmem and wmem
> > > parameters. Below are two articles about tuning the network:
> > > Problem with Linux is that it is not well instrumented, there is no wait
> > > So, if you want a recipe, measure the duration of the system calls by
> > Using netstat -s, I think I found something:
> > Tcp:
> > I'm no network specialist, but it seems like there are a huge amount of "failed connection attempts"?? Or is this normal for iSCSI, I never worked with iSCSI storage before...
> > Btw: we don't have a local disk, everything is on VMWare and the disks are on the iSCSI box.
> > Thanks for the help,
> Update: the Oracle Listener is causing all the "failed connection attempts". When I stop it, no more failed connection attempts. I'm now further debugging this...
You must Sign in before you can post messages.
To post a message you must first join this group.
Please update your nickname on the subscription settings page before posting.
You do not have the permission required to post.
|
OPCFW_CODE
|