huh? it was my understanding that linux does exactly as you stated it should: use as much ram for cache as possible. heck, i've got 77% of my 3.5 gb (really 4gb, but only seen as 3.5 under 32-bit ubuntu 7.10) used as cache right now, and 16% used for programs. the argument for vista being a memory hog is not its prefetch/caching but that it uses an insane amount of committed memory for nonsense stuff. take the sidebar for example. a coworker and i were working on a customer's pc which had vista installed on it. we compared the memory usage with the sidebar running vs. immediately after closing the sidebar. it was a difference of about 400mb. i'm about 99% sure that was committed memory. just to make sure you realize i'm not a hardcore linux fanboy here, i would like to point out that you can run xp fairly well (for general email/word processing) on 400mb (in practice that would be rounded up to 512mb, but that's still fairly close considering what we are talking about here). that's just to point out how much microsoft bloated their own product: running an entire OS smoothly inside of what it takes to run just (what should be) a small addon.
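btw, if you want to see where numbers like that 77% come from, here's a minimal sketch that reads them straight out of /proc/meminfo on linux (field names are the standard ones the kernel reports):

```python
# minimal sketch: compute the cache vs. program split on linux by
# reading /proc/meminfo (values are reported in kB)
meminfo = {}
with open("/proc/meminfo") as f:
    for line in f:
        key, rest = line.split(":", 1)
        meminfo[key] = int(rest.split()[0])

total = meminfo["MemTotal"]
cached = meminfo["Cached"] + meminfo.get("Buffers", 0)
apps = total - meminfo["MemFree"] - cached

print(f"cache: {100 * cached / total:.0f}%  programs: {100 * apps / total:.0f}%")
```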
and as for your brief rant against the average /. user:
"Vista already does aggressive caching and makes full use of RAM that isn't currently being used by applications, but slashdot keeps going on about how it's a bloated piece of crap that uses 2GB of RAM when idle. Yet they don't complain that their system runs a lot smoother thanks to prefetching, which analyses program usage and preloads (in the background) data that it anticipates being loaded from disk in the future."
you seem to imply that most /. users are running vista? or that we are happy because we use linux, which requires less memory than vista, thus allowing more of the memory to be used for active programs/cache? after reading your post a little more closely, i can't seem to make heads or tails of it...
"Do you want your ram to sit idle the rest of the time, and have your hard drive grind away, because /. would rather see the OS use 100mb of ram at idle and have the rest doing nothing?"
this seems to contradict what you said earlier... first you say that we are happy because of how our system manages memory, then criticize us for running an OS which does not manage it well?
and no, we don't want our memory wasted... which is why most of us run linux (or at least xp rather than vista): the OS does not require as much memory, again allowing a greater percentage of the memory to be free for general use rather than backend stuff, and (in the case of linux, not xp) it uses whatever is free after the committed memory for cache and whatnot...
so in short, you seem to be criticizing the average /. user for using an os that does not use the full potential of the system memory, then criticizing us for criticizing vista, which (you claim) does. in actuality, most of us use an os which does use the full potential of the system memory (*nix), and criticize vista for needing so much memory just to run, much less have some left over for caching etc.
i don't think i said that as clearly as i could have, but i think you get the point. also, i may have completely misunderstood your post or how the different OSes manage memory (i'm a bit of a n00b), so *to all /. readers* feel free to correct me on anything i've said.
Dear Community members,
join us for our next Meetup on February 26th, 2019, starting at 6pm in our Camunda HQ in Berlin. There will be lots of time for open discussions, and we encourage you to bring your own questions. Along with the presentations and networking opportunities, we’ll also provide food, as well as alcoholic and non-alcoholic beverages. The agenda and speaker information can be found below. We hope to see you there!
6:00pm: Doors open
6:00pm - 6:30pm: Catering/Snacks, Drinks and Networking
6:30pm - 6:45pm: Introduction to Camunda
6:45pm - 7:30pm: Hans Hübner & Robert Peckett, LambdaWerk
7:30pm - 8:15pm: Daniel Meyer, Camunda
8:15pm - end: Networking and drinks
Hans Hübner & Robert Peckett, LambdaWerk
Hans has three decades of experience as a software developer and development manager in European and US enterprises. Before joining LambdaWerk, he architected and implemented a new EDI subsystem for a multi-state Dental healthcare MCO in the US.
Robert has a BSc in Software Engineering and spent eight years developing and refining Virtual Learning Environments. At LambdaWerk, he streamlines, automates and improves production frameworks and controls, making our services especially robust and secure. Rob is a keen gamer and an avid collector of vintage consoles and games from the '80s and '90s.
Process Automation with Camunda and Clojure
Camunda BPM is a great toolkit for developing process modeling and automation solutions. LambdaWerk has chosen it as the platform for automating its file processing business solutions for the healthcare industry. Camunda is very developer friendly, but for many of the automation tasks that LambdaWerk has to implement and run, the full capabilities of the BPM engine are not required. Instead, we mostly use simple building block tasks that are combined to implement a given business automation process.
In our presentation, we will demonstrate how we use our own custom external task executor, written in Clojure, to implement these higher-level building blocks. Using Camunda Modeler templates automatically generated from our executor source code, we can easily extend the catalogue of tasks that are available to a business process implementer. We completely remove the requirement to write code when implementing standard processes and thereby give ownership and control over the running business process implementations to the business process developer.
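As a concrete illustration of the external task pattern described above, here is a minimal sketch of a worker built on Camunda's REST API, in Python rather than LambdaWerk's Clojure. The engine URL, worker id, and the `enrich-member-record` topic name are invented placeholders, and the real executor generates its building blocks from source code rather than hand-writing them like this:

```python
import requests

CAMUNDA = "http://localhost:8080/engine-rest"   # assumed engine location
WORKER_ID = "building-block-worker"             # any unique worker name

def poll_and_complete(topic):
    """Fetch-and-lock one external task for `topic`, run the
    building-block logic, and report completion back to the engine."""
    tasks = requests.post(f"{CAMUNDA}/external-task/fetchAndLock", json={
        "workerId": WORKER_ID,
        "maxTasks": 1,
        "topics": [{"topicName": topic, "lockDuration": 10_000}],
    }).json()
    for task in tasks:
        # ... the actual building-block work would happen here ...
        requests.post(f"{CAMUNDA}/external-task/{task['id']}/complete",
                      json={"workerId": WORKER_ID, "variables": {}})

poll_and_complete("enrich-member-record")
```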
Daniel Meyer, CTO at Camunda
Daniel joined Camunda in 2010, first as a developer and consultant, then technical lead and engineering manager. Later he became VP of Engineering and eventually Chief Technology Officer.
Fireside chat with Daniel Meyer
This time we'll try out something new: a fireside chat with Daniel. This will be a moderated Q&A session, enabling you to ask the questions that always crossed your mind when it comes to Camunda.
Please head over to slido.com (https://app.sli.do/event/wv1xwmvk/questions) or use #Camunda on www.slido.com to add your questions. You can see what others are asking, and can also ‘like’ questions if you’re eager to see them discussed – the more votes a question gets, the more likely it will be put to Daniel. Björn will organize the questions before the event, but you can also use the tool during the event to pose new questions. This should be fun!
One of my favorite bloggers, JP Boodhoo, has a little saying he puts at the end of every post:
Develop with passion!
I’ve only met JP once, and it was quite obvious from that meeting that passion is somewhat of a mantra for him. It’s also something the best career counselor offered me years ago: “Find what you’re passionate about, and do that”. When it comes to passion and programming professionally, I’ve seen two types of passion:
- Passion for the craft
- Passion for the domain
I’ve met far too many programmers that have passion, but only for the craft. When the business would ask for some feature, their passion would lead to a technically interesting solution, but one that may or may not have solved the original problem. This could be solved with deeper analysis of course, but what would lead that developer towards deeper analysis? What motivates them to do so? If all the developer is looking for is technical success, they will only reluctantly care about the domain problem, and only enough to get back to the technical problems.
We’ve all been guilty of this at one time or another: twisting requirements, hedging estimates to make a fun technical solution look like a no-brainer business decision. We’re doing a great disservice to our employers with these actions. I believe this misdirected motivation comes from a lack of passion for the domain.
One pattern I’ve started to notice lately is that continued success requires passion. Without passion, there is little sustained success. Short-term success may come by accident, but it’s far easier to be unlucky than lucky.
When we apply our passion, we can’t put all of our eggs in the technical basket. Domain-driven design talks quite a bit about the Ubiquitous Language, and how it is important for developers and domain experts to share the same language. This language needs to come from the domain, not technical patterns and such. Without passion for the domain, our organization will never achieve complete success.
When we’re in conversations with the business analysts about specific stories, working through the motivation behind the story, passion for the domain creates a two-way conversation. Otherwise, it’s two people speaking different languages, trying and failing to translate in and out of technical mumbo-jumbo. “So you want to use an ESB for that?” “All I want is to not have to work these orders manually” “So you want an ERP system?” “What?”
Eventually the business analyst caves to technical demands, hoping that they solve the business problem at hand. Since they aren’t the technical expert, the BA puts their trust in the developer. Without passion for the domain, their trust is truly misplaced.
I truly believe it is a developer’s job to bridge the gap between technology and business. Success in bridging this gap requires both passion for the craft and passion for the domain.
Dear Ranchers, I am happy to say that I have cleared my XML certification today. Actually, the exam was very easy and nothing to worry about. I finished my exam in 58 minutes, had marked 22 questions for review, and had lots of time to go over them. No one needs to worry about time. My personal experience is that everyone should just take the exam; please don't follow the messages saying the exam is tough and there's no time, and so on. I do agree that there are lots of scenario-based questions, but many are simple and easy to understand, and it's easy to spot the answer correctly. We have to keep our heads cool, and when there are some big questions, please mark them and go on to the next question so that we can complete the first round ASAP. Some questions are from the IBM mock exams; very simple questions from DOM and SAX. I was really worried about this one because I hadn't done any examples in DOM/SAX, but the questions are very simple. Before going to the exam, please read all the messages posted by the ranchers; that adds value. I studied for 3 months, part time only. Thanks once again, JavaRanchers. All the best to all the others who are preparing for certification. Vasan
Congratulations Vasan. Ranchers, as I said earlier, there is nothing to fear about the exam. Go get the certification. Vasan, thanks for posting the favorable review. It may be useful if you can remember some of your exam questions. Best of luck for the future.
Congrats Vasan! It is a wonderful score. After reading one of the posts earlier, I rang up Prometric to delay my exam by 2 weeks. But your post has bolstered my spirits. It would be more helpful if you could shed some light on the XML Schema, XSL, and DOM/SAX questions.
Congratulations on your achievement, Vasan. It is refreshing and reassuring to hear yet another testimony about the manageability of the exam after having read a number of "scare" stories in the past. Of course, opinions can vary greatly on such matters, and I think it really pays to listen to all. I'm new to this forum and am just starting to prep for 141, but I passed the SCWCD last week, and I relied heavily on the advice and opinions of those who'd gone before. Thanks again for sharing your thoughts, and good luck in future endeavors.
Congratulations Vasan. Vasan and Satya, your messages are very encouraging. Could you roughly mention how many questions came from SOAP, WSDL, UDDI, and XSL-FO? My question is: can we take them in a lighter sense if we are thorough with the main topics (DTD/Schema, XSLT, DOM/SAX) and the scenario-based things? Thanks.
Sorry, had to delete the questions that used to be in this post. Please remember - do not post questions from the actual exam! Hi Jayadev, in the exam I had a question on SOAP, which I think was very basic. Deleted The exact wording may be different but the content is the same. No questions on WSDL. Deleted The options might be slightly different. Deleted I can say this is hardly a UDDI question.
These are all the questions I got on those technologies (SOAP, WSDL, UDDI, XSL-FO). I may not be authoritative enough to say “take them in a lighter sense", which would be unfair. However, I myself concentrated on other things and it worked out successfully. Cheers, Satya [ October 20, 2002: Message edited by: John Wetherbie ]
Thanks Satya. Your elaborations are very helpful.
Hi all, for me also the same two questions that Satya mentioned earlier came up. There were also some scenario-based questions on UDDI, like: for fast searching, which one will you use, UDDI, XQL, or XSLT? Then questions on well-formedness, many on Schema, but all are simple (read xfront.com), and some examples from XPath and XSLT (zvon.com). Then: what is the disadvantage of DOM (obviously, memory)? Very, very basic questions only. Those who are already certified, can you please tell me where to get the wallet and other details? Right now they gave me only the marksheet. Vasan
Nice shot, Vasan. We ain't got many rules 'round these parts, but we do got one. Please change your displayed name to comply with the JavaRanch Naming Policy. Thanks, Pardner! Hope to see you 'round the Ranch!
Are enterprise architecture teams hopelessly outdated and anachronistic in an agile and service-oriented IT world? The verdict is in, and the answer is yes. It's time to pull the rip-cord, shut them down, and send the folks into the trenches. Like centralized planning in a Stalinist regime, most folks now see that heavyweight enterprise architecture was horrendously hidebound, autocratically inflexible, almost comically out of sync with the business, and, worse, far too slow to ever respond to the real needs of business units and their invariably unique requirements. It's perfectly clear that enterprise architecture won't survive in its present form much longer, sucking dollars out of the IT budget and taking the best talent while producing little in the way of actual value.
However, we also can't forget that we have to practice enterprise architecture in a world where businesses themselves are now often defined primarily by loosely coupled federations with other businesses, via automated processes that use service-oriented techniques to form their value chains (e.g., Amazon and its 50,000 partners linked together via web services, ADP and their payroll services, etc.). In this new service-oriented world, companies now realize they can seriously cut costs and increase overall value by building new composite systems out of their own services, and those of other companies, instead of building or buying yet another application silo. Service-oriented architecture achieves the long-sought goal of enterprise application integration, provides real reuse of value, and solves myriad other classical IT problems in one fell swoop. Does this emergent focus on service-orientation and enterprise architecture give EA teams one final new lease on life as the enablers of the service-oriented enterprise? Or does it finally make them completely irrelevant?
Figure 1: The future of enterprise architecture: Agile, Non-Centralized, Accountable, Service-Based
Certainly, some folks, like Scott Ambler, have recently been advocating a more agile version of enterprise architecture. In most realizations of agile enterprise architecture, there is a considerably reduced central planning aspect, which was formerly concerned mostly with authoring master models of the enterprise architecture and creating piles of write-only documentation. Agile EA has more to do with working with people, coaching, and getting out in the trenches and mentoring developers on practical architecture techniques and developing architecture skills. Stephen Cohen, for example, has been doing an admirable job of describing modern EA recently, but you can clearly see the us vs. them mentality with his safari approach.
In the end, the common complaint about enterprise architecture from most stakeholders outside the EA team is that EA is notoriously unresponsive to specific needs, too difficult and expensive to comply with, needlessly complex, and too general-purpose. EA also frequently prescribed inappropriate, outdated, or excessively bleeding-edge technologies. There is also the realization that the growing adoption of agile processes may not be compatible with traditional EA and might never support the use of formalized enterprise architecture. Finally, many folks in the organization just don't have the skills or tools to support what EA groups usually pump out: ivory-tower architectures that were never validated in the field beyond a pilot or two.
In any case, the answer is that service-orientation won't give EA teams one more chance; in fact, I think service-orientation will likely finish them off. Coupled with the widespread perception that central enterprise architecture groups have been a rampant failure and don't deliver, and with central control essentially being eliminated by pervasive service-orientation, we'll probably watch EA evaporate as a going concern before our very eyes. The only wrinkle comes in with the ever-growing demand that companies build open, service-oriented organizations, a demand pushed by increased competition, globalization, and the need for more efficient (re)use of resources. Over the next few years, to fuel increasing collaboration, supply-chain integration, and demand for web-facing delivery of services, we will find that many organizations will be forced to deliver their enterprise architecture in the form of easily consumable, non-visual, open services. By looking at some of the emerging models for these efforts, I think we can begin to see the outlines of what enterprise architecture organizations will look like for the rest of this decade. We have to remember the classic IT antipattern: Not Invented Here. All too often our industry comes up with a priori solutions and either ignores or fails to recognize existing, successful models unless they fit preconceived notions. In other words, if enterprise architecture, '90s-style, just doesn't work, then let's look at what does.
The best enterprise architecture I've come across hasn't come out of any ivory tower group. Invariably, the best architecture I see comes naturally from self-organizing thought leaders in an organization who seek each other out and collaborate on common solutions to their problems. Rather than the us-vs.-them mentality of old-world enterprise architecture, there is only an us mentality. Instead of prescribed standards, designs, technologies, and tools, there is real consensus and immediate buy-in. Sure, there are political camps in any organization that don't like playing with other folks, but the inexorable drive towards service-orientation in the enterprise makes it so that people have to play nice together like never before, or they can't get access to, or provide, the services they must in order to survive. CIOs, chief architects, and other technical officers must foster this collaboration and seek out the folks who are 1) technically competent, 2) have ground truth in what the business actually does, and 3) have great people skills. The truth is that enterprise architecture has always been much more about people and business than about technology. Making enterprise architecture happen today is about building agile networks of people in your organization who have the big picture, the local political power, a real understanding of technology, and a stake in the final outcome.
Technorati: Enterprise Architecture, Computers and Internet, Agile Methods
Do you have a crashed Windows computer at your disposal? Well, there are many ways to recover its data and information. But if it happens to be a rather old computer, then it might get tricky to recover everything. The tool that we are talking about in this post might help you recover one part of it, and that is the Windows Registry. Windows Registry Recovery is a freeware tool that can help you recover a crashed machine's Registry configuration. What this tool does is read Registry hives and then convert them to REGEDIT4 format. Read on to find out more about this tool.
Windows Registry Recovery
Before we start, you must know what a Registry Hive is and how this tool works. According to Microsoft’s Windows Dev Center:
A hive is a logical group of keys, subkeys, and values in the Registry that has a set of supporting files containing backups of its data.
Each time a new user logs on to a computer, a new hive is created for that user with a separate file for the user profile. This is called the user profile hive. A user’s hive contains specific Registry information pertaining to the user’s application settings, desktop, environment, network connections, and printers. User profile hives are located under the HKEY_USERS key.
In short, a Registry hive is a file on disk where Windows stores a section of your computer's Registry, along with backups of its data. So, in the case of a crashed computer, you can simply use its hard drive to extract the Registry values using Windows Registry Recovery.
Windows Registry Recovery can read files containing Registry hives from all Windows versions. It extracts a lot of useful information about the Windows installation settings of the host machine.
The process of recovering the Registry is quite simple. Just connect the crashed computer's hard drive to another computer and run this tool. Now you have to locate the Registry hives on this drive (the system hives SYSTEM, SOFTWARE, SAM, SECURITY, and DEFAULT live under \Windows\System32\config, and each user's NTUSER.DAT sits in the user's profile folder) and open them up with Windows Registry Recovery.
Once a file is open, you can preview all the information contained in it. You can view some essential information about the file, or the information and configuration of the Windows installation. You can check what programs were installed and what the hardware map was when the computer crashed. Moreover, you can even view the network configuration, firewall settings, users on the host computer, and so on. The tool lets you view every corner of the hive in a very organized way.
Moving on to the export feature, you can export the entire hive to the REGEDIT4 format. REGEDIT4 is simply a plain-text format for saving Registry backups. It can be merged directly into the Windows Registry using the Windows Registry Editor. So, if you want to restore the Registry configuration on your host computer, this is the way to do it. Or you can just save the backup for future reference.
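For reference, a REGEDIT4 export is an ANSI text file that starts with the literal header `REGEDIT4`, followed by key paths in brackets and `"name"="value"` pairs with backslashes doubled. The key and values below are invented for illustration:

```
REGEDIT4

[HKEY_LOCAL_MACHINE\SOFTWARE\ExampleVendor\ExampleApp]
"InstallPath"="C:\\Program Files\\ExampleApp"
"Version"="1.0"
```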
Windows Registry Recovery is a great tool that can help a lot of people in trouble caused by crashed computers. The Registry is an integral part of Windows computers, and there is often a requirement to backup or recover it. This tool works with hives from all versions of Windows ranging from Windows 9x to Windows 10. Plus, it is free to use.
You can download Windows Registry Recovery from its homepage.
The purpose of Scan3d, as its name suggests, is to acquire the three-dimensional shape of a real-world object. In particular, Scan3D is a program that uses a set of pictures of the object to obtain a VRML description of its surface.
This figure shows an example of what scan3d can currently do.
First of all: it is not so simple for me to write in English, so if you find errors or parts that are not well explained, don't hesitate to send me a corrected copy of this document.
This project is in its early stage of development, so it is incomplete, full of bugs, and so on. There are still many things to do, so if you think you can contribute (any kind of contribution), you are really welcome! You can find a list of things to do in the main directory of the project (the file is named, as usual, “TODO”).
Here I describe what instruments you need and how you should use them.
Scan3D requires only a digital camera and a rotating plane. A digital camera is not so difficult to find nowadays. The rotating plane can simply be the turntable you put under your TV to rotate it easily, but you can use anything else that lets you rotate the object around a fixed axis.
There are many suggestions I could give you, but I prefer to begin with a quick description of what you should do to “scan” an object. Place the object you want to scan on the rotating plane. Then put the digital camera in front of the object. Now relax and grab a coffee: you should take about 70 photos to obtain good results. The procedure is as follows: take a photo, then rotate the object by a precise fixed angle; take another photo, rotate again by the same fixed angle, and continue in this way until the object has rotated by 180 degrees. I give more details in the section “Suggestions”. Read it carefully.
Scan3D has some limitations. First of all, it can't see the concavities of the sections it reconstructs. This figure gives a visualization of what this means:
This figure shows how Scan3D reconstructs the sections of the object.
NOTE: the axis of rotation is orthogonal to the plane which contains the sections.
This problem will be partially solved in the future, but for now it can give bad results for a large number of objects. Scan3D is not for those who want perfection (and want to pay for it!).
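To see why the reconstruction cannot recover concavities, consider this minimal sketch (not part of Scan3D itself) of carving one section from per-photo silhouette extents, under the same orthographic assumption the program makes. Each photo only constrains the section to a slab, and an intersection of slabs is always convex:

```python
import numpy as np

def carve_section(silhouettes, grid_size=200, radius=1.0):
    """Carve one horizontal section of the object from silhouette extents.

    silhouettes: list of (theta, left, right) tuples, where theta is the
    turntable angle in radians and [left, right] is the horizontal extent
    of the object's silhouette in that photo (perspective neglected).
    """
    xs = np.linspace(-radius, radius, grid_size)
    X, Y = np.meshgrid(xs, xs)
    inside = np.ones_like(X, dtype=bool)
    for theta, left, right in silhouettes:
        # After the turntable rotates the object by theta, a section point
        # (x, y) projects to horizontal image coordinate x*cos - y*sin.
        proj = X * np.cos(theta) - Y * np.sin(theta)
        # Each photo contributes one slab constraint; intersecting slabs
        # can only ever produce a convex section.
        inside &= (proj >= left) & (proj <= right)
    return inside  # boolean mask over the section plane

# e.g. 72 photos spanning 180 degrees means one photo every 2.5 degrees
```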
Second problem: Scan3D assumes that perspective effects are negligible. This means it is preferable to put the object far from your camera and use its zoom if possible. Don't worry too much: look at the following example. It was obtained with an inexpensive digital camera and a few good ideas (find them in the section “Suggestions”!).
A view of the VRML file. This object required 72 photos.
Scan3D is still in its first stage of development. Many things need to be improved.
If you think you can help me, please, write to me!
A few suggestions:
To reduce perspective effects: put your camera far from the object and use the zoom to avoid obtaining too-small images of the object;
Your digital camera should not move during the scanning process, so the best way to take photos is to connect the camera to your PC and control it from there, instead of touching the camera in any way.
Do not start taking photos until you have verified that, during the rotation, the object never leaves the field of view.
Hey Roger – it’s a fundamentally different paradigm. With Dropbox, the file lives on your local drive in addition to being sync’d into the cloud. With Google Drive, there is a file on your local drive, but the only contents of those files are essentially https URL links to remote docs.google.com files, not the actual file contents. Thus while I’m sure EF could go through complicated machinations to download the file, do its thing, index it, etc… etc…, given that it’s such a different paradigm, there’d be a lot of things that would end up breaking.
Speisert, thanks. I ended up going with Dropbox. We’ll see how well it works out.
I am looking forward to Apple’s new OS, although experience tells us it may be buggy for many months.
As far as Box, are you saying it uses the same paradigm as Dropbox?
I may not fully understand the original question and your response. I am new to EagleFiler. With that said, Google Drive does sync your local files. If they are actual Google Docs, Sheets, or Slides files then, like you said, they are essentially links to the contents stored on Google's servers. However, for any other type of file, a local copy is stored and synced with Google Drive and any other device you have connected/shared.
Like I said, I’m new to EagleFiler, so I would not attempt to suggest that Google Drive is a good solution for EagleFiler (yet). I’ve found Google Drive to work very well in syncing files of many types. No significant problems or complaints here. I’ve also synced somewhat complex file systems, e.g., DEVONThink “databases”, across several computers too.
I’ll be testing EagleFiler with Google Drive. I suspect that any metadata that may be lost is in hidden files. I’ve not experienced any metadata loss from document-type files, e.g., PDF, DOC, ODS, etc., on Google Drive before.
Wow, it would be nice if Google Drive worked with EagleFiler. I spend $100 a year on Dropbox; that’s on top of the $100+ for Office (OneDrive doesn’t work well with EagleFiler either, due to a file-name-length problem; otherwise I could just use that service).
Plus my office now uses Google Drive, so I’ve got stuff spread across three cloud services. (Dropbox for most of my “home/freelance” stuff, OneDrive for OneNote (which I live in, in my office), and Google Drive for my day job.)
I tested Google Drive again. The performance problems of the early versions seem to be fixed. It still does not preserve creation dates, file labels, or extended attributes. If you don’t mind losing those, you should be able to use it with EagleFiler.
Hmm. Creation dates are certainly useful. From the above reference to Box using the same “paradigm” as Dropbox, I take it Box and EagleFiler work well together?
[Add/edit: I just read the manual and it sounds as though Dropbox doesn’t save creation dates either, so I guess I haven’t been using them to begin with! So $24/year for Google may be the way to go, for me. It would be $60 for Box, and I pay $100 for Dropbox.]
For most modern apps, there is probably nothing essential in the extended attributes. It really depends on the apps that you use. Some apps use them to remember user state about the file, e.g. its text encoding or cursor location. The system uses extended attributes to store file tags. However, EagleFiler has its own tag storage and can restore the tags if the extended attributes are lost. The system also uses them to store extended permissions information if you are using ACLs.
Older Mac files sometimes stored important data in the resource fork, which is stripped by the services that don’t support extended attributes.
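If you want to check for yourself what a given sync service preserves, a rough sketch along these lines works on macOS (the file name is just an example, and it assumes the stock `xattr` command-line tool): snapshot the metadata, let the file round-trip through the service, and compare.

```python
import os
import subprocess

def metadata_snapshot(path):
    """Record the bits of metadata a sync service may or may not keep."""
    st = os.stat(path)
    return {
        "modified": st.st_mtime,
        # st_birthtime is the creation date (available on macOS/BSD)
        "created": getattr(st, "st_birthtime", None),
        # macOS ships an `xattr` tool; with no flags it lists the names of
        # a file's extended attributes (Finder tags, app state, ACLs, ...)
        "xattrs": subprocess.run(["xattr", path], capture_output=True,
                                 text=True).stdout.split(),
    }

before = metadata_snapshot("notes.pdf")   # example file name
# ...let the file round-trip through the sync service, then:
after = metadata_snapshot("notes.pdf")
for key in before:
    if before[key] != after[key]:
        print(f"{key} changed: {before[key]!r} -> {after[key]!r}")
```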
Event E2/Proposal Astrolabe
Please don't edit this entry unless you are the contestant (or the E2 Contest runner) until the E2 Contest is completely over.
Congratulations, this entry has been shortlisted!
|This is an entry for E2: The Event Event.|
A contest to see who can be the best teacher/mentor of YPP puzzles
|Audience|Excellent puzzlers|
|Elapsed time|From announcement to submission deadline|
|Unit of Entry|Individual Submissions|
|Participant Time|Variable, but at least a few hours per entry|
|Expected Participation|As many as possible|
|Judging Time|As necessary to review/evaluate entries|
|Platform|Forums / web|
N.B. Updates to this event description are now occurring on the forum version of this post, which you can find at Puzzle Tutorial Contest
Avast ye blood-handed cut-throats! Why be ye all dastardly curs, like that mutinous scalliwag Barbarossa? What of the Code of the Pirates? What of honor on the open seas?
This event is intended to test pirates' skills, for once, not by how well they can defeat other pirates, but by how much they can help them.
PLEASE HELP: I would like to make the prizes in this contest as HUGE as possible, to attract many quality submissions and thus benefit the whole YPP community as much as possible. If you would like to donate to this contest, please read over the "prize" section below and contact me at email@example.com or by forum mail here to Astrolabe1. (Yeah, there's a 1 there, 'cause I created the Astrolabe identity when I was still a greenie and I have no idea what its password etc. are any more, so I've been using Astrolabe1 ever since I started posting.)
The purpose of this contest is to inspire the development of the definitive guide to various YPP puzzles.
While some puzzles have good guides already, some do not -- and lots of valuable information is inconveniently scattered about in various forum posts. Some people have ideas/insights/tactics which are not posted yet. And many of the existing posts, while conceptually helpful, lack adequate visual aids in the form of images or movies.
The goal of this contest (in addition to giving nice prizes to the winners) is to create permanent resources which will be of benefit to the whole ocean -- greenies and experienced players alike.
Entrants are to create tutorial files for the puzzle(s) of their choice, starting with a presentation of the basic moves and working step by step up through the advanced tactics and strategies of ultimate puzzling.
Information should be consolidated from the YPPedia, the forums, and players' own experiences, and presented in a clear and systematic fashion.
The tutorial should be amply exemplified with images, screenshots and movies to help illustrate the moves, concepts and techniques being presented.
Entries should be in the following four categories. A pirate may enter as many guides as he or she wishes, but -- though it is not impossible that one pirate may have two winning guides -- preference will be given to awarding prizes to different pirates:
A Duty Stations: Bilging, Sailing, Carpentry, Gunning, Duty Nav
B Shop Puzzles: Ship building, Alchemy, Distilling
C Tournaments: Drinking, SFing, Treasure Drop
D Miscellanea: Leading a Pillage, Running a Shop/Stall, Organizing and Running a blockade, Others.
Entries will be judged on the following criteria:
(1) Clarity: is the tutorial clear and easy to understand?
(2) Comprehensiveness: does the tutorial cover everything about the puzzle from the basic moves to the advanced strategies?
(3) Contribution: how much does the tutorial add to the YPP world, both by consolidating existing but scattered information and by adding new material to what is already publicly available?
(4) Illustration: Quality and quantity of visual aids to illustrate the concepts and techniques presented in the tutorial.
(5) Presentation: Overall visual appeal of the guide -- layout, formatting, etc.
(6) Research: Each entry must conclude with a "links" section which gathers together all the existing resources which the pirate found on the particular puzzle.
Though existing guides may be consulted, all material presented must be the entrant's own work. Any images, video, or other support files used by the pirate, if not their own, must have the author's express permission for inclusion in this competition, sent by email from that author to me. Any entries judged to have plagiarized will be disqualified.
Details on submission are being worked out and will be posted here soon - both where to submit your tutorial(s) and the exact format of that submission.
Entries are due midnight, Monday March 13th, PST. (n.b. that's sunday night, not monday night)
GUIDELINES FOR ENTRANTS
How you choose to organize and present your tutorial is entirely up to you. Remember that your entry will be judged, in part, on its clarity and organization.
Generally speaking, each entry should consist of:
(1) A brief introduction to and description of the puzzle
(2) A brief description of gameplay.
(3) A general overview of the basic tactical approach
(4) Specific analysis of the primary tactics/situations/moves
(5) Discussion (if necessary) of more complex situations/tactics
(6) Plenty of pictures and videos to illustrate each point
(7) At least one (and preferably more) "full" puzzling sessions recorded as a video, demonstrating the basic moves/tactics/approaches, with commentary (written or audio) as appropriate.
(8) A links section to other existing tutorials/tips/etc about your puzzle. Please make links as specific/direct as possible (i.e. linking to the proper point within a page, not to the top of it).
For your introduction, you may assume that the reader is familiar with the basic mechanics of the puzzle, so you need not go (unless you wish to) into too much detail... though you will want to at least summarize the basics of the puzzle.
Your primary focus, however, should be on presenting tactics/techniques (basic to advanced), on providing pictures & videos as illustration, and on coming up with a comprehensive "links" section to connect to other existing resources for the puzzle.
If you submit several tutorials, please make each one stand-alone and separate.
Also, to assist fair judging, PLEASE use an alt pirate so that your "real" pirate name doesn't appear in the images or movies. Likewise the url names or locations used to save these files.
In putting together your tutorials, you'll need to use various resources. The following may be of use:
To start you off, check out the YPPedia guides/info here and here and here. These pages contain links to additional forum sites -- some of which are expired / out of date. There are plenty of other forum posts (individual ones or whole threads) as well, which you should collect into your "links" section as you gather your material.
Screen Shots & Images
Macs & PCs both have built-in screen capture methods. There are plenty of shareware tools to edit images. I've found ColorIt on the Mac and ImageForge on the PC to be good... but there are lots and lots, and most of you probably have your own favorites already downloaded.
Make sure the images you produce are clear, but try to keep them as small as possible, by just clipping out the relevant bits.
Screen Video Recording
Good videos will be key to winning this competition. While images are helpful, a lot of strategy is about the sequence you do things in, and a video is often the clearest way to present that, especially for greenies.
There are a variety of freeware and shareware screen recording and movie-editing programs out there you can use to create your movie files. Here's a list of a few I've run across (though I haven't used them all):
ACA Screen Recorder
As with the image production, please try to find a good balance between file size and file clarity, and edit your movie files to be focused.
Public Host Space
For posting your image and movie files, once you've produced them on your computer, you'll need to find hosting space. If you have on-line storage space associated with your email or other computer accounts, you can of course use that. But there are also public sites which provide on-line storage space.
Some of these are direct-link sites, allowing others to go directly to your file. Others have a little "commercial" page they force the user to go through. If you can, find a direct-link site... but having to use an indirect-link site won't be held against you. Also, some sites only host the files for a limited period of time, so make sure your file will continue to be available until the end of the competition.
A panel of judges -- including both poor and expert puzzlers -- will judge the submissions on the criteria listed above. Judges names will be kept confidential, and arrangements will be made to make the submissions anonymous to the judges.
Judging will happen Mon March 13th - Fri March 17th.
A guideline/grading sheet for judges to use in evaluating the puzzles will be posted here soon.
Seven finalists will be chosen from the entries: 2 from category A, 1 from B, 1 from C, 1 from D, and 2 wildcards from any category. From these seven finalists, the top 3 will be chosen as the big winners.
Details on prizes are still being finalized. (I'm also waiting to hear back from the OMs on details of financial arrangements.) Trinkets will include (I believe) ribbons for the seven finalists, announcing you as a puzzling guru, and I'm hoping the OMs will agree to award a familiar to the first-place winner.
Also, I'm trying to raise POE to make the prizes for this contest as big as possible. I'll be contributing 40-50k of my own poe, if I can, but donations from you richer pirates are very welcome -- the bigger we can make the prizes, the more entries we'll get, and the better for the whole YPP community it'll be! I am hoping to get at least 500k in prize poe & goods to be awarded. Currently I'm up to about 140k in POE.
The top 3 winners will get additional awards, beyond a ribbon and POE; details also to follow, but probably in the form of ship(s) or furniture. Donations are also welcome for this portion of the contest.
A volunteer panel of judges to review the submissions - ideally people who can net-conference to discuss finalists.
Prize generation; possible posting of "worthy" entries into the forums/net as a resource for the ocean? (movie files need hosting space)
Suggestions on the best way to combine ease of submission with anonymity for judging.
Previous Similar Events
none of which I am aware.
A good puzzler and learner/teacher (ult in some, needing tips in others), who has already developed a draft bnav tutorial for my crew's own use. I have plenty of stat-conscious mates (both ults and aspiring), some of whom would be willing to help out with judging.
Release that Witch – Chapter 1125
Danny stood transfixed on the ground, watching in amazement as a tendril of smoke escaped from the muzzle.
Just then, the train let out a long, shrill whistle in the distance!
After the fourth shot, brilliant flames suddenly erupted from the demon's torso.
Danny reloaded the gun and pulled the trigger once more.
"Aargh… damn it," Danny muttered between coughs, feeling pain lance through his chest. He tasted blood in his mouth. "Malt, are… are you okay?"
Several demons were about to assault the campsite from both the east and the south. They were the enemy's main force.
Danny could hear Malt scream. He wanted to ask himself the same question.
"They're coming!" someone suddenly yelled. "They're 1,500 yards from us. Everyone, stay alert!"
The demon crawled out of the overturned iron case and howled angrily. It finally dropped its haughty attitude and reached for the enormous double-edged sword on its back.
Along the trenches in the outer ring of the encampment, some soldiers were bracing behind shields, and Fishball was one of them. Although he was a member of the anti-aircraft machine gun squad, he did not think it wise to run the machine guns when their foe happened to be something more grisly than flying Devilbeasts.
It almost cost him all his strength.
Fishball thought back to the expedition a few months ago, when swarms of demons had sprinted toward them at tremendous speed. It had been a chilling scene to behold. Luckily, the First Army had come prepared: their gunfire had stopped the demons some 200 yards from the encampment.
"I'm fine," Malt replied anxiously beside him, "but you're hurt!"
"Run, mortal," said the man as he turned around. "This is not something you can handle. We'll take over from here."
"Can't they stop those raining stone needles?"
Danny would not back down unless he died.
The demon fixed Danny with a stare and slouched toward him.
After he shot down a Mad Demon that tried to launch a surprise attack on the Special Unit of Tactics and Strategies from behind, the warrior turned around and cast him a glance from a distance.
"No, please run, as fast as you can!" Malt implored.
Several soldiers armored in the same fashion followed at his heels. As the group joined the fight, the situation gradually improved. Despite their heavy loads, they moved much faster than regular soldiers. As they slowly cornered the enemy, their attacks grew ever more brutal and savage. When they exhausted their ammunition, rather than falling back to the bunkers, they switched to bayonets and began to stab at the foe ferociously.
"Stop! That's enough! Why don't you leave?"
No wonder they had been chosen by the king.
With an earsplitting crash, the demon was sent flying across the field and straight into an iron case.
Category widget filtering/styling behavior
Context
When using a Category widget to filter features, the category style changes dynamically, along with the legend bullets. It does so by assigning the first color in the ramp to the first category in the current selection, the second color in the ramp to the second selected category, and so on.
However, this leads to a situation where the map categories change drastically from what was designed in the first place (through the STYLE tab); for example, red can become green or vice versa, making the map difficult to read.
Having the style adapt dynamically to widget filtering is great, but maybe it makes more sense for histogram widget/choropleth visualizations than for categories.
Steps to Reproduce
Create a map and assign a category style
Add a category widget for the same column
Filter data
Current Result
Map features and legend bullets change dynamically to the current filtering, overriding the category style
Expected result
In category widgets, respect the color assigned to each category
Additional info
I know this isn't an easy topic, as it has been discussed before and there are a lot of things to take into account, but maybe it's worth revisiting
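To make the trade-off concrete, here is an illustrative sketch (plain Python, not CARTO's actual implementation) of the current dynamic ramp assignment versus respecting the colors fixed in the STYLE tab, with a ramp fallback for categories that were never assigned one:

```python
RAMP = ["#e41a1c", "#377eb8", "#4daf4a", "#984ea3", "#ff7f00"]

def dynamic_colors(selected):
    """Current behavior: the i-th category in the selection gets the i-th
    ramp color, so colors shift whenever the filter changes."""
    return {cat: RAMP[i % len(RAMP)] for i, cat in enumerate(selected)}

def fixed_colors(selected, assigned):
    """Proposed behavior: keep the STYLE-tab color for known categories,
    handing out ramp colors only to categories never assigned one."""
    colors, next_free = {}, 0
    for cat in selected:
        if cat in assigned:
            colors[cat] = assigned[cat]
        else:
            colors[cat] = RAMP[next_free % len(RAMP)]
            next_free += 1
    return colors

print(dynamic_colors(["Spain", "France"]))              # Spain gets red
print(dynamic_colors(["France"]))                       # now France is red
print(fixed_colors(["France"], {"France": "#377eb8"}))  # France stays blue
```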
Assigning to this sprint just in case we tackle the auto-style fixes
It is not related to auto-style things.
It should, right?
Feel free to change the tags or reassign/whatever
So what we want is to have the old editor behavior when creating a category map, don't we?
yep yep
Does that mean we want to have static legends for categories?
From my understanding, if we have a way to tell Windshaft to render this color for this category, the legend would work, right?
But once your data changes, you are trapped with the fixed colors you decided on in the first place. You don't want that.
All countries:
https://team.carto.com/u/rochoa/builder/17bb226c-b592-11e6-92b3-0ecd1babdde5/embed
vs
All countries but top 3:
https://team.carto.com/u/rochoa/builder/17bb226c-b592-11e6-92b3-0ecd1babdde5/embed?state={"map"%3A{"ne"%3A[-87.94096618407002%2C-196.171875]%2C"sw"%3A[87.9535588895499%2C196.171875]}%2C"widgets"%3A{"80d35c35-8b28-4fbc-ac2e-36f483c2fa61"%3A{"acceptedCategories"%3A["Brazil"%2C"Canada"]}}}
Now, imagine I filter out the top 10 countries but I fixed the first 10 colors: what would you get for the new top 10 countries (the 11th-20th countries from the whole dataset)? If you don't assign colors dynamically based on the filtering, you won't be able to assign any color to those countries.
Probably, we want to provide an option to make colors fixed/hardcoded when you don't want them to change based on filters, but I'm not very inclined.
cc @nobuti
PR: https://github.com/CartoDB/cartodb/issues/10972.
Waiting a window to deploy to staging.
Closed! thanks @nobuti.
Blackjack odds, charts, and strategies will be analyzed here and online. Yes, you can win real money when you play online blackjack. The odds in an online blackjack game are essentially the same as those in a live dealer game. While the house has an edge, it is very small, meaning there is a fair chance to win every time you play. The most important thing you can do to get better at the game is to play blackjack whenever you can! Before moving on to real-money blackjack, though, you should hone your skills with free online blackjack games like the one above.
- However, there is nothing quite like playing real-money blackjack.
- When you play online, each of these moves has a dedicated button, so it is crystal clear what your options are during each hand.
- Blackjack is one of the most popular table games in the US.
- If doubling or splitting is mathematically the correct play but you don't have enough chips, the game will offer the best advice for what you can afford to do.
Educating themselves first is important for online casino players before they decide to play blackjack for real money at that same casino. Even though blackjack is a game of chance, strategy plays a key role. These are questions blackjack players always ask themselves. Because the game is influenced by strategic choices, playing free versions can really sharpen your intuition and teach you when to take certain actions.
Play Online Blackjack Now! For Real Money or For Free
This will help you avoid mistakes when you start playing for real money, and it applies to any other table game you want to try. Some blackjack casinos with a play-for-fun mode don't require players to register an account with them to play blackjack for free. Players can simply take part in the blackjack fun without giving the online casino a user account.
What Is Blackjack?
You have the option to add more blackjack cards by choosing 'hit', but you lose automatically if the value of your cards exceeds 21. You can play live blackjack here for free, with no deposit needed. BlackjackSimulator.net does not intend for the information on this site to be used for illegal purposes. It is your responsibility to make sure you are of legal age and that online gambling is legal in your country of residence.
Sometimes, you might have to install an app to play blackjack online. Check your preferred casino's website for more details. Generally, iOS apps are available in Apple's App Store, while Android users need to download apps directly from the operator's website. However, as with other industries, the gambling arena has some sites that aren't above board. If you're looking for a reputable online blackjack site, BlackjackOnline is the best place to start.
As there's no real money involved, all fifty states allow free online blackjack. Single-player blackjack is especially good for novices because it lets you play up to three hands at once. You can get a feel for the cadence of the game and learn quickly, as each of your hands plays out differently. While it's not always easy to find online sites offering blackjack with real cash prizes, free blackjack is available anywhere with an internet connection. Many affiliate sites promote whoever pays the most, but the good ones are picky about who they endorse and will intervene in the unlikely event of a player dispute.
They're also available directly from your web browser, so you don't even need to visit a casino site to play. States have varying levels of regulation for online blackjack involving any real money. Sweepstakes casinos, where you buy credits to play with, are legal in all states except Washington. The player seated to the dealer's immediate left is said to be at first base. This player gets their cards first and, if you're not playing in a tournament, they'll be the first to act in the game.
Free Blackjack Game Features
The minimum and maximum wagers at the table are displayed in the min/max section of the table. Use the values you've assigned to the cards and your running count to determine what the true count is. However, there are some pairs that you shouldn't split, as they don't give you a good chance of winning.
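As a concrete illustration of the count mentioned above, here is a minimal Hi-Lo sketch (the standard +1/0/-1 card values; the six-deck shoe is just an example):

```python
# standard Hi-Lo values: 2-6 count +1, 7-9 count 0, tens and aces count -1
HI_LO = {"2": 1, "3": 1, "4": 1, "5": 1, "6": 1,
         "7": 0, "8": 0, "9": 0,
         "10": -1, "J": -1, "Q": -1, "K": -1, "A": -1}

def true_count(cards_seen, decks_in_shoe=6):
    """Convert the running count into a true count by dividing by the
    estimated number of decks still left in the shoe."""
    running = sum(HI_LO[card] for card in cards_seen)
    decks_left = decks_in_shoe - len(cards_seen) / 52
    return running / max(decks_left, 0.5)

print(true_count(["5", "6", "K", "2", "A"]))  # running count +1 -> ~0.17
```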
Every online gambling site under the sun offers a few blackjack games, but we understand that sometimes you want to play more than just a Vegas-style game of 21. We rate sites on the variety of blackjack options offered, from popular variations on the usual rulesets to alternative games like Spanish 21 and Blackjack Switch. We also make sure the game variants on offer come from popular software providers such as NetEnt, Evolution Gaming, and Playtech. We know from experience that Canadian players like proper blackjack rules. We want Canadians to be able to use our online blackjack games to practice strategy and get used to playing without worrying too much about losing real money.
#include <opwig/md/class.h>
#include <opwig/md/function.h>
#include <opwig/md/type.h>
namespace opwig {
namespace md {
bool Class::AddNestedFunction (Ptr<Function> nested) {
    // A function named after the class with no return type is a constructor.
    if (nested->name() == name_ && !nested->return_type()) {
        constructors_.push_back(nested);
        nested->set_parent( shared_from_this() );
        return true;
    }
    // A function named "~<class name>" with no return type is the destructor;
    // at most one is allowed per class.
    else if (nested->name() == ("~"+name_) && !nested->return_type()) {
        if (static_cast<bool>(destructor_))
            throw SemanticError("Classes may have only one destructor!", __FILE__, __LINE__);
        destructor_ = nested;
        nested->set_parent( shared_from_this() );
        return true;
    }
    // Anything else is an ordinary member function; let the base scope handle it.
    return Scope::AddNestedFunction(nested);
}
bool Class::AddNestedClass(Ptr<Class> nested) {
    // A nested class sharing the parent's name would clash with the
    // constructor-detection rule above, so reject it outright.
    if (nested->name() == name_) {
        throw SemanticError("A nested class can't have the same name as the parent class.");
    }
    return Scope::AddNestedClass(nested);
}
}
}
|
STACK_EDU
|
Facial recognition is an important topic that affects many facets of modern society. Over the past several years it has influenced the trajectory of policing, government, law, and the broader tech industry in the United States and around the world.
[Related article: Microsoft, IBM, and Amazon Ban Police from Using Facial Technology]
The recent protests against racially biased policing in the US, in particular, have shined a spotlight on the role of facial recognition in policing, causing several prominent companies to revise their policies on the development and use of this tech. Microsoft has pledged not to sell its facial recognition technology to police departments until regulations are in place, while Amazon has paused such sales for one year. IBM, meanwhile, announced that it would be shutting down further development of its facial recognition technology. Additionally, two major US cities, San Francisco and Boston, have banned the use of this technology.
One of the major issues with the currently available technology is that it is less able to accurately identify non-white, non-male faces. Unfortunately, some of the recent efforts to address this issue have illustrated how difficult it is to completely remove bias from the technology.
Recently, a group of researchers tried to make facial recognition technology more inclusive by creating two databases: one that is “racially balanced” and comprises a segment of the LGBTQ community and one that is “gender-inclusive.”
Although the researchers’ intentions may have been altruistic, the manner in which they classified gender for this dataset is itself informed by underlying biases of how male, female, and non-binary faces should look. These datasets once again illustrate how difficult it is to remove bias completely from facial recognition technology.
Even with the controversy surrounding facial recognition, globally there is hope that the technology can help solve problems in many industries, such as healthcare. In countries where identifying patients can be very difficult, for example, facial recognition technology can be used to prevent the misidentification of patients and ensure that the doctors have the correct medical records.
Additionally, the data science community, as well as society in general, has taken several promising steps in addressing and rectifying the current issues plaguing the technology. Gabriel Bianconi, founder of Scalar Research, has identified three areas of progress in particular:
- Reducing Bias: Facial recognition (much like many other areas of ML) has historically suffered from unintentional biases (e.g. racial, gender) arising from the data they’re trained on. The ML community has taken note of this issue and is actively developing solutions (e.g. better data, better algorithms) to tackle it.
- Privacy: There has been work towards developing methods focused on the privacy of the user; for example, federated learning allows models to learn and make predictions without needing sensitive data to leave the device.
- Regulation for Surveillance: Congress is exploring regulating facial recognition use by government agencies. Again, like many technologies, [facial recognition] can be abused/misused or be positive, so proper regulation regarding government use of [facial recognition] would hopefully lead to better, positive adoption.
If you are interested in learning more about the ethics of AI technology and how to mitigate bias in machine learning models, ODSC Europe is hosting several sessions on the topics, including
- Ethical Issues for Data Science, Machine Learning and Artificial Intelligence│Brendan Tierney│Architect│Oralytics
- Explain Machine Learning Models│Margriet Groenendijk, PhD│Data & AI Developer Advocate│IBM
- Ensuring Ethical Practice in AI│Sray Agarwal│Manager Data Science│Publicis Sapient
- Removing Unfair Bias in Machine Learning | Margriet Groenendijk, PhD│Data & AI Developer Advocate│IBM
For more information on ODSC Europe and featured talks and speakers, check out the website here.
|
OPCFW_CODE
|
using System;
using System.Threading.Tasks;
using Elders.Cronus.Workflow;
namespace Elders.Cronus.MessageProcessing
{
/// <summary>
/// A work-flow which gives you the ability to call Handle on an instance object for a message. A 'Workflow<HandleContext, IHandlerInstance>' should be passed,
/// which will be used to instantiate a new instance of the desired object that will handle the message.
/// </summary>
public sealed class MessageHandleWorkflow : Workflow<HandleContext>
{
public MessageHandleWorkflow() : this(DefaultHandlerFactory.FactoryWrokflow) { }
public MessageHandleWorkflow(Workflow<HandleContext, IHandlerInstance> createHandler)
{
CreateHandler = createHandler;
BeginHandle = WorkflowExtensions.Lamda<HandlerContext>();
ActualHandle = WorkflowExtensions.Lamda<HandlerContext>().Use((context) => new DynamicMessageHandle().RunAsync(context.Context));
EndHandle = WorkflowExtensions.Lamda<HandlerContext>();
Error = WorkflowExtensions.Lamda<ErrorContext>();
Finalize = WorkflowExtensions.Lamda<HandleContext>();
}
public Workflow<HandleContext, IHandlerInstance> CreateHandler { get; private set; }
/// <summary>
/// Work-flow which would be executed at the beginning of the work-flow.
/// By default there is no work-flow set. If you want, you can call 'Override' to attach a different message handler.
/// </summary>
public Workflow<HandlerContext> BeginHandle { get; private set; }
/// <summary>
/// Work-flow which would be executed after 'BeginHandle' and 'ActualHandle' have executed.
/// By default there is no work-flow set. If you want, you can call 'Override' to attach a different message handler.
/// </summary>
public Workflow<HandlerContext> EndHandle { get; private set; }
/// <summary>
/// Work-flow which would be executed on an exception raised by the 'BeginHandle', 'ActualHandle' or 'EndHandle' work-flows.
/// By default there is no work-flow set. If you want, you can call 'Override' to attach a different message handler.
/// </summary>
public Workflow<ErrorContext> Error { get; private set; }
/// <summary>
/// Work-flow which would be executed after the run has finished, even if an error occurred.
/// By default there is no work-flow set.
/// </summary>
public Workflow<HandleContext> Finalize { get; private set; }
/// <summary>
/// Work-flow which would actually call handle on the target instance.
/// The default work-flow used is 'DynamicMessageHandle'. If you want, you can call 'Override' to attach a different actual message handler.
/// </summary>
public Workflow<HandlerContext> ActualHandle { get; private set; }
public void OnHandle(Func<Workflow<HandlerContext>, Workflow<HandlerContext>> handle)
{
ActualHandle = handle(ActualHandle);
}
protected async override Task RunAsync(Execution<HandleContext> execution)
{
try
{
using (IHandlerInstance handler = await CreateHandler.RunAsync(execution.Context).ConfigureAwait(false))
{
var handleContext = new HandlerContext(execution.Context.Message.Payload, handler.Current, execution.Context.Message);
await BeginHandle.RunAsync(handleContext).ConfigureAwait(false);
await ActualHandle.RunAsync(handleContext).ConfigureAwait(false);
await EndHandle.RunAsync(handleContext).ConfigureAwait(false);
}
}
catch (Exception ex)
{
var context = new ErrorContext(ex, execution.Context.Message, execution.Context.HandlerType);
context.AssignPropertySafely<IWorkflowContextWithServiceProvider>(prop => prop.ServiceProvider = execution.Context.ServiceProvider);
await Error.RunAsync(context).ConfigureAwait(false);
throw context.ToException();
}
finally
{
await Finalize.RunAsync(execution.Context).ConfigureAwait(false);
}
}
}
}
|
STACK_EDU
|
Things To Do Before Reading
- Make specific times to read assignments for each course. Mentally commit yourself to these time periods to read about these subjects. This makes concentration easier.
- Recall what you already know about the topic to be read.
- Bring an open mind to what you read. You don’t have to agree in order to understand what an author says.
- Intentionally state a reason to read (e.g. “I want to find out about …”) or create questions out of titles, subheadings, italicized words, etc. and read to find the answers. Concentration and memory improve when there is a specific purpose for reading beyond the fact that something has been assigned.
- Divide a long chapter or assignment into pieces. It is easier to concentrate if you focus on one piece at a time instead of trying to digest a large amount of material at once.
- Take one or two minutes to skim through a chapter before reading to see how it is structured and where the author is going to take you. Look at the title, introduction, subheadings, and summary.
Things To Do While Reading
- Read only when you are able to concentrate. Monitor yourself by putting a check mark on a piece of paper whenever concentration wanders. This will help return your mind to the reading assignment. If you cannot concentrate, do something else for five or ten minutes, or study a different subject for a while.
- As you read, take notes from the text. Condense ideas using abbreviations, symbols, short phrases, and sketches. Avoid complete sentences.
- Use a specific format for organizing notes from textbooks. The Cornell System for organizing notes involves drawing a line 1/3 from the left margin of a notebook paper. Main ideas are recorded on the left side and details recorded on the right side.
- Another convenient note format is to make a question from a main idea and place it on one side of a notecard. Read to answer the question and put the answer on the other side. This reduces forgetting what was just read and provides a fast and easy way to organize notes for later learning.
- When you make notes, use your own words to record ideas. This will aid in learning and in later recall on tests.
- Change reading speed according to the difficulty of the material and the purpose for reading. No single reading speed is effective for all types of reading material. Textbook reading should be done fairly slowly and deliberately compared to reading newspaper articles or novels. If you take good notes, you should not have to read a textbook chapter more than once.
- Read and study in locations free of visual and auditory distractions.
- When concentration or understanding what is read is a problem in textbooks, read aloud as if explaining it to someone else.
Things To Do After Reading
- In your spare time, think about what you read. Discuss information to be learned with others such as in a study group.
- Relate what you read to class lectures.
- Look at main ideas or questions and recite aloud or write details and answers without looking, as if you are taking a test. If you can recall answers completely and accurately from memory, you know that you know the material. If you cannot, you know immediately where you need to concentrate your study efforts.
What you do before and after reading is as important as what you do during reading when learning from textbooks. The ultimate objective of all textbook reading should be to understand what is read and assimilate it into your store of knowledge. That is, the information has become a personal possession. When this happens, the information has been learned.
University of Central Florida
|
OPCFW_CODE
|
var lista = [1,2,3,4]
console.log(lista.length);
$(document).ready(function(){
if($(window).width() >= 1200){
/* <div class="col-1-5 producto center">
<img class="imagenOfertas" src="zapato.jpg">
<p class="nombreProducto center">Nombre</p>
<div class="nombreProducto2 row">
<div class="col-1-2">
<p class="pInline">$0.00</p>
</div>
<div class="col-1-2 center">
<img class="heartProductoLG" src="heart.png">
</div>
</div>
</div>*/
// classList.add avoids the bug in the original code, where className +=
// concatenated the class names together without separating spaces.
var producto = document.createElement("DIV");
producto.classList.add("col-1-5", "producto", "center");
var imagenProducto = document.createElement("img");
imagenProducto.classList.add("imagenOfertas");
var nombreProducto = document.createElement("p");
nombreProducto.classList.add("nombreProducto", "center");
var nombreProducto2 = document.createElement("div");
nombreProducto2.classList.add("nombreProducto2", "row");
var divNombreProducto2 = document.createElement("div");
divNombreProducto2.classList.add("col-1-2");
var precio = document.createElement("p");
precio.classList.add("pInline");
var divNombreProducto2center = document.createElement("div");
divNombreProducto2center.classList.add("col-1-2", "center");
var heart = document.createElement("img");
heart.classList.add("heartProductoLG");
}
});
|
STACK_EDU
|
We would like to use Eramba Automated Account Review for our user access review process. Initially I thought the way we do reviews was different and that ideally it's not done like this, but after discussions with over a dozen organizations I came to the conclusion that the majority do it like this.
- User accounts with their respective roles (each permission/access type per line) are collected from all applications (around 20)
- The list is consolidated into a single CSV file.
- We add other details to the list, such as the user's manager name and manager email, and the application owner.
- We use an in-house built application whereby we upload the consolidated list, and the application sends out emails to the respective managers with a list of users who report to them.
- Feedback is taken from the managers and action is taken.
- Account reviews are done by the manager of the staff/employee, as it's impossible for the system owner to know all the staff or people who should/should not have access to their system (especially in a large environment)
- Getting the user accounts and preparing the feed files is not a problem.
- Eramba does not need to worry about getting a manager or getting files etc
I would like to use Eramba Automated account review.
I want to use Eramba, but I have the following issues; requesting that anyone can help me please
- Currently, in Eramba, the account reviewer does the review, which is entered manually for each review type. This will not work for our case, as each account type will have multiple user reviews and each user needs to be reviewed by a different manager.
Two Possible Options
Creating each account review type for different managers so they will be the reviewers. This is ideal as I can easily create different feed files for respective managers.
- The problem is that creating the account reviews manually will be difficult, as we have over 50 different managers; an API to create an account review would be useful, which I believe currently doesn't exist.
I can work with the current Eramba setup, but what is the possibility of having the reviewer of each user in the feed file? For instance, like:
Username, Account Roles/permission, Reviewer
Thus what will happen for each review type? The pull will get the feed, and each review type will have multiple reviewers (the managers of the respective users in that review). So when managers log in to the portal, they only get to review those reporting directly under them.
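To make the idea concrete, a hypothetical feed file could look like this (the column names and rows are illustrative only, not an existing Eramba format):
Username,Account Roles/Permission,Reviewer
jdoe,ERP: Accounts Payable - Write,manager.one@example.com
asmith,ERP: Reporting - Read,manager.two@example.com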
Looking forward to some constructive discussion and logical solution to this that works for all and not just certain groups of individuals.
1- Your post subject must start with: “Question”, “Bug” or “Feature”. For example: Question - Internal Control Testing Dates
2- Make sure you review the documentation and FAQ before making any question, do not get offended if you are redirected by someone to read the manual!
3- If you are posting a bug or a software malfunction, please follow our Bug reporting instructions: FAQs | Eramba learning portal
4- If you are reporting a feature, you understand you have no copyright claim on it. If you make it public, eramba or anyone else can take your idea, become a billionaire, and share nothing with you.
5- Keep it in English, Polite and Humorous. Sarcasm is also very much accepted.
If you can not follow these rules your post is likely to be ignored or deleted!
|
OPCFW_CODE
|
22.7r1: Dashboard breaks with PPPoE interfaces that are down
Important notices
Before you add a new report, we ask you kindly to acknowledge the following:
[x] I have read the contributing guide lines at https://github.com/opnsense/core/blob/master/CONTRIBUTING.md
[x] I am convinced that my issue is new after having checked both open and closed issues at https://github.com/opnsense/core/issues?q=is%3Aissue
Describe the bug
After upgrading to 22.7-RC1, the Dashboard page no longer loads on my OPNsense appliance. The first time you load it, the Dashboard pane is blank, subsequent attempts show the "a problem was detected" message with a link to the Crash Reporter.
Last working on 22.1.10
To Reproduce
Steps to reproduce the behavior:
Configure a PPPoE WAN interface, which is enabled but does not come up (I am currently on a temporary Internet connection which is not PPPoE but I have retained the config)
Upgrade to 22.7r1
Load the Lobby/Dashboard page
See error
Expected behavior
Dashboard is correctly displayed, as it was with 22.1.10
Describe alternatives you considered
n/a
Screenshots
n/a
Relevant log files
From the Crash Reporter:
[13-Jul-2022 22:16:48 Europe/London] PHP Fatal error: Uncaught TypeError: Unsupported operand types: string / int in /usr/local/etc/inc/interfaces.inc:4054
Stack trace:
#0 /usr/local/etc/inc/interfaces.inc(3999): convert_seconds_to_hms('')
#1 /usr/local/www/widgets/api/plugins/interfaces.inc(34): get_interfaces_info()
#2 /usr/local/www/widgets/api/get.php(70): interfaces_api()
#3 {main}
thrown in /usr/local/etc/inc/interfaces.inc on line 4054
Additional context
It seems like my box is going through this series of events
Dashboard is loaded
interfaces.inc is finding the /var/run/pppoe_wan.pid file, meaning it hits the code on line 3998-3999
L3998 executes /usr/local/opnsense/scripts/interfaces/ppp-uptime.sh
this looks for /tmp/${1}_uptime to compare the current system Unix timestamp, and the mtime Unix timestamp of that file
this file does not exist, presumably because the interface is currently down, so this command returns no output
L3999 then tries to convert the number of seconds to hh:mm:ss format but there is no value in seconds because ppp-uptime.sh returned no output
this causes convert_seconds_to_hms() to bail
Anecdotally, the PHP 8 migration notes give several circumstances where TypeErrors may be thrown which would not have been thrown in earlier versions, so this could potentially be a PHP migration issue?
potential workaround
I've changed my copy of /usr/local/opnsense/scripts/interfaces/ppp-uptime.sh to echo 0 if the file it's looking for does not exist so that there is always a numeric output to convert, and this has solved the problem for me in a hacky way...
if [ -f /tmp/${1}_uptime ]; then
echo $((`date -j +%s` - `/usr/bin/stat -f %m /tmp/${1}_uptime`))
else
echo 0
fi
Environment
Software version used and hardware type if relevant, e.g.:
OPNsense 22.7r1 (amd64, OpenSSL).
Intel Atom C2558
Network Intel® I210-AT, I350
@g-a-c thanks, looks like PHP 8 is a bit allergic to this now. Will fix in a bit.
Historically it looks like empty value is expected for defunct connection:
https://github.com/opnsense/core/blob/ddb4af9040d430889c5b365cbce2c7f93eada872/src/opnsense/scripts/interfaces/ppp-linkdown.sh#L48
So I'll not touch ppp-uptime.sh and check its return value for empty instead.
@g-a-c can you try fb892d24 ?
# opnsense-patch fb892d24
Yep, I took out my quick hack and replaced it with this successfully (after a quick restart of the web GUI). Thanks for the speedy fix!
@g-a-c terrific, thanks for the quick heads-up. I'll be issuing a small hotfix today for the issues reported since yesterday
|
GITHUB_ARCHIVE
|
Run update21, if you haven't already, to create the cs21/labs/10 directory. Then cd into your cs21/labs/10 directory and create the python programs for lab 10 in this directory.
This week we will write a few smaller programs, all using recursion.
Some of the programs will use the Zelle graphics library again.
Write a program to create a silly text effect in the terminal window. Your program should (in main()) ask the user for some text, then ask for a character. After getting the user's input, call a recursive function that creates and returns a string, where each letter of the original text is followed by a certain number of the input character. As shown below, the number of characters between each letter decreases with each letter.
Your recursive function should have just 2 parameters: the text string and the input character that goes between the letters. The function should return the final string to be printed in main().
$ python sillytext.py
string: Swarthmore
pattern: .
S.........w........a.......r......t.....h....m...o..r.e

$ python sillytext.py
string: 12345
pattern: *
1****2***3**4*5
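One possible recursive solution sketch (not the only valid approach):
def silly(text, ch):
    # base case: no letters left
    if text == "":
        return ""
    # each letter is followed by (letters remaining) copies of ch
    return text[0] + ch * (len(text) - 1) + silly(text[1:], ch)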
Write a program called ruler.py that displays simple ruler tick marks to the terminal window. The number of tick marks printed on each line follows a pattern as shown below. Your program should have a main function that asks the user for n, then calls a recursive function to print the dashes.
$ python ruler.py
n: 2
-
--
-

$ python ruler.py
n: 3
-
--
-
---
-
--
-

$ python ruler.py
n: 4
-
--
-
---
-
--
-
----
-
--
-
---
-
--
-
Hint: notice the n=3 case is made up of the n=2 case,
followed by 3 dashes, and then the n=2 case again. Same for the
n=4 case (just n=3, 4 dashes, then n=3 again).
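That hint translates almost directly into code; a sketch (assuming the function prints rather than returns):
def ruler(n):
    # base case: nothing to print
    if n == 0:
        return
    ruler(n - 1)       # the (n-1) pattern
    print("-" * n)     # n dashes
    ruler(n - 1)       # the (n-1) pattern again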
Write a graphics program that uses recursion to draw a picture.
First write a graphics function to draw an object, like a star, a flower, a smiley face, or anything you choose. However, your function must work with the following parameters:
Now write a full program called recursive-graphics.py that
draws a picture similar to those below. Your main function should set up
the graphics window and then call a recursive function to draw the objects.
Your recursive function, when it is ready to draw one of the objects, should
call your original drawStar() function (or drawFlower(), etc).
Write a graphics program called panton.py that uses
recursion to create images similar to these
Verner Panton textile patterns.
Here are some possible images created by our program:
For this program, you should again have a main function that sets up the graphics window and then calls a recursive function to draw the squares and circles. Your recursive function should have 4 parameters: the GraphWin object, the upper left and lower right Point objects, and a size. The recursive function should include some randomness, to decide if it should draw the given square (using the upper left and lower right points) and circle (using the size as the radius), or subdivide the given square into 4 equal squares and recur (assuming the current size is large enough to allow subdividing). You should also use random library functions to decide what to draw: black squares with white circles, or white squares with black circles.
p1 = Point(100,200)
p2 = p1.clone()
p2.move(100,0)
$ python graphicsruler.py
n: 5
Hint: the color of the square is tied to the X axis position, so a square at X=0 would have a hue value of 0, and a square at X=width would have a hue value of 1.0.
You can convert from HSV to RGB values using the following:
import colorsys
hue = 0.5  # could be any number from 0 to 1
r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
red = int(r*255)
green = int(g*255)
blue = int(b*255)
color = color_rgb(red, green, blue)
# then use color variable as fill for square
Once you are satisfied with your program, hand it in by typing handin21 in a terminal window.
|
OPCFW_CODE
|
Part One (Part Two coming in April)
You may be aware of the spatial data types and analysis in Oracle Database…but did you know about the many development tools for creating maps to visualize your data? Do you have customer, sales territory, asset information in your database that you and your end users want to visualize and interact with on a map? Whether you’re a seasoned database user or developer, or are just starting to look at using Oracle’s spatial features, you have several choices of tools to create the perfect map to fit your needs.
This blog discusses Oracle options for visualizing your spatial data in Oracle Database. Visualization means you see your geospatial data displayed on a map. There are different options you can use to visualize your data, such as Oracle products, tools and APIs. In addition, you can use open source tools and APIs. Although all of these options are different, the basic architecture and data flow is the same for each one.
Oracle Visualization Options
Oracle provides products, tools, and APIs for visualizing spatial data stored in Oracle Database. These include Map Builder, Spatial Map Visualization Component (SMVC), Oracle Analytics Cloud, Spatial Studio, and map visualization APIs. (Note that the Spatial Map Visualization Component was formerly known as MapViewer.)
Map Tools and Products
Map Builder is a standalone Java-based desktop application. The latest version is 20c and supports Oracle Autonomous Database wallet connections. It has a simple UI for you to pan, zoom, or identify features on a map. It utilizes public Oracle spatial Java libraries such as SDOAPI which can transform database geometries or GeoRaster objects into corresponding objects in Java.
It is a useful tool for quickly visualizing any kind of geospatial data stored in your Oracle spatial database tables. You can run simple, ad hoc SQL queries (via JDBC) and visualize the spatial data result sets inside Map Builder.
It is typically used as a companion tool to the Spatial Map Visualization Component, which will be discussed next.
Spatial Map Visualization Component
SMVC is a Java middleware component that is deployed and run inside your JavaEE containers, including WebLogic Server and Tomcat. It is used by many customers and is integrated into a number of Oracle products, such as Oracle Analytics Cloud.
You can think of SMVC as a single, large server that is running in your WebLogic Server instance. It provides enterprise-level mapping and data services. It can be used as a raster or image tile server and can generate or stream a high volume of vector data on the fly. SMVC can also generate GeoJSON data out of your tables or queries on the fly. You can use SMVC to publish your geospatial data using the OGC-standard Web Map Service (WMS) and Web Map Tile Service (WMTS). Depending on your application needs, you may be using one or more of these services.
Raster/Image Tile Server
GeoJSON Data Server
GeoJSON Data Server generates GeoJSON on the fly. GeoJSON is the spatial flavor of JSON, a standard, lightweight data interchange format popular for web applications. It supports simplifying geometries while generating GeoJSON. GeoJSON data can be displayed using Oracle Maps JS API or any open source mapping API (Leaflet, OpenLayer, etc.).
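For reference, a minimal GeoJSON document of the kind such a server emits looks like this (the coordinates and properties are illustrative):
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "geometry": { "type": "Point", "coordinates": [-122.42, 37.77] },
      "properties": { "name": "Store 42" }
    }
  ]
}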
Vector Tile Server
Vector Tile Server generates vector tiles on the fly. Each tile includes a single SMVC-defined layer/theme only. Vector tiles are a format for encoding map and business data in a single file. This is a very compact binary format that enables large volumes of geometry data to be transmitted to the client. Web map applications using vector tiles are versatile (since attributes are included along with the geometries, which users can view with tooltips) and highly performant. Vector tiles can be displayed using open source mapping APIs (OpenLayer, Mapbox GL JS API, etc.).
Map Visualization APIs
In addition to the Spatial ready-to-use tools, Oracle provides map visualization APIs. If you are a Java developer and you want to develop your Java-based applications or services, then you probably want to start with these Java Packages. They allow you to transform your native spatial data types stored in the database - geometries, GeoRaster, network model, and topology – and bring them over to Java where you can manipulate and process that data.
These Java packages aren’t specifically for visualization. They are used to process your business and geospatial objects or just display them. Visualization is typically done using standard Java2D graphics API.
The V2 API has some advanced geometry editing functions including splitting and merging polygons.
Map Visualization downloads: https://www.oracle.com/database/technologies/spatial-studio/spatial-graph-map-vis-downloads.html
Map Visualization with Oracle Spatial and Graph – presentation from Analytics and Data Summit 2020
Visualizing Spatial Data (Part Two) will include easy to use, low code options such as Oracle Analytics Cloud, Spatial Studio, and open source tools and APIs. Stay tuned.
|
OPCFW_CODE
|
Mobile Apps & Curriculum for Your Class
Description: Learn how to create a free mobile application for your class for iOS (iPad, iPhone and iPod Touch) as well as Android smartphones/tablets. In this session we will learn how to create both web apps (which run via a web browser and do not require Apple Store / Android Marketplace approval) as well as basic, native mobile apps. Several different, free mobile app development options will be explored and demonstrated. No prior programming experience is required! Become an educational leader in the mobile learning revolution by offering your students opportunities to access your course content via a customized, mobile application YOU create and update!
These materials by Wesley A. Fryer are licensed under a Creative Commons Attribution 3.0 Unported License.
Contact details for Wesley are available.
An audio podcast recording of this session on February 15, 2013, in Yukon, Oklahoma is available.
An audio podcast recording of this session from July 24, 2012, in Trophy Club, Texas (Northwest ISD's 2012 TechnoPalooza) is available.
Curriculum Delivery Options
Jarrod Robinson (@mrrobbo) is THE guru of educational app development! See his March 2012 post: "Make Your Own Apps In Minutes" His apps are on thepegeekapps.com.
Curriculum Distribution Questions to Discuss:
- Mobile Site versus App or Ebook?
- Openly available or locked up / walled garden?
- Do you want/need a 'true app' or just a web-app? "A future with fewer mobile apps?" (CNN 27 July 2012)
- Mobile Friendly Google Sites:
- iPad Media Camp Curriculum
- Scratch Camp Curriculum
- Pre-AP Biology (Misty Williams, Yukon PS, Oklahoma)
- Howto: Optimize a Google Site for Mobile Accessibility and Metrics
- Mobile Friendly Blog
- Posterous (mobilizes automatically) - Example: playingwithmedia.com
- Wordpress (requires special plugin like WPtouch) - Example: iPadMediaCamp.com
- Professionally Mobilize Your WordPress Blog with PluginBuddy Mobile
- Other Examples:
- High School Chemistry (Jim Askew, Crescent PS, Oklahoma) - not really designed 'mobile friendly' but an incredible example of openly shared high school digital curriculum
- Mobile iPad News Example: Evening Edition (via Jon Mitchell's @readwriteweb article)
Learning Management Systems
- List of Learning Management Systems (English WikiPedia)
- Example: Digital Storytelling for Tribal Cultural Institutions (iBooks & PDF versions)
- See "Creating Multimedia eBooks" session resources
- iTunesU (Apple Site)
- iTunesU for iPad (free)
iReading (web app built with AppShed - Android app built with The App Builder)
Built with The App Builder - ports to multiple platforms (@theappbuilder)
Classroom Web App of Dorie Glynn, 3rd Grade Bilingual Teacher in Cypress-Fairbanks ISD, Texas (iOS & Android)
- Free web app link you can use / distribute immediately
- Relatively simple / straightforward web-based interface
- Simple: Can't create screens with sub-screens / sub-menus (everything in your app is at one level in the menu at the bottom)
- Costs $500 if you want to actually download & submit app to iTunes / Android Market (plus other costs)
- Hyperlinks on webpages open externally (in Safari browser on iOS)
- Embedded Google Forms work poorly / don't work
- Direct links to individual YouTube videos not possible (can only link to a YouTube user channel)
- WPtouch pages on Wordpress blogs take over the screen and don't permit 'back' navigation
Another option: BuzzTouch - ports to multiple platforms (@buzzTouchApp)
Learn about how to use BuzzTouch on BuzzTouch U
$500 Udemy course on BuzzTouch
- CAN create screens with sub-menus
- No extra charges (beyond Apple's / Android Market's) to distribute a native app
- No web-app option yet, so you can't immediately distribute the app (must process with X-code for iOS and distribute through iTunes App Store)
- More complicated to learn and use than "The App Builder"
Another option: Get your school a mobile app using School Connect (free - ad supported)
Other Development Options:
- Red Foundry
- http://www.appsbar.com (free apps only, you can't submit them directly to the store)
- Trigger.io (monthly subscriptions required)
- PhoneGap (free / open source)
Helpful Graphic / Photo Editing Software
- Mac: SeaShore or Gimp on OS X
- Windows: Gimp
Educational Game App Builders
- GameSalad - ports to (video tutorials on GameSalad Cookbook)
- Stencyl - ports to iOS and flash-based web games
- Recommended podcast: "Why Every Teacher Should Become an App Creator" by Chris Thompson
- Mastering iPhone programming - Lite: Quick introduction to iPhone programming (free Udemy course)
|
OPCFW_CODE
|
Why did the USA invade Okinawa instead of one of the many other islands in southern Japan
There are many other islands in southern Japan that seem large enough for B29 runways (the current runway is 2 miles). Why didn't the USA pick the island of least resistance? I'd assume Okinawa was the most fortified due to its size and local population.
The Allies weren't taking Okinawa for B-29 runways. They had those already in the Mariana Islands. A B-29 airfield was built on Okinawa, but the first attack from it against Japan happened on the last night of the war.
The primary reason for taking Okinawa was as a base for the invasion of Japan, both for ships and shorter-ranged aircraft. Okinawa has harbours, and Kadena Air Base had already been built by the Japanese.
This required taking all of the Ryukyu Islands, and that's what was done. The Battle of Okinawa is the famous part of this campaign, because the Japanese concentrated their resistance there, knowing that while they held the main island, the other islands would be of limited use. You can't set up mobile fleet bases while the enemy are still within artillery or small-boat attack range. The other islands in the group were taken comparatively easily, so the combats are not famous.
Sources: Okinawa, 1945: Final Assault on the Empire, Simon Foster, 1996 and Okinawa: The Last Battle, the relevant volume of the US official history, available here.
Addendum: Of course, once you have a base, other uses for it emerge. In July 1945, Halsey's Third Fleet attacked the Tokyo area, and then, as best the Japanese could tell from radio intercepts and direction-finding, moved south. This was confirmed when carrier aircraft attacked Kyushu, the southernmost main island of Japan, and Japanese aircraft were moved south for a counter-strike on the Third Fleet. But it wasn't there. The radio intercepts had been staged from the USS Tucson, which had separated from the fleet, carrying radio operators from Halsey's staff and sailed south, imitating Third Fleet's traffic. The carrier aircraft had flown from Okinawa.
Third Fleet was located again when it attacked steel plants and rail ferries in Hokkaido and northern Honshu, and the Japanese were unable to retaliate effectively. This raid sank eight and damaged four of the twelve rail ferries that carried coal from Hokkaido to Honshu, cutting the amount of coal that could be transported from the mines in Hokkaido to industry in Honshu by 80%, and crippling Japanese war production. Source: Holt, The Deceivers, pp. 769-770.
A quick glance at p.419 of https://history.army.mil/html/books/005/5-11-1/CMH_Pub_5-11-1.pdf might help improve this answer. It would be a shame to waste all those lovely up-votes. :-)
@AgentOrange: Better?
"...the last night of the war." Was this the bombing of Tsuchizaki, which at least in recent years is currently a part of Akita City?
(1) Okinawa was the key island in the Ryukyu group, with the important port, anchorages, existing airbases, and Japanese defenses, as you said. (2) The US wanted bases in the Ryukyus for the invasion of Japan, and for the projection of power into Asia (ie. China), so the B-29 basing was important. (3) Okinawa itself was initially deemed unsuitable for airbase development and a small number of other islands in the group were intended to be occupied in follow-up operations for airbases, including the B-29 base. It was never intended to occupy all the islands in the group.
(4) After Okinawa and Ie Shima were occupied it was discovered Okinawa was in fact very suitable for airbase development, and so follow-up operations were cancelled. Only one additional island was occupied (for an early-warning station). The number of airbases scheduled for Okinawa was increased from 8 to 18, including the B-29 base. So I think only three islands in total were occupied in the Ryukyu group. Which brings us back to the original question... I think the importance of the naval bases, and the elimination of the key Japanese defenses in the group were probably the actual reason.
Okinawa is the largest (by far) of the Ryukyu Islands. Given the importance of these islands, as discussed in the rest of the answer, this made Okinawa the one to own. That is, Okinawa had room for "runways," harbors, and other facilities, in addition to its strategic importance.
A large part of the importance of the Ryuku Islands stems from the fact that it is a chain of islands that more or less link Japan to Taiwan, and points south and west. With the possession of those islands, America could completely cut off the Japanese Home Islands from its possessions in China. Had the atomic bomb not been dropped, the year 1946 might have featured an invasion of Japan simultaneously with the liberation of Japanese held China.
From an "invasion" perspective, the Ryukyus generally, and Okinawa particularly, were close enough to the main Japanese islands for both ships and short-ranged aircraft to be a menace to Japan, and yet not so close as to meet the "main force" of the Japanese defense. This made them an ideal target for a "preparatory" invasion in 1945.
How important was "room" to the USA? Is there an estimate for how much of Okinawa would have been used?
@philn: Okinawa (https://en.wikipedia.org/wiki/Okinawa_Island) has ten times the area of Saipan in the Marianas, which didn't have quite enough "room" for everything the U.S. needed. Okinawa was also the 5th largest island of Japan (after the four main ones).
@philn Initially the US was proposing to build just 8 airbases on Okinawa, as the terrain of the island was regarded as largely unsuitable, and they intended to occupy other islands in the Ryukyus in follow-up operations to allow for additional airbases, including basing for B-29s. Having occupied and surveyed Okinawa they soon realized there was a much greater potential for airbase construction on Okinawa itself than had been anticipated. They cancelled the proposed follow-up operations and increased to 18 the number of airbases to be constructed on Okinawa, including the B-29 field.
|
STACK_EXCHANGE
|
Folks, I don’t know what is going on. Only 48 hours ago I was sending via SES API at 10k messages per hour. No changes to the config.php and today, it has slowed to 40/hr. I restarted all application servers, restarted the database and even checked via MySql that there were no database errors, and Amazon SES is not limiting the send rate. I don’t know what is limiting the application. Any insight would be appreciated. Thanks.
How are you connecting to SES: smtp or api?
Are you domain throttling? If so, that might slow it down.
Hi Dan, I’m connecting via API
Here’s something I found with how PHP List is connecting to my database. I think something is corrupt. Here are the following things I’ve done and I still only have 30 msg/hr
- curl is installed and working
- both SMTP and API have the same result
- database has been updated to innoDB from MyISAM
- updated to 3.2.7 w/ a clean directory
If there is a field to look for in the database that someone can help me look at, please point me in that direction. Such as which table the config.php settings are populated into. Thanks.
it looks like your phplist_usermessage table was not converted to innodb. If it was, you’ll need to restart your mysql server
I took that shot before conversion. Good point Dan, I’ll do a restart. Restart and reboot the same thing? I have start/restart from the MySql workbench, but from AWS, you can reboot a server.
Update: I don’t know what died, but I’m completely puzzled. There’s something horribly wrong in how the application is managing the database. I’ve A/B tested against another SMTP and it isn’t SES that’s the problem. But something has got to be hosed in the database. Where do I start looking? Do I make a new DB and just import everything back in as tables?
If you can restart the server, that will re-initialize everything.
Or, you can restart mysqld, and that will just restart the mysql server.
Once it’s restarted, turn off your phplist, (website), and analyze all of the tables, and repair anything needing repair.
You would also want to modify any tables that are still myisam, converting them to innodb.
One of the differences between InnoDB and MyISAM is that MyISAM locks the whole table while modifying data, whereas InnoDB only locks the row being edited.
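Assuming the default phplist_ table prefix, the conversion itself is a one-line statement per table, for example:
ALTER TABLE phplist_usermessage ENGINE=InnoDB;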
So, something did get hosed in the server. My solution was to export all the table data and only import the table data into a new database instance. I connected PHPList to that and it fixed the send rate issue. There needs to be a utility in PHPL to clear the dbase columns that are old or hung up.
BTW, I can’t seem to find the page where you can see your config.php in the front-end UI, can you jog my memory?
So, it has hosed itself once again at the database level. When you process the queue, you get “unable to get lock for processing.”
If I’m live in the database, what can I delete to reset the database sessions? Can I delete all items in the columns or indexes tab? I don’t want to have to import export constantly just to get it going again
|
OPCFW_CODE
|
Why do we need to close a file in C?
Suppose that we have opened a file using fopen() in C and we unintentionally forget to close it using fclose() then what could be the consequences of it? Also what are the solutions to it if we are not provided with the source code but only executable?
If you suspect a problem in an executable file you need to fix it in the source code. If you don't have the source code refer it back to the developer.
If that fopen was done in read-only mode then, while this is very poor coding quality, at least it will not cause any issues with data in the file. But if that fopen was done using a write mode you now have a very good chance of corrupting the data in that file.
The consequences are that a file descriptor is "leaked". The operating system uses some descriptor, and has some resources associated with that open file. If you fopen and don't close, then that descriptor won't be cleaned up, and will persist until the program closes.
This problem is compounded if the file can potentially be opened multiple times. As the program runs more and more descriptors will be leaked, until eventually the operating system either refuses or is unable to create another descriptor, in which case the call to fopen fails.
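A minimal demonstration of that failure mode (the path /etc/hostname is just an assumption; any readable file works):
#include <stdio.h>

int main(void) {
    long count = 0;
    for (;;) {
        /* open repeatedly without ever calling fclose() */
        FILE *fp = fopen("/etc/hostname", "r");
        if (fp == NULL) {
            /* eventually the OS refuses to hand out more descriptors */
            printf("fopen failed after %ld leaked opens\n", count);
            return 1;
        }
        count++;
    }
}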
If you are only provided with the executable, not the source code, your options are very limited. At that point you'd have to either try decompiling or rewriting the assembly by hand, neither of which are attractive options.
The correct thing to do is file a bug report, and then get an updated/fixed version.
Will the system eventually refuse because of memory constraints or is there another reason? For example, assuming we continuously fopen-ed and we had infinite memory, would the system never refuse to open the file?
If there are a lot of files open but not closed properly, the program will eventually run out of file handles and/or memory space and crash.
Suggest you engage your developer to update their code.
The consequences are implementation dependent, based on how fclose / fopen and the associated functions behave -- they are buffered input/output functions. So things written to a "file" are in fact first written to an internal buffer -- the buffer is only flushed to the output when the library "feels like it" -- that could be every line, every write, or every full block, depending on the smartness of the implementation.
fopen will most likely use open to get an actual file descriptor from the operating system -- on most systems (Linux, Windows, etc.) the OS file descriptor will be closed by the OS when the process terminates -- however, if the program does not terminate, the OS file descriptor will leak and you will eventually run out of file descriptors and die.
Some standards may mandate a specific behavior when the program terminates, either cleanly or through a crash, but the fact is that you cannot rely on this, as not all implementations follow it.
So your risk is that you will lose some of the data which your program believed it had written -- that would be the data which was sitting in the internal buffer but never flushed -- or you may run out of file descriptors and die.
So, fix the code.
|
STACK_EXCHANGE
|
One of the most common tasks when managing a firewall is updating or deleting the rule. Deleting a firewall rule should be done carefully because any mistake can expose the server to unwanted traffic.
In this guide, we will learn how to delete UFW rules on Ubuntu.
To follow this guide, you need:
- A Linux distribution with UFW installed and enabled
- A user account with sudo privileges
Delete a UFW Rule
Before making any change to the UFW firewall, it is important to know the existing rules. Listing the UFW rules shows which rules are currently defined on the firewall.
There are two ways to delete UFW rules:
- By rule number
- By specification
1. Delete a UFW Rule by rule number
Deleting the UFW rules by rule number is easier because you only need to specify the rule number to delete.
sudo ufw status numbered
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] OpenSSH                    ALLOW IN    Anywhere
[ 2] 80                         ALLOW IN    Anywhere
[ 3] 443                        ALLOW IN    Anywhere
[ 4] OpenSSH (v6)               ALLOW IN    Anywhere (v6)
[ 5] 80 (v6)                    ALLOW IN    Anywhere (v6)
[ 6] 443 (v6)                   ALLOW IN    Anywhere (v6)
Once the number has been identified, you can delete the rule:
sudo ufw delete 2
Deleting:
 allow 80
Proceed with operation (y|n)?
As you can see, you need to confirm the operation.
When a rule is removed by number, the order of the remaining rules changes. You should also be aware that deleting a rule by number does not automatically delete both the IPv4 and IPv6 versions of the rule. You will have to remove the corresponding IPv4 or IPv6 rule separately.
sudo ufw status numbered
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] OpenSSH                    ALLOW IN    Anywhere
[ 2] 443                        ALLOW IN    Anywhere
[ 3] OpenSSH (v6)               ALLOW IN    Anywhere (v6)
[ 4] 80 (v6)                    ALLOW IN    Anywhere (v6)
[ 5] 443 (v6)                   ALLOW IN    Anywhere (v6)
Make sure to always check the rule numbers before any deletion.
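For example, given the listing above, the leftover IPv6 rule for port 80 is now number 4, so removing it takes a second pass:
sudo ufw status numbered
sudo ufw delete 4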
2. Delete a UFW Rule by ufw delete Command
The second way to delete a rule is to use the same ufw command that created the rule, with the delete option. To be more specific, let's say you added a rule that opens port 443 with the command sudo ufw allow 443; to delete it, run:
sudo ufw delete allow 443
Rule deleted
Rule deleted (v6)
You can see that removing a rule by specification automatically removes the rule for both IPv4 and IPv6.
3. ufw disable
This will disable the firewall but will keep all rules in place. Disabling UFW doesn't delete rules from the system.
Disabling UFW will allow all connections through, so you need to be careful when using it. The good thing is that when you later enable UFW again, all rules become active again.
sudo ufw disable
Firewall stopped and disabled on system startup
4. UFW Reset
UFW reset will back up all rules to a location and then delete them. The system will go back to the default firewall state.
sudo ufw reset
Resetting all rules to installed defaults. This may disrupt existing ssh connections.
Proceed with operation (y|n)?
It can be useful if you would like to start from zero because of some misconfigurations. You also need to be careful when using this command as it may disrupt your ssh access to the server.
In this guide, we learned how to delete UFW rules in Ubuntu.
Make sure to always keep your ssh rule and don't delete it; otherwise, you can lose your ssh access.
|
OPCFW_CODE
|
PEEK and POKE, an "advance" and "dangerous" topic...
March 23 2002 at 7:07 PM
FAQ008 = How do you use PEEK and POKE?
well, not really dangerous, the most you can do is crash QBasic and possibly your computer (by "crash", I mean you have to restart it.. no permanent damage)
What PEEK and POKE do, is read from and write to memory. So, no harm comes from PEEKing at memory, you can PEEK all you want, but you'll want to avoid randomly POKEing at places.
To briefly cover memory arrangement, memory is divided into 65536 segments, or groups of memory. Then, in a segment, you can reference 65536 bytes, each byte being called an offset. (65536 segments and offsets, starting at 0 and ending at 65535)
POKE offset, byte
PEEK(offset) 'returns the byte value
But now, how do you change what segment to access? You use DEF SEG:
DEF SEG = segment
if you just use:
DEF SEG
then it'll reset to QBasic's default data segment, where all variables and strings are stored. This segment is what DEF SEG is set to by default, so if I didn't use DEF SEG in my program, that probably was why.
Anyway, an IP consists of 4 numbers, separated by periods (.) and each number ranges from 1 to 255 (0 isn't used). A byte's value, how conveniently, can range from 0 TO 255. So, an IP can be compactly stored in 4 bytes.
If you see an IP represented as a long integer, then, most-likely, if you put the number in a variable and look at the 4 bytes that make up that value, you'll find them to be the IP.
Ok, so how do you read or change the bytes of a variable? Well, you'd have to know the memory address first. All variables are stored in QB's data segment, so we know the segment, but what about the offset? You'll use VARPTR (VARiable PoinTeR) to find the offset.
DIM value AS LONG 'makes a 4-byte (long) integer
offset = VARPTR(value)
INPUT "Enter in a value: ", value
PRINT "Byte #", "Value"
FOR i = 0 TO 3
PRINT i, PEEK(offset + i)
NEXT i
As you can see, bytes (and almost everything else in a computer) are numbered starting at 0, not 1.
Note I declared value as a LONG-integer. SINGLE-precision floating-point values are stored in a very complex manner that I'm not going to get into. So you should probably limit your exploration to INTEGERs and LONG-integers (INTEGERs are the same as LONG-integers, but they are only 2 bytes long).
I'm sure, with some experimentation, you'll quickly see the relation of the bytes that make up a variable and its value. Perhaps you'll venture to figure out negative numbers too.
Now, to give you a safe environment to POKE in. You can't do any damage in the video segment, so you can have some fun there.
The video segment is segment &HA000 (that's hexadecimal). If you POKE in this segment, you'll change the graphics on the screen (if you are in a graphics mode and you don't POKE past the end of the screen [though no damage will be done if you do so, as long as you stay in &HA000]).
To plot a pixel of color c at coordinate (x, y) in screen 13, you simply do this:
DEF SEG = &HA000
POKE (y * 320&) + x, c
The & is on 320 to make it a LONG-integer, since the offset can be over 32767 (the limit of an INTEGER).
Here's a program to play with:
DEF SEG = &HA000
offset = (100 * 320&) + 160 'start at center of screen
col = 15
DO
k$ = INKEY$
IF k$ = CHR$(0) + CHR$(72) THEN offset = offset - 320 'move up
IF k$ = CHR$(0) + CHR$(75) THEN offset = offset - 1 'move left
IF k$ = CHR$(0) + CHR$(77) THEN offset = offset + 1 'move right
IF k$ = CHR$(0) + CHR$(80) THEN offset = offset + 320 'move down
IF k$ = "+" THEN col = col + 1
IF k$ = "-" THEN col = col - 1
POKE offset, col
LOOP UNTIL k$ = CHR$(27)
Notice a few things. First off, to move up or down a row, you +/- 320. The screen is 320 bytes (and pixels) wide in mode 13.
Notice that you can move off the right side of the screen and appear on the other side, but one row down. (same with moving off left side and appearing on the right side one row up).
Notice how you can move off the top of the screen and appear on the bottom. But you don't appear right away on the bottom, you have to press up a few more times before you do. Why is that? Well, the screen is 320*200 = 64000 bytes, but the segment is 65536 bytes long. So, for a couple of rows, you'll be off the screen.
Why are you able to go off the top of the screen and appear on the bottom? Well, QBasic will interpret -1 as 65535 and -2 as 65534, etc. (did you play around with negative numbers in the 4-byte program?) Though notice that you'll get an overflow error if you try to go off the bottom and appear on the top (numbers greater than 65535 give an error).
Oh, this should be obvious, but you can, of course, PEEK from the screen too.
|
OPCFW_CODE
|
I am using this in this time. On my laptop, but please don't tell this to Apple, because practically this is illegal. If you are making simple apps with minimum UI, you can use Theos. Also with Theos you can create cydia tweaks. Only one problem: codesign. I used all of this ways and all is working.
After you have created your certification file, you can upload it to Ionic Pro and build. But unfortunately I didn't find another way to upload the . So I decided to use a pay-as-you-go Mac-in-cloud account where you pay only for the minutes you are logged in, since the time I spend on a Mac is very limited, a few minutes per app publication.
Most frameworks like React Native and Ionic allow you to build on their servers, meaning that they can compile for you and provide you with the build outputs. Both of these are otherwise only available on OSX. To overcome this, there are 2 options that I am aware of. It is very simple to code in Xamarin and make your iOS apps using C# code. Build an iOS app without owning a Mac?
How to Deploy your App on an iPhone (Updated for )
Please correct me if I'm wrong. I'm new to mobile development and I would like to develop an app to submit to the Apple store. But I am heavily discouraged by the prices of the Macs I would need to develop the app I have in mind.
Getting a Flutter app on Linux
Let's say I know exactly what I want and how to code it. Can someone please tell me I'm delusional?
You need a Mac for serious iOS development. And they are not that expensive after all. And don't forget a handful of iOS devices to test on - apps that didn't get tested on the available hardware generally show deficiencies. The delusional part begins with "I know how to code it" Think of some weeks to get a project running that's worth showing someone.
- Deploy your iOS App on App Store without a Mac.
- How To Submit An App To The App Store (The Right Way)!
- Set up an App ID and entitlements!
Polishing it and making it "shop-worthy" will be tough work. I really can only think of the most useless apps i. Xcode is not a compiler - it is only necessary for generating the certificates to submit your app to the AppStore. Let me tell you step by step few years back I was in same situation.
Preparing an iOS app for release
Check this iOS requirements for Xamarin developer. Steps from that page: One : Install exp by running npm install -g exp Two : Configure app. Is this also possible with other frameworks, e. Qt and JavaFXPorts? DanielZiltener I am not familiar with neither, but I think the answer is no. When you're done with this, click "Create". Donald Duck Donald Duck 4, 13 13 gold badges 42 42 silver badges 66 66 bronze badges. The Vm works fine. My solution as below: 1.
Go to appleid.apple.com. Go to Application Loader, copy and paste the auto-generated password. Done.
There was a comment in the release notes to the effect that the build tooling should handle it. I tried to archive and distribute from VS on Windows, and got a file-path-length error, so that doesn't seem to work. I did install Xcode. This command worked fine for me with the new Xcode.
How To Develop iOS Apps On A Windows PC
DuaneCraw: Would you be able to share how you installed Xcode? Update: I have managed to download and install the version, and I am able to use Visual Studio to distribute my app. Still need to get the app-specific password from the Apple ID account page. If you did, how did you resolve those issues? DuaneCraw: Thank you very much for the steps you provided. I had originally installed Xcode, so instead of installing Xcode again… My previous successful upload was after I had installed Xcode; I did not have any issues with the path. Path, not including filename. The subsequent page asks if you want to enroll as an individual, as a company, or as a government organization.
Apple will attempt to confirm this information with your credit card company, so make sure you enter it correctly. Now you will be prompted with the cost and summary for the purchase. You have the option of automatic renewal every year, which saves having to remember to renew and prevents any chance of your apps becoming unavailable (apps are removed from the store once the account is no longer active).
Note : The following steps only apply to countries with online Apple Stores. For countries without online Apple Stores, the process will be slightly different, requiring you to fax your credit card information to Apple. Still here? Fill out the payment screen. Verify your billing information for the purchase. Finally, confirm your intent to purchase the membership:. At this point you should download Xcode by proceeding to the Apple App Store using the App Store icon on your application dock. Apple places the latest non-beta release in the App Store.
|
OPCFW_CODE
|
Applications and libraries/Generic programming/SyB
- 1 Approach: Scrap your Boilerplate (SyB) and Variants
- 2 Required features/Portability
- 3 Expressibility
- 4 Subset of data types covered
- 5 Usage
- 6 Error Messages
- 7 Amount of work per data type (Boilerplate)
- 8 Extensibility
- 9 Reasoning
- 10 Performance considerations
- 11 Helpful extra features
- 12 Discussion
Approach: Scrap your Boilerplate (SyB) and Variants
Required features/Portability
For all proposals:
Rank-2 types are supported by GHC and Hugs and are going to be part of Haskell'.
SyB1 and SyB2
type-safe cast (via deriving Typeable)
Type-safe cast is implemented in GHC. However other compilers do not support it and it is not going to be part of Haskell'.
SyB3
multiple-parameter type classes
Explicit type application
type class abstraction
non-standard instances (undecidable instances (recursive dictionaries) and overlapping instances)
Multiple-parameter type classes are supported in GHC and Hugs and should be part of Haskell'. (Note that this is a rare example where functional dependencies are NOT required, so regardless of the outcome of the FD/AT debate, this extension is likely to be supported in Haskell'.)
Explicit type application is also a relatively minor extension, requiring EmptyDataDecls (also likely to be a part of Haskell') to be encoded.
Type class abstraction could be an extension on its own but there is no Haskell implementation that actually supports it. However, type class abstraction can be emulated using MPTC.
SyB3 also requires the restrictions on instance declarations to be relaxed in two ways. First, undecidable instances allow type class constraints to be satisfied coinductively (the translation generates a recursive dictionary).
Secondly, SyB3 relies on overlapping instances to override generic definitions of type-indexed functions for specific types. Overlapping instances are not an essential part of SyB, but they do simplify the use of type-indexed operations.
- Is Haskell' going to relax the constraints on type class instances? If so, will SyB instances be valid Haskell' code?
SyB Reloaded and Revolutions (SyBRR)
GADTs are required for the type representations (Spine view and variants). GHC supports GADTs and they may be part of Haskell'.
Expressibility
We can do both producers and consumers, but we require different representations/operations for each.
To support generic functions of different arities we need, once more, different representations.
No local redefinitions.
Subset of data types covered
Supports sums of products and it has been shown to support some forms of GADTs.
Usage
Library Writer: Defines the generic machinery: the different views/operations (Data in SyB or Spine and variants in SyBRR); also defines the type representations (the class Typeable in SyB or Type in SyBRR); the higher-order generic combinators like everywhere and everything can also be defined by the library writer.
Power User: This kind of user is only needed in the SyBRR variant, where the type representations for new data types need to be provided manually. In SyB, there is no such need, since the compiler (GHC) can automatically derive that code.
User: The user defines generic functions using the infrastructure provided by the library writer. When using SyB, the user can easily support new data types just by appending deriving Typeable and Data to the data type declaration. The user benefits more from…
Error Messages
Users of SyB can probably give a better comment about error messages.
What kinds of type-error messages do you get when you make a mistake in your generic function?
Amount of work per data type (Boilerplate)
With SyB, we just need to use the deriving mechanism (for Typeable & Data). If this is not supported, one must define the operations (gfoldl, gunfold, etc.) manually.
With SyBRR, we need to provide the representations and extend the toSpine function.
Extensibility
SyB1/SyB2 are statically extensible.
SyB3 is dynamically extensible.
SyBRR is not extensible; extensibility would require extensible/open data types.
(We still need to define what static vs. dynamic extensibility means.)
Reasoning
Fermin Reig has done some work on reasoning with SyB.
Performance considerations
With SyB1/SyB2 there is a performance impact due to the type-safe casts.
SyB3 may have some performance impact due to the fact that generic functions are type-class overloaded. Instance specialisation may help here.
SyBRR is a representation-based approach, and thus passing representations around will have some impact.
Helpful extra features
TODO (users may have good ideas for this section)
What features could Haskell have that would make the approach easier to use?
Are those features realistically feasible?
|
OPCFW_CODE
|
Heroku button = painless deploys
This Friday I made a few things I think are cool. One was a Dockerfile for a piece of software called indielogin.com, the other was a Heroku deploy button for the same software.
I am a firm believer that one of the best ways to learn, is to get something working. My plan was simple:
- Download the software source (it is written in a scripting language)
- Build one or more docker containers using the scripting runtimes
- Try to get project to work (without services) in a docker container
- Fill in services using docker-compose
- Throw at Heroku and iterate through remaining issues
- Refine solution
- Adjust documentation
- Ask for feedback
Initially I was frustrated because I re-used some code I maintain, which is used to test PHP libraries' and projects' test suites across PHP runtimes. I had forgotten that the software I maintain is designed to avoid baking software into Docker images - a pattern CloudFoundry does not cater for. This was not the only problem; another was that it is built for local or internal CI systems, not the public web.
Due to this I had to change course to ensure the built Docker image contained the code. This could introduce its own challenges.
- The feedback cycle being longer
- The need to build an image every time you want to try a modification or new code.
I overcame the cost of the feedback cycle by having docker-compose mount the present working directory at the same path as the code directory baked into the Dockerfile. This means I can change a file and get immediate feedback locally without restarting the container, building a new image, etc.
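As a minimal sketch of that docker-compose arrangement (the service name, port, and /var/www/html path are assumptions for illustration, not taken from the actual project):

version: "3"
services:
  web:
    build: .              # image with the code baked in at /var/www/html
    ports:
      - "8080:8080"
    volumes:
      # Mount the working copy over the baked-in code path so local edits
      # appear in the running container without rebuilding the image.
      - .:/var/www/html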
I accepted that this would push deploy times up when pushing updates locally. My primary focus for this technology is preview branches, and getting people quickly spun up as users of the software rather than maintainers; I opted not to serve the needs of a package maintainer.
I had a few challenges around logging, the author's code style, and the lack of an environment-based source of facts that is common in cloud-native applications, but overall I am very impressed with how easy it is to deploy cloud software to Heroku in 2019.
While I am sure I will not win awards for my work - it may even be short-lived and not the best long-term solution - it does provide some options: "what if you did this?"
Missing technical fulfilment, such as multi-runtime support, I am happy for others to iterate on.
I remain confident my work is straightforward enough that others can learn from it and use it in my absence.
The experience really helped show off how mature software deployment has become since I started writing software in the late 1990s, when a week was considered a short deployment for complex interconnected software.
It made me happy that I was familiar with the challenges that presented themselves and the technologies the application used, and it helped me gain confidence in a platform I don't use much and grow my understanding of its ecosystem.
I feel that I now understand more about this emerging system and the IndieWeb than I did before starting.
I remain aware that there are things I might change. I am confident others can combine wider skills and know-how in this proprietary platform to leverage non-vendor specific knowledge.
I would like to further build on skills from this work to deploy branch-specific builds, perhaps using shared infrastructure, or using repository pattern to avoid service-boundaries altogether without disrupting live environments.
I have a greater appreciation for the outcomes of beautifully crafted platform as a service.
|
OPCFW_CODE
|
Hopefully this is really simple: I have an image with float:left assigned, then some P tags afterwards that wrap around the image. This is all no problem. But if I then have a ul list of bullets, they don't respect the position of the image and get rendered on top of/underneath the image! If I set the ul element to float:left, this works fine, only then any subsequent P tags get stuck to the right side of the UL elements instead of going to the next line…
problem visible here:
Click: "Information Architecture"
I’m lost as to why this is happening and no changes in firebug that I make seem to resolve it…
Thanks for any help you can offer!!
It looks like it has something to do with how you position your image, and with floating the image (and shadow images) and the ul elements.
I managed to get the paragraph under the ul element with a clear: both. I'm not sure if it works well in IE; sometimes that browser can be a pain…
you can do it inline or just make a new line for it in your css file.
Thank you both for your replies. Unfortunately, neither of these solutions will work programmatically in all cases, because the bullets are sometimes beneath the image and sometimes to the right of it (depending on how much text comes before it), and the P tag can't get a "clear: both" unless all P tags get one, which would stop any of the P tags from wrapping around the image. The reason I can't add this to just one P tag is that all of the content is generated, and I can't be sure whether the text on one page is longer or shorter than the distance before the image is done… having to decide this on a per-block basis would be a workaround, but a pain…
Isn’t there something I am missing? This just seems silly, do I need to wrap all of my P and UL tags in a DIV perhaps? And make the div float left? Hmm… maybe I’ll try that next…
Thanks to anyone else who may have some light-flashes!
pfff, no, wrapping it in a div didn't help either… I can get it all to wrap "left", but then the bullet pictures get stuck behind (!?!)* the image while the bullet text properly snugs up to the side of the image…
Am I just doing something CSS was never meant to do? This seems like such a basic thing, there must be some way to wrangle the UL tag into compliance…
*: that little blue dot under the image should be the bullet picture to the left of every LI
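For anyone else who lands here: a common fix for exactly this symptom is to make the list establish its own block formatting context, so the bullet markers no longer slide under the floated image. A minimal sketch (the bare ul selector is generic; scope it to the actual markup as needed):

/* A block formatting context keeps the whole list box beside the float,
   so the bullet images stop rendering underneath the picture. */
ul {
  overflow: hidden;
}

Unlike clear: both, this keeps the list next to the image when there is room, and it doesn't stop later paragraphs from wrapping around the float.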
Thank you for having this discussion! Now I can run lists in posts next to an image. I do wish there were a way to have the items wrap back under the visual once they've cleared the bottom (not CSS clear - English clear), but I'll take what I can get. At least this doesn't make me look as if people's machines got broken in a wayback machine from 1995.
Mary Baum, April 2012
Why are you using "float: left;" anyway? If you used absolute positioning then you wouldn't need to float anything, ever. "float: left;" or "margin-top: 5px;", etc., is like saying: please move this element a little to the left, or a little towards the top, somewhere on the screen. This no longer makes sense when you consider the types of devices that are used to access the internet today. Positioning elements with fixed pixel values, or "float somewhere, but we aren't sure exactly where", is pointless. Not all devices have the same resolution, or pixel size for that matter.
It seems that @djdaniel150 has set out to turn our forum comments into angry comments with his absolute positioning.
When I think of a load of absolutely positioned elements, I just think of Dreamweaver wizards that seem to love putting that particular property into every single div.
You must be logged in to reply to this topic.
|
OPCFW_CODE
|
Display corruption using multimonitor mode starting in Ubuntu 17.10 Radeon X1600
I have an old HP nw8440 laptop with a Radeon X1600 (FireGL) which has served me very well on Ubuntu up to 17.04. 17.10 does NOT like a "tall" monitor on the left.
Here's what's happening.
Internal display: 1920x1200
External Display: 1600x1200 (or 1200x1600 in portrait mode)
Problem 1: Changing the video configuration results in corruption of the screen, requiring a reboot. I can live with that; it's just irritating.
Problem 2 (the real problem) I cannot put the portrait mode on the left side.
Problem 2a: The icons are limited to the height of the shortest monitor.
Problem 2b: The pointer will "shadow" onto the second screen as if the displays overlap.
Desired solution: Behaving system with the tall monitor on the left.
Screenshot explanations:
This arrangement works (tall monitor on right side)
Right side tall monitor, works great
When I move the monitor to the left side, the screen corrupts.
Left side tall, corrupted.
I can reboot to get rid of the corruption, but the icons are stuck at the wrong height.
Left Side wrong icons
Also, I will get a "shadow" or "echo" of the pointer on the left-side monitor when it is in certain regions of the main screen. The echo is not rotated to match the orientation of the monitor, and is not captured in the screenshot (only the main pointer is captured, not the echo). This is probably a third problem.
The same thing happens under the Unity window manager or under Xorg.
So, how might I report or characterize this bug or set of bugs?
If this happens in both Wayland and Xorg, you probably should start by filing a bug against the linux package in Ubuntu, as it seems like it may be an issue in the kernel driver side.
@Phillip I've got exactly the same issue! Are you having any success?
None. I can stably run in Xorg mode with both monitors in landscape, using a reboot after each config change. It's not pretty, and I can't get mixed portrait and landscape. One of the answers points to a bug report.
How would I identify the appropriate upstream component with the problem?
For starters, try installing one more DE, e.g. KDE, check if it has the problem. It'd allow to determine if the problem is likely with Unity, or somewhere between driver and XServer.
It fails under Unity and Xorg - does that count? If not, I'll try KDE.
@PhillipRemaker by "Xorg" do you mean you've tested it with xinit ? Or what?
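One more thing that may be worth trying while this gets triaged: set the arrangement directly with xrandr from a terminal and see whether the corruption follows. This is only a sketch - the output names DP-1 and LVDS-1 are placeholders, so check yours with the query first:

# List connected outputs and their current modes/positions.
xrandr --query

# Rotate the external panel to portrait and place it left of the laptop panel.
xrandr --output DP-1 --rotate left --left-of LVDS-1

If xrandr can produce a clean layout that the Displays dialog cannot, that would also help narrow the bug down to the control-center/window-manager side rather than the kernel driver.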
I was able to solve this problem by making a fresh installation, upgrading to 64-bit 18.04 "Bionic Beaver" which properly supports multi-orientation multi-monitor on the HP nw8440 x1600 "FireGL" video.
Also, for some reason, using photos as background for lock screen and main screen prevented display corruption. When using the "Bionic Beaver" logo for lock and desktop screens, power save and lock screen returned to highly corrupted screens. I have no idea the root cause there, but using a photo background is a simple workaround against display corruption.
There may be some series of patches to make 17.10 32-bit work, but the move to 18.04 completely fixed it. There is some display corruption on the default wallpaper, but moving to a different wallpaper fixed that.
On an unrelated note, I couldn't enable the Wi-Fi in 18.04 until I went into the BIOS and disabled WLAN, rebooted, re-enabled WLAN, rebooted, and enabled simultaneous LAN/WLAN in the BIOS. I'm not sure which step solved the problem, but for the iwl3945 or iwlegacy driver, look more carefully at the BIOS settings. The symptom was that phy0 was "hard blocked" in the output of rfkill list all.
I have had a similar experience. I played around with the display settings while doing a random number of reboots, and now I have it working: one screen in landscape mode on the left-hand side, the other on the right-hand side in portrait mode. I think I should never touch this setting again until there is a valid update.
Regarding the misbehaviour of the screen placement, I found a bug ticket here:
https://bugs.launchpad.net/ubuntu/+source/gnome-control-center/+bug/1240929
Maybe you should ask there where to report the other bugs.
FYI: This looks fixed in 18.04. See my answer to this question.
|
STACK_EXCHANGE
|
At sektor5 we host a number of (developer) user group meetups. And we have done so for quite some time. Personally I co-founded the Ruby User Group in Vienna – vienna.rb – amongst other (study) groups. We thought it about time to share some tips & tricks on running a successful meetup group. And how to get goodies for your members.
The vienna.rb team keeps in touch via a private mailing list. We all have access to a GitHub repository to maintain the website and host the slides and we share custody over the meetup.com group and the Twitter account. In practice you’ll find out soon enough who likes doing what best. Hustling sponsors, getting people to do a talk, making pictures, writing recaps, updating the social channels. We are in contact at least once a week as we publish our ‘picks’ (favorite new gems, new releases and other funsies) every Wednesday, which is a great way to stay up to date. Yay for continuity.
Do your homework: check which development shops are looking to hire people and ask them to sponsor your event. Keep an eye on various job boards (startus.cc, viennastartupjobs.com or karriere.at). For a group of forty thirsty developers you need about 200 euro (excluding tax). GitHub usually helps you celebrate the little big moments, like hitting the 200 meetup.com member mark. Contact them via their community platform and make sure to mention the special occasion.
Having a handful of speakers who will frequently commit to giving a talk is gold. Tell the audience at the meetup that the call for proposals is always open and you’d love to hear from them. And contact people in your field who are active and opinionated on Twitter and on their blog.
Make sure (all) your talks are in English. In German-speaking countries this is still often not the norm, but in a big city like Vienna there are many expats who will otherwise feel excluded from your meetup. Your speakers get to prepare for that conference talk they want to give one day. And let's be honest, most communication in the developer world is in English anyway.
Get someone high-profile in every now and then. Keep an eye on conferences taking place and invite international speakers who are around for those conferences to talk at your user group meetup. Offering them a place to stay and a meal to eat often suffices.
There’s a few things you’ll need to have in your toolbelt in order to throw a user group meetup. First off: you’ll need a location, preferably free of costs. Co-working spaces are usually interested in hosting developer meetups as their members are always looking to hire coders. You could also contact companies you know are looking to hire, or ask your employer to ‘give back to the community’ by letting you host a user group.
Then you will need a sufficient amount of chairs, steady Wifi, a projector, adapters for various laptops and a functional ventilation system. Believe me, the latter is more important than you might think.
You will need a page on meetup.com, lanyrd.com, a Facebook page or a website… anything, some way to get people to join your little community and RSVP to your events – so you will actually know how big a group to cater to.
Keep your Twitter account and website active aggregating news (in your area). Chances are you regularly look at Hacker News and the likes anyway, why not share valuable content with your followers? The vienna.rb team stays in touch with people and companies that sponsored or talked at one of our meetups, so we’ll hear about their newest features / endeavors first. Creating more contact moments than around meetups only helps building a community.
Attending other user groups you might also scout speakers and topics interesting for your community.
Arrange some giveaways for your members and speakers.
– Ask your sponsor(s) or a local startup (or Github) for stickers, shirts, gadgets or licenses
– Sign up for the O’REILLY User Group program to get some books (and shirts!) to raffle out at your meetup
– Manning has a similar program, clickety-click to sign up
– … and so does the Pragmatic Programmer – contact them at firstname.lastname@example.org
– Jetbrains often sponsors licences for their IDE
– Roll your own, create your own logo and print it on stickers or tote bags
Extra Pro tips:
– Promote your user group at (beginners') workshops to ensure your community grows beyond the familiar faces
– Contact a sponsor or someone who studies 'something' with audiovisual stuff to get a livestream/recording going. Attending a meetup for the first time is a whole lot less scary when you know how it's going down from the video(s) you have watched.
|
OPCFW_CODE
|
What is Blockchain Technology?
In this whitepaper, I first give a high-level overview of blockchain technology and make a case for blockchain as security theater for privately permissioned digital ledgers. According to the National Institute of Standards and Technology at the US Department of Commerce, blockchains are "immutable digital ledger systems implemented in a distributed fashion (i.e. without a central repository) and usually without a central authority. At its most basic level, they enable a community of users to record transactions in a ledger public to that community such that no transaction can be changed once published". By this definition, one cannot host a privately permissioned digital ledger and still call it a "blockchain".
Illustration from “On Distributed Communication Networks”, showing three types of networks according to Baran.
Google operates the largest ad exchange in the world and decided to start investigating the use of blockchain technology in its exchange to root out "fraudulent actors." Google's claim that it has started integrating blockchain technology into its exchange is nothing but security theater, since Google would essentially write itself out of the deal between advertisers and publishers if it were to create an immutable, open, distributed public ledger for its exchange. We hear a lot of hype around "blockchain" technology, and it is important to temper this hype with the realities of implementing it. Not everything can be put into a blockchain, and not everything should. This blog post gives a high-level overview of blockchain (appendix) technology and how it pertains to Google's ad exchange.
What is blockchain technology? Andreas Antonopoulos, author of Mastering Bitcoin, eloquently breaks blockchains down further into 5 pillars:
- Open: Anyone can access it, and participate in it without authorization, ID, ethnic origin, etc. Blockchains do not know if you are human or a piece of software when you use it.
- Borderless: It does not matter where you are, where you live, or where you travel. The blockchain is always there.
- Neutral: Anyone can make an exchange. The purpose of the exchange and the identity of the sender and receiver are not regulated.
- Censorship Resistant: If someone wants to stop a transaction on a blockchain, they cannot. No transaction can be censored.
- Public: The idea is that every exchange on the blockchain is verifiable on the network.
When is Blockchain Technology Security Theater?
Blockchain technology could be used to record the transfer of ad space on various sites. It could also be used to monitor user actions such as clicks and supplier site data (lead generation, clicks, re-directs). The advantage would be that it is publicly auditable, and Supply Side Platforms (SSPs) would not have to trust Google to know how much is owed to the publisher. In creating a truly distributed, transparent, open, and auditable ledger, the ad exchange is essentially writing itself out of necessity. For the blockchain to be secure, it must be open (otherwise the DSPs and SSPs are still simply trusting Google's exchange). For the ad exchange to be open, its monetization would eventually be cut out, since it is just a middleman between the SSP and the DSP.
Google claims to "root out fraudulent actors" by implementing blockchain technology into its ad exchange. If this were true, there would be fewer attack vectors for click fraud. In reality, anyone can access a true blockchain and participate in it without authorization or identification. Blockchains do not know if the user is a human or a piece of software. To mitigate this, Google could try using a public key encryption scheme based on tokens integrated into devices within its control (such as Android phones). However, it would have less control over other platforms like Internet Explorer, iOS devices, and countless others. Effectively, using blockchain technology is not truly going to address fraudulent actors any better than not using blockchains.
It can be argued that using blockchain technology would help advertisers on the Demand Side Platform (appendix) to audit charges on traffic to their sites. However, it is not in Google's interest to prevent click fraud unless it has a way of capitalizing on this security feature. Only the companies trying to advertise their products on the Demand Side Platform want this protection, and they are often best served by purchasing third-party AdWords click fraud prevention software. In reality, click fraud benefits Google just as much as the publisher hosting the ads on the Supply Side Platform (SSP). The more click fraud, the more money the exchange makes as well, which is why Google benefits primarily from using a permissioned blockchain for ad exchange purposes.
A blockchain is a computationally expensive data structure that is bigger and slower than a database. If Google decides to implement a private, permissioned blockchain to keep its ad exchange relevant, it will simply slow transactions down with no advantages over the current system, as DSPs and SSPs would not be allowed to see or write to the blockchain. Even Google's most senior advertising executive seems uncertain about implementing blockchain technology: "[blockchain technology] is a research topic, so I don't have anything super-definitive to say. We have a small team that is looking at it. The core blockchain technology is not something that is super-scalable in terms of the sheer number of transactions it can run," says Sridhar Ramaswamy, Google's senior vice president of ads and commerce.
Below, you will find a flow chart from a peer-reviewed paper that presents a structured methodology for determining whether or not a blockchain is the appropriate technical solution to a given problem. If we follow its logic, we find that Google benefits most from keeping its image as a Trusted Third Party (TTP), not from creating a blockchain that would reduce dependency on a trusted third party. SSPs and DSPs might want to create a blockchain, but blockchains are too computationally expensive to scale to the entire world. Therefore, they also do not benefit from a blockchain to keep track of their ledgers.
A flow chart for determining whether or not a blockchain is the appropriate technical solution to solve a given problem.
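To make the flow chart's reasoning concrete, here is a rough Python sketch of the kind of decision procedure such charts encode. The questions and their order are a paraphrase for illustration, not a quote of the cited paper's chart:

def suggest_ledger(needs_shared_state, multiple_writers,
                   trusted_third_party_ok, writers_known_and_trusted):
    """A rough 'do you need a blockchain?' decision, for illustration only."""
    if not needs_shared_state or not multiple_writers:
        return "ordinary database"
    if trusted_third_party_ok:
        return "database run by the trusted third party"
    if writers_known_and_trusted:
        return "shared/replicated database"
    return "blockchain (permissioned if writers are known, public otherwise)"

# The argument above: Google prefers to remain the trusted third party,
# so the procedure never reaches a blockchain for its exchange.
print(suggest_ledger(True, True, True, True))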
In creating a platform that is truly distributed, transparent, and auditable, the exchange (Google) would be abdicating their power and role as the broker between the SSP and DSP. For the blockchain to be secure, it must be open, and for it to be open, the ad exchange’s monetization would be cut out eventually. Otherwise, the DSPs and SSPs are still just trusting Google for exchange management via a private permissioned blockchain. If the technology exists for this blockchain, the need for a trusted third party is eliminated. Google is claiming they are creating a more secure exchange platform via blockchain technology, when in reality, they are using security theater in order to continue establishing themselves as a Trusted Third Party (TTP). In an ideal world, there will be decentralized autonomous applications that replace the need to rely on third parties like Google, but they will certainly not be built by the beneficiaries of the current power structure.
- Blockchain: Immutable digital ledger data structure implemented in a distributed fashion (i.e. without a central repository) and usually without a central authority. At its most basic level, they enable a community of users to record transactions in a ledger public to that community such that no transaction can be changed once published.
- Demand Side Platform: Advertisers trying to market their products will find a demand-side platform to help them create ad campaigns and target specific demographics and audiences. The advertisers connect to DSPs primarily for ease of use since exchanges can be cumbersome to set up on their own.
- Supply Side Platform: Publishers trying to monetize on advertisements connect to supply-side platforms that support various apps and websites. These platforms help connect advertisers to an exchange like Google AdWords.
- Writers: Refer to entities with write access to the database/blockchain, i.e. in a blockchain setting, a writer corresponds to consensus participants.
- Trusted Third Party (TTP): TTPs can function as a certificate authority.
- Private vs Public blockchains: Public and private permissioned blockchains differ in that a public blockchain allows anyone to read the contents of the chain and thus verify the validity of the stored data, while a private blockchain only allows a limited number of participants to read the chain. Note that for any blockchain-based solution it is possible to make use of cryptographic primitives in order to hide privacy-relevant content.
- “Google Sees Future of Advertising in Blockchain”, PYMNTS.com: 22, March 2018. https://www.pymnts.com/google/2018/google-sees-future-of-advertising-in-blockchain/
- "Blockchain Technology Overview", National Institute of Standards and Technology, US Department of Commerce, January 2018. https://csrc.nist.gov/CSRC/media/Publications/nistir/8202/draft/documents/nistir8202-draft.pdf
- Mastering Bitcoin, Andreas Antonopoulous: https://github.com/bitcoinbook/bitcoinbook/blob/develop/ch02.asciidoc
Further Reading on Blockchain and How it Can Impact Marketing:
- Forbes Community Voice: “Why Marketers Should Pay Attention To Blockchain” https://www.forbes.com/sites/forbescommunicationscouncil/2018/01/24/why-marketers-should-pay-attention-to-blockchain/ (Jun 2018)
- Forbes: “10 Ways Blockchain Could Change The Marketing Industry This Year” https://www.forbes.com/sites/forbesagencycouncil/2018/02/27/10-ways-blockchain-could-change-the-marketing-industry-this-year/#1a6066c348ba (Feb 2018)
- HBR: “What Blockchain Could Mean for Marketing” https://hbr.org/2018/05/what-blockchain-could-mean-for-marketing (May 2018)
- Ramp Up: “How Blockchain is Changing Digital Marketing” https://rampedup.us/blockchain-digital-marketing/ (Feb 2018)
|
OPCFW_CODE
|
TsMorphMetadataProvider doesn't find source file
Describe the bug
When using TsMorphMetadataProvider, my application no longer starts because some source file is apparently missing.
Stack trace
(node:14672) UnhandledPromiseRejectionWarning: Error: Source file for entity './dist/entities/CoreEntity.js' not found, check your 'entitiesTs' option. If you are using webpack, see https://bit.ly/35pPDNn
at TsMorphMetadataProvider.getSourceFile (/home/oliver/fairmanager/backend/core-web3/.yarn/$$virtual/@mikro-orm-reflection-virtual-84f2ec7642/0/cache/@mikro-orm-reflection-npm-4.0.0-rc.0-d5f10a9522-1e54d859bc.zip/node_modules/@mikro-orm/reflection/TsMorphMetadataProvider.js:80:19)
at TsMorphMetadataProvider.getExistingSourceFile (/home/oliver/fairmanager/backend/core-web3/.yarn/$$virtual/@mikro-orm-reflection-virtual-84f2ec7642/0/cache/@mikro-orm-reflection-npm-4.0.0-rc.0-d5f10a9522-1e54d859bc.zip/node_modules/@mikro-orm/reflection/TsMorphMetadataProvider.js:26:28)
at TsMorphMetadataProvider.getExistingSourceFile (/home/oliver/fairmanager/backend/core-web3/.yarn/$$virtual/@mikro-orm-reflection-virtual-84f2ec7642/0/cache/@mikro-orm-reflection-npm-4.0.0-rc.0-d5f10a9522-1e54d859bc.zip/node_modules/@mikro-orm/reflection/TsMorphMetadataProvider.js:23:89)
at async TsMorphMetadataProvider.readTypeFromSource (/home/oliver/fairmanager/backend/core-web3/.yarn/$$virtual/@mikro-orm-reflection-virtual-84f2ec7642/0/cache/@mikro-orm-reflection-npm-4.0.0-rc.0-d5f10a9522-1e54d859bc.zip/node_modules/@mikro-orm/reflection/TsMorphMetadataProvider.js:58:24)
at async TsMorphMetadataProvider.initPropertyType (/home/oliver/fairmanager/backend/core-web3/.yarn/$$virtual/@mikro-orm-reflection-virtual-84f2ec7642/0/cache/@mikro-orm-reflection-npm-4.0.0-rc.0-d5f10a9522-1e54d859bc.zip/node_modules/@mikro-orm/reflection/TsMorphMetadataProvider.js:48:36)
at async TsMorphMetadataProvider.initProperties (/home/oliver/fairmanager/backend/core-web3/.yarn/$$virtual/@mikro-orm-reflection-virtual-84f2ec7642/0/cache/@mikro-orm-reflection-npm-4.0.0-rc.0-d5f10a9522-1e54d859bc.zip/node_modules/@mikro-orm/reflection/TsMorphMetadataProvider.js:33:17)
at async TsMorphMetadataProvider.loadEntityMetadata (/home/oliver/fairmanager/backend/core-web3/.yarn/$$virtual/@mikro-orm-reflection-virtual-84f2ec7642/0/cache/@mikro-orm-reflection-npm-4.0.0-rc.0-d5f10a9522-1e54d859bc.zip/node_modules/@mikro-orm/reflection/TsMorphMetadataProvider.js:19:9)
at async MetadataDiscovery.discoverEntity (/home/oliver/fairmanager/backend/core-web3/.yarn/$$virtual/@mikro-orm-core-virtual-3ad60f3fc1/0/cache/@mikro-orm-core-npm-4.0.0-rc.0-779fca6c16-4355f3c4f9.zip/node_modules/@mikro-orm/core/metadata/MetadataDiscovery.js:164:13)
at async MetadataDiscovery.discoverDirectories (/home/oliver/fairmanager/backend/core-web3/.yarn/$$virtual/@mikro-orm-core-virtual-3ad60f3fc1/0/cache/@mikro-orm-core-npm-4.0.0-rc.0-779fca6c16-4355f3c4f9.zip/node_modules/@mikro-orm/core/metadata/MetadataDiscovery.js:103:13)
at async MetadataDiscovery.findEntities (/home/oliver/fairmanager/backend/core-web3/.yarn/$$virtual/@mikro-orm-core-virtual-3ad60f3fc1/0/cache/@mikro-orm-core-npm-4.0.0-rc.0-779fca6c16-4355f3c4f9.zip/node_modules/@mikro-orm/core/metadata/MetadataDiscovery.js:65:9)
(Use `node --trace-warnings ...` to show where the warning was created)
To Reproduce
Steps to reproduce the behavior:
Switch from ReflectMetadataProvider to TsMorphMetadataProvider?
Expected behavior
My application starts as before when using the ReflectMetadataProvider.
Additional context
The directory for entitiesTs is properly declared as far as I can tell. The class being complained about here is an abstract base class for other entities.
Versions
Dependency | Version
node | 14
typescript | 3
mikro-orm | 4
your-driver | ?
So how do you define entities and entitiesTs? How do you run the project? Node or ts-node? Do you have declaration files enabled?
I'm defining them as:
this.orm = await MikroORM.init({
  debug: true,
  baseDir: __dirname + "/..",
  entities: ["./dist/entities"],
  entitiesTs: ["./src/entities"],
});
I'm running the project through node with /usr/bin/node --unhandled-rejections=strict --require /home/oliver/fairmanager/backend/core-web3/.pnp.js /home/oliver/fairmanager/backend/core-web3/dist/app.js.
I don't have declaration files enabled unless they are enabled by default. I'm not sure what the term is referring to in this context.
In v4 you need to have them; they are the .d.ts files next to the compiled JS files. Enable them in your tsconfig.json, as in the sketch below.
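A minimal sketch of the relevant tsconfig.json fragment (only the declaration flag is the point here; the other option shown is illustrative):

{
  "compilerOptions": {
    "declaration": true,  // emit .d.ts files next to the compiled .js
    "outDir": "./dist"
  }
}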
Ah, yes, now I see. I also started stepping into the implementation and saw it is looking for '/home/oliver/fairmanager/backend/core-web3/dist/entities/CoreEntity.d.ts'. I'll see what I can do. Thanks.
Yeah, this has changed: in v3 the real TS source files were used, now it uses the .d.ts files. Will improve the error message.
Awesome. That was it. How would you feel about a note regarding this TS compiler setting around https://mikro-orm.io/docs/next/installation#entity-discovery-in-typescript or https://mikro-orm.io/docs/next/metadata-providers/#tsmorphmetadataprovider ?
Yeah I would say both places should have a note about this.
|
GITHUB_ARCHIVE
|
Zoom control icons disappear after configuring zoomControlOptions position
Issue description
zoomControlOptions position is not working
Steps to reproduce and a minimal demo of the problem
<agm-map
[latitude]="lat"
[longitude]="lng"
[zoom]="zoom"
[zoomControl]="true"
[zoomControlOptions]="{position: 'TOP_LEFT'}"
(mapClick)="mapClicked($event)"
(mapReady)="onMapReady($event)">
https://stackblitz.com/edit/angular-google-maps-demo-7cmc4g?file=app/app.component.html
What steps should we try in your demo to see the problem?
Current behavior
Zoom controls disappear
Expected/desired behavior
Should show on the position specified
angular2 & angular-google-maps version
angular 5.0, latest angular-google-maps version
Other information
I've never used this option myself and I don't currently have a handy Angular project with me where I can quickly try this, but upon inspecting the source code, I see that ZoomControlOptions (the type you're supposed to give it) is using ControlPosition, which is an enumeration, not a string; you're giving it a string.
thx, lazarljubenovic. I am new to typescript.
Can you give me some code example how to pass the right param?
I want to place the controls RIGHT_TOP
Need to import googlemap types
npm install @types/googlemaps
Then in the component import the types.
import { } from 'googlemaps';
Then need to define the position for use
zoomPosition: google.maps.ControlPosition = google.maps.ControlPosition.TOP_RIGHT;
Then pass that through interpolation to the element
[zoomControlOptions] = "{position: zoomPosition}"
Works when I try it in your stackblitz example above.
@timcblank Many thx for your help. I got 'ReferenceError: google is not defined' error. How to fix it?
"google" is defined as a type from the types install with the project. You get access to those types from import. Make sure you've also included the AgmModule with your app module and any lazy loaded modules using AGM correctly as well. That info should be part of the getting started bit on the website for this library.
@timcblank thx for the tips.
Adding AgmModule to the app module didn't fix the issue.
Only after I added it to the index.html did it start working. But adding it to the index.html isn't the right way, is it?
I use AGM with Ionic 3.x. To help other people, here is what I did:
1. npm install @types/googlemaps
2. ts:
import { } from 'googlemaps';
declare const google: any;
zoomPosition: any; // define the variable above the constructor
onMapReady(map) {
  // Take the enum member itself; don't overwrite the ControlPosition enum.
  this.zoomPosition = google.maps.ControlPosition.TOP_RIGHT;
}
3. html:
<agm-map
[latitude]="lat"
[longitude]="lng"
[zoom]="zoom"
[zoomControlOptions] = "{position: zoomPosition}"
(mapClick)="mapClicked($event)"
(mapReady)="onMapReady($event)">
@timcblank thanks again. People like you deserve lots of respect.
@lolaswift not sure how you got it working
it is not working for me
this is what i did
import { } from 'googlemaps';
declare const google: any;
zoomPosition = google.maps.ControlPosition.TOP_LEFT
i still get 'ReferenceError: google is not defined'
any help will be appreciated!
thanks
@sumitdaga
You need to run it in your mapReady function.
onMapReady(map) {
  map.setOptions({
    zoomControl: true, // boolean, not the string 'true'
    zoomControlOptions: {
      position: google.maps.ControlPosition.RIGHT_CENTER
    }
  });
}
Don't forget to have mapReady on your template like this:
<agm-map
[latitude]="lat"
[longitude]="lng"
[zoom]="zoom"
(mapClick)="mapClicked($event)"
(mapReady)="onMapReady($event)">
@lolaswift
Thanks a lot ! ...it worked !
still curious though, how the stackblits example of timcblank worked !
thanks anyway
@sumitdaga
I am not sure. I guess it has something to do with lazyloading. You can only set those options when the map is ready.
All googlemap types are imported through the command
import { } from 'googlemaps';
You don't need to declare the variable google at all. It becomes available inside the MapsAPILoader load function, the onMapReady function, or a function called from the map element's callback. You can't use the google types outside of those.
All you need is to:
import { ControlPosition } from '@agm/core/services/google-maps-types';
and then:
zoomControlOptions: {
position: ControlPosition.TOP_LEFT
}
thanks! @shoudaos
this makes more sense!
@shoudaos. Many thanks!
➡️ Im closing this because I don't see any bug. Feel free to reopen if you still think there's a bug or comment below.
@SebastianM I am alson stuck with this issue
<agm-map #gm [latitude]="center?.lat" [longitude]="center?.lng" [zoom]="zoom" [usePanning]='true'
[zoomControlOptions]="{position: 'TOP_LEFT'}">
can you please give a right example
@mahfuzur . I do it like this
ts:
import { ControlPosition } from '@agm/core/services/google-maps-types';
onMapReady(map) {
  map.setOptions({
    zoomControl: true, // boolean, not the string 'true'
    zoomControlOptions: {
      position: ControlPosition.TOP_LEFT
    }
  });
}
html:
<agm-map [latitude]="lat" [longitude]="lng" [zoom]="zoom" (mapClick)="mapClicked($event)" (mapReady)="onMapReady($event)"></agm-map>
I feel like this is an issue, or the documentation should be updated. The documentation tells me I can do
[zoomControlOptions]="{position: LEFT_TOP}"
Instead I have to do
public mapReadyHandler(map): void {
  this.map = map;
  this.map.setOptions({
    zoomControlOptions: {
      position: google.maps.ControlPosition.LEFT_TOP
    }
  });
}
Thank you! This solution helped me.
I understand this post is closed, but for anyone looking for a solution to move the control position, here it is:
public getMapInstance(map) {
  this.map = map;
  map.setOptions({
    zoomControl: true, // boolean, not the string 'true'
    zoomControlOptions: { position: google.maps.ControlPosition.LEFT_TOP }
  });
}
don't follow @lolaswift or @diomededavid 's advice ;)
set position to ControlPosition.TOP_LEFT, importing ControlPosition from @agm/core
If all else fails, you can look up the position value from here:
TOP_LEFT: 1
TOP_CENTER: 2
TOP: 2
TOP_RIGHT: 3
LEFT_CENTER: 4
LEFT_TOP: 5
LEFT: 5
LEFT_BOTTOM: 6
RIGHT_TOP: 7
RIGHT: 7
RIGHT_CENTER: 8
RIGHT_BOTTOM: 9
BOTTOM_LEFT: 10
BOTTOM_CENTER: 11
BOTTOM: 11
BOTTOM_RIGHT: 12
CENTER: 13
and set position to the number value.
In AGM2.0 this will be addressed comprehensively
|
GITHUB_ARCHIVE
|
import sys
import os
sys.path.append(os.path.dirname(os.path.dirname(__file__)))
from lib.utils import CppTokenizer
class FunctionState(object):
"""
Utility object for handling the building of our functions
for various state control
"""
class Container:
opposites = {
'{' : '}',
'[' : ']',
'<' : '>',
'(' : ')'
}
def __init__(self):
self.char = None
self.count = 0
@property
def valid(self):
return self.char is not None
def is_close(self, other):
return self.opposites[self.char] == other
# -- Lookup States
STATIC_OR_VIRTUAL = 0x0000001
IS_CONST = 0x0000010
TYPE = 0x0000100
NAME = 0x0001000
ARGS = 0x0010000
ADDENDUM = 0x0100000
IMPL = 0x1000000
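    # A declaration is consumed left to right through these states:
    #   [static|virtual] [const] <type> <name>(<args>) [const] [{ impl }]
    # _resolve() advances _lookup_state as each piece is recognised.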
    def __init__(self):
        self._static_or_virtual = None
        self._is_const_result = False
        self._type = []
        self._type_and_name = []
        self._name = ''  # filled in by _resolve once the declaration is parsed
        self._args = []
        self._addendum = None
        self._impl = None
        self._valid = False
        self._container = self.Container()
        self._lookup_state = self.STATIC_OR_VIRTUAL
@property
def valid(self):
return self._valid
def _resolve(self, token):
"""
Given a token and the information gathered so far,
We move through states based on what should proceed one item
after another.
:param token: The token that we're looking to utilize
"""
if self._lookup_state == self.STATIC_OR_VIRTUAL:
if token in ('', ' '):
return
self._lookup_state = self.IS_CONST
if token in ('static', 'virtual'):
self._static_or_virtual = token
                return # Vital! We consumed this!
if self._lookup_state == self.IS_CONST:
if token in ('', ' '):
return
self._lookup_state = self.TYPE
            if token == 'const':
self._is_const_result = True
return # Consumed!
if self._lookup_state == self.TYPE:
# When we enter the type lookup, this is the
# first time we might have multiple tokens to consume
# Check for encapsulation
if token in ('<', '('):
if not self._container.valid:
self._container.char = token
self._container.count = 1
elif self._container.char == token:
self._container.count += 1
if self._container.valid and self._container.is_close(token):
self._container.count -= 1
if self._container.count <= 0:
# Terminus
self._container.char = None
if not self._container.valid and token in ('const', '{', ';', '='):
#
# We're out of scope should have reached the end of the type,
# name, and args. Because of this, we now have to filter
# backwards to find the name and args, splitting them from
# the type
#
scope_count = 0
first_scope = True
rem_count = 0
found_scope = False
for rev_token in self._type_and_name[::-1]:
rem_count += 1
if first_scope and (rev_token == ' ' or rev_token.isalnum()):
continue
if rev_token == '=':
found_scope = False
if rev_token == ')': # Remember, we're in reverse
found_scope = True
if scope_count >= 1:
self._args.append(rev_token)
first_scope = False
scope_count += 1
elif rev_token == '(':
found_scope = True
scope_count -= 1
if scope_count >= 1:
self._args.append(rev_token)
elif scope_count >= 1:
self._args.append(rev_token)
elif scope_count == 0:
self._name = rev_token
self._valid = found_scope # If we've made it here, we should be good
break
self._args = self._args[::-1] # Went in backwards
self._type = ''.join(self._type_and_name[:-rem_count])
#
# Make sure we take care of the terminal token.
#
if token == 'const':
self._addendum = 'const'
if token == '{':
self._container.char = token
self._container.count = 1
self._impl = token
self._lookup_state = self.IMPL
else:
self._type_and_name.append(token)
elif self._lookup_state == self.IMPL:
if not self._container.valid:
return
if token == '{':
self._container.count += 1
if token == '}':
self._container.count -= 1
if self._container.count <= 0:
                    # We've terminated
self._impl += token
self._container.char = None
else:
self._impl += token
def to_dict(self):
return {
'static_or_virtual' : self._static_or_virtual,
'is_const' : self._is_const_result,
'type' : self._type.strip(),
'method' : self._name.strip(),
'args' : ''.join(self._args),
'addendum' : self._addendum,
'impl' : self._impl
}
@classmethod
def from_text(cls, view, text):
state = FunctionState()
izer = CppTokenizer(view, use_line=text)
with izer.include_white_space():
for token in izer:
state._resolve(token)
__import__('pprint').pprint(state.to_dict())
print (state.valid)
return state
function_string = "std::string type() const override;"
# function_string = 'virtual foo<bar<baz, std::function<void(const QString &)>>> my_foo(QString blarg = "faz", foo<bar(kattt)> ok);'
fs = FunctionState.from_text(None, function_string)
# f = """{
# heelp.clean();
# my grod = foo.bar();
# {
# "okay";
# }
# }
# """
# import re
# output = ''
# ws_finder = r'((\s)+)?'
# for line in f.split('\n'):
# trim = line.strip()
# output += ws_finder + re.escape(trim) + ws_finder
# print (output)
# print (re.match(output, f))
|
STACK_EDU
|
Problem with using max connections limit
Problem
Both BaseRedisBroker and RedisScheduleSource have the argument max_connection_pool_size, which is passed to ConnectionPool. However, the ConnectionPool implementation throws redis.exceptions.ConnectionError when the maximum number of connections is exceeded. This exception is not caught and bubbles all the way up, which kills the scheduler (and broker).
# Minimal working example (with scheduler)
import asyncio
from taskiq.scheduler.scheduled_task import ScheduledTask
from taskiq_redis.schedule_source import RedisScheduleSource
def get_scheduled_task():
return ScheduledTask(
task_name="test_task", labels={}, args=[], kwargs={}, cron="1 1 0 0 0"
)
source = RedisScheduleSource("redis://<IP_ADDRESS>:6379", max_connection_pool_size=5)
async def subtest():
task = get_scheduled_task()
await source.add_schedule(task)
print("task added")
await source.delete_schedule(task.schedule_id)
print("task deleted")
async def test():
await asyncio.gather(*[subtest() for _ in range(10)])
if __name__ == "__main__":
asyncio.run(test())
Suggestions
I found out that redis provides redis.asyncio.BlockingConnectionPool, which waits for a connection instead of throwing the exception. There's a configurable timeout (after which the exception is raised). Despite the name, the asyncio variant of BlockingConnectionPool does not actually block the whole program; context is correctly switched on async sleep.
We could leverage this class to make the max connections limit easier to work with. Otherwise, a user would need to override the taskiq-redis classes and replace ConnectionPool with BlockingConnectionPool manually, as sketched below.
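A minimal sketch of the pool behaviour itself (not a full taskiq-redis subclass; BlockingConnectionPool.from_url with max_connections and timeout is the actual redis-py API):

import asyncio

from redis.asyncio import BlockingConnectionPool, Redis

async def main():
    # Wait up to 10 seconds for a free connection instead of raising
    # ConnectionError the moment all 5 connections are busy.
    pool = BlockingConnectionPool.from_url(
        "redis://localhost:6379",
        max_connections=5,
        timeout=10,
    )
    client = Redis(connection_pool=pool)
    await client.ping()
    await client.aclose()  # use close() on older redis-py versions

asyncio.run(main())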
I see the following possibilities:
Add new argument connection_pool_cls: Type[ConnectionPool] for RedisScheduleSource and BaseRedisBroker. This would contain any ConnectionPool subclass (including BlockingConnectionPool). This is the one I prefer.
Add new argument connection_pool: ConnectionPool for RedisScheduleSource and BaseRedisBroker. This would contain an instance of any ConnectionPool subclass (including BlockingConnectionPool). The URL would have to be duplicated in this case (passed both to the ConnectionPool instance and to RedisScheduleSource itself, even if unused, in order to maintain a compatible API).
Add new argument blocking: bool for RedisScheduleSource and BaseRedisBroker. Based on the value, we'd internally decide whether to use ConnectionPool or BlockingConnectionPool. This is the least flexible, because behaviour cannot be easily changed from outside (e.g. by subclassing ConnectionPool).
In all cases, the change can be made backwards compatible (although I'd argue that current behaviour with uncaught exception doesn't make sense and BlockingConnectionPool is a good default). Alternatively, we could:
Change the implementation to BlockingConnectionPool and throw away ConnectionPool altogether. This would minimize the changes (just replace ConnectionPool with BlockingConnectionPool), but it's a breaking change.
Notes
redis.asyncio.RedisCluster does not suffer from the same problem, because it has its own connection pool handling mechanism and already allows for retries.
We should also consider some modification of RedisAsyncResultBackend and RedisAsyncClusterResultBackend. These classes don't accept any argument to limit the number of simultaneous connections.
Hi and thanks for finding it out. I guess the easiest option is to use a blocking pool without any way to change it. We can just add timeout parameter which configures when the exception is raised. By default we can set it to 1 to simulate the non-blocking pool implementation.
Sounds good! I'm not sure whether we have to add a timeout argument, since any unknown kwargs are passed to ConnectionPool already. But if you want to be more explicit, we can do that as well.
I'll get to it and we can tune the details in PR.
|
GITHUB_ARCHIVE
|
Ever since I published the first post on this blog back in October, Lernabit has been a lot of fun. People have told me how much they like the idea and many have provided awesome feedback and suggestions.
What most people don't know is how I have been building it up to this point. The first version of Lernabit was built on a netbook. An Asus Eee PC to be exact. Any web developer would agree that such a low quality computer makes modern web development very slow, and even painful. But I simply did not have a better computer or the money to get one. What I did have (and still do) is a passion and dedication to make education better, so I pushed forward using the tools available. Some things have now changed.
About a month ago, I won a HeroX prize for presenting my ideas for ways to improve financial education. It was a relatively small prize, but it was a big moment. The goal of the contest was to uncover the problems with financial education, but the ideas I presented can also apply to education in general. Winning this contest was validation of some of the fundamental ideas behind Lernabit. In addition, there was some prize money involved, which I used to buy a fantastic new computer.
All of this means that I now have the tools and funding to build the Lernabit I have wanted to create all along. As a result, I have decided that I will be relaunching Lernabit. So allow me to explain what will be happening and what will change.
Why is the relaunch necessary?
One question I have been asked is why I would relaunch the site instead of modifying what I already have. This was a tough call to make, and the rationale is mostly due to technical considerations.
Even before winning my prize, there were some design flaws in Lernabit that were beginning to surface. I'll probably write a second blog post explaining the more technical details for anyone who is interested, but to put it simply, some of the technology used in version 1 of the site does not work very well with where I want Lernabit to go. The feedback from people using the site has given me new ideas and insight into what Lernabit can become, and I began to realize that the technology being used wouldn't have been the best tool for the job.
After considering all of those facts, I decided it would be just as easy to rebuild the site completely rather than attempt to modify what I had. I also decided that the best thing to do from a business perspective would be to essentially take the site offline so I could focus on building the new site without worrying about maintaining the existing one.
What changes will there be?
As mentioned earlier, most of the changes will be technical improvements. But as long as I am rebuilding the site anyway, I will also use this as an opportunity to fix most of the problems people have mentioned, and a few I have noticed myself.
One improvement will be a much better login experience. The new site will remember your login status between visits rather than requiring you to login every time. Then you will only be prompted for a password when doing things like changing personal information.
You will also notice a cleaner and more modern design. Less text and more generous use of icons will make it easier to browse the page and find what you want. The new design will also make it easier to do things like interacting with the site without interrupting audio playback.
Another big change will involve the topics that are covered. Up to this point, Lernabit has been focused on science education. The new site will cover a wider range of topics.
Finally, the relaunch will include a native Android app. If my mission is to make education more accessible throughout your day, a better mobile experience is an absolute necessity. I haven't decided if the mobile app will be available immediately when the site relaunches or if it will be ready shortly afterward, but such an app is in progress. As for iOS, there is not a native iOS app planned right now, although I would like to eventually have one built. In the meantime, the new site will work better on mobile devices. While not as good as a native app, the site will still work like a charm on whatever mobile device you want to use.
Lernabit is not shutting down
I want to emphasize that Lernabit is not shutting down. In fact, it is quite the opposite. Not only is the site not shutting down, it will be coming back stronger and better than before. Going offline is just a way to make that happen faster by letting me focus on building the new site.
Also, before I wrap up this post, I want to thank everyone who has been using Lernabit. I appreciate every single person who has tried it out, and I love hearing the feedback people have been providing. I do listen to it, and it is very helpful. So thank you. With such an awesome group of people using the site, I know that the next generation of Lernabit will be a powerful force to improve education around the world.
|
OPCFW_CODE
|
What are some best practices when using jquery for Autocomplete with ashx?
After doing some research I saw some examples using jquery's .ajax function and webmethods for autocomplete with ashx in Sharepoint.
http://www.lifeonplanetgroove.com/blog/index.php/2010/10/15/adding-and-deploying-generic-handlers-ashx-to-a-sharepoint-2010-visual-studio-project/
http://encosia.com/3-mistakes-to-avoid-when-using-jquery-with-aspnet-ajax
http://weblogs.asp.net/scottgu/archive/2007/04/04/json-hijacking-and-how-asp-net-ajax-1-0-mitigates-these-attacks.aspx
I guess my questions boil down to:
Are there any security requirements in IIS when using ASHX?
I would assume it would still be as secure as the page that it is on, right?
Is there any performance gain/loss by going the webmethod route?
Any other suggestions are more than welcome
I can't say I tried exactly what you're doing, but...
Using ASHX is fine, though you'll probably want to deploy it to the LAYOUTS directory. That's also fine, but it rules out sandboxed or Office 365-compatible solutions. Yes, users making requests will still have the same authentication applied, so yes, it should be as secure (so long as you don't elevate their privileges or anything).
One suggestion I might offer - have you considered using the REST API? Or Lists.asmx? I know the return format might not be so pretty, but you wouldn't have to deploy anything into LAYOUTS then.
+1 I've gone this route before and it worked great.
I have done this in the past and had to do a few things:
Cache whenever possible. This is essential if you have a high volume site or if the list of options that can be in the autocomplete is large.
If you are filling your autocomplete list from SharePoint objects, use the PortalSiteMapProvider for all queries. It is lightning fast and handles caching automatically.
Insist on a minimum number of letters to be typed before autocomplete kicks in (usually 2 or 3 will work) this minimizes the amount of wasted data sent from the server
If possible, require a delay in the keystrokes entered by the user to be the 'trigger' for the autocomplete. This minimizes wasted requests but is not viable in all situations
If your autocomplete dataset is small, get the applicable results after the first 2-3 keystrokes and trim the autocomplete list via javascript rather than going back to the server for each successive letter typed.
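To tie several of these suggestions together, here is a minimal jQuery UI autocomplete sketch; the handler URL, element id, and the 'term' parameter are placeholders, not anything prescribed by SharePoint:

// Autocomplete against a generic .ashx handler, with a minimum input
// length and a keystroke delay to cut down on wasted requests.
$(function () {
    $('#searchBox').autocomplete({
        minLength: 2,  // wait for 2+ characters before querying
        delay: 300,    // wait 300 ms after the last keystroke
        source: function (request, response) {
            $.ajax({
                url: '/_layouts/MyHandler.ashx',
                data: { term: request.term },
                dataType: 'json',
                success: response  // expects a JSON array of strings
            });
        }
    });
});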
|
STACK_EXCHANGE
|
// Generated by SharpKit.QooxDoo.Generator
using System;
using System.Collections.Generic;
using SharpKit.Html;
using SharpKit.JavaScript;
namespace qx.bom
{
/// <summary>
/// <para>Includes library functions to work with browser windows</para>
/// </summary>
[JsType(JsMode.Prototype, Name = "qx.bom.Window", OmitOptionalParameters = true, Export = false)]
public partial class Window
{
#region Methods
public Window() { throw new NotImplementedException(); }
/// <summary>
/// <para>Closes the given window</para>
/// </summary>
/// <param name="win">Native window object</param>
/// <returns>The return value (if any) of the window’s native close method</returns>
[JsMethod(Name = "close")]
public static object Close(Window win) { throw new NotImplementedException(); }
/// <summary>
/// <para>If a modal window is opened with the option</para>
/// <code>
/// useNativeModalWindow = false;
/// </code>
/// <para>an instance of qx.bom.Blocker is used to fake modality. This method
/// can be used to get a reference to the blocker to style it.</para>
/// </summary>
/// <returns>Blocker instance or null if no blocker is used</returns>
[JsMethod(Name = "getBlocker")]
public static qx.bom.Blocker GetBlocker() { throw new NotImplementedException(); }
/// <summary>
/// <para>Checks if the window is closed</para>
/// </summary>
/// <param name="win">Native window object</param>
/// <returns>Closed state</returns>
[JsMethod(Name = "isClosed")]
public static bool IsClosed(Window win) { throw new NotImplementedException(); }
/// <summary>
/// <para>Moving an opened window is no longer allowed in most browsers.</para>
/// </summary>
/// <param name="win">Native window object</param>
/// <param name="top">Y-coordinate</param>
/// <param name="left">X-coordinate</param>
[JsMethod(Name = "moveTo")]
public static void MoveTo(Window win, double top, double left) { throw new NotImplementedException(); }
/// <summary>
/// <para>Opens a native window with the given options.</para>
/// <para>Modal windows can have the following options:</para>
/// <list type="bullet">
/// <item>top</item>
/// <item>left</item>
/// <item>width</item>
/// <item>height</item>
/// <item>scrollbars</item>
/// <item>resizable</item>
/// </list>
/// <para>Modeless windows have the following options:</para>
/// <list type="bullet">
/// <item>top</item>
/// <item>left</item>
/// <item>width</item>
/// <item>height</item>
/// <item>dependent</item>
/// <item>resizable</item>
/// <item>status</item>
/// <item>location</item>
/// <item>menubar</item>
/// <item>scrollbars</item>
/// <item>toolbar</item>
/// </list>
/// <para>Except for the dimension and location options, all other options are
/// boolean values.</para>
/// <para>Important info for native modal windows</para>
/// <para>If you want to reference the opened window from within the native modal
/// window you need to use</para>
/// <code>
/// var opener = window.dialogArguments[0];
/// </code>
/// <para>since a reference to the opener is passed automatically to the modal window.</para>
/// <para>Passing window arguments</para>
/// <para>This only works if the page of the modal window is from the same origin.
/// This is at least true for Firefox browsers.</para>
/// </summary>
/// <param name="url">URL of the window</param>
/// <param name="name">Name of the window</param>
/// <param name="options">Window options</param>
/// <param name="modal">Whether the window should be opened modal</param>
/// <param name="useNativeModalDialog">controls if modal windows are opened using the native method or a blocker should be used to fake modality. Default is true</param>
/// <param name="listener">listener function for onload event on the new window</param>
/// <param name="self">Reference to the ‘this’ variable inside the event listener. When not given, ‘this’ variable will be the new window</param>
/// <returns>native window object</returns>
[JsMethod(Name = "open")]
public static Window Open(string url, string name, object options, bool modal, bool useNativeModalDialog, Action<qx.eventx.type.Data> listener, object self) { throw new NotImplementedException(); }
/// <summary>
/// <para>Resizing an opened window is no longer allowed in most browsers.</para>
/// </summary>
/// <param name="win">Native window object</param>
/// <param name="width">New width</param>
/// <param name="height">New height</param>
[JsMethod(Name = "resizeTo")]
public static void ResizeTo(Window win, double width, double height) { throw new NotImplementedException(); }
#endregion Methods
}
}
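For orientation, a brief usage sketch of the binding above. The URL, window name, and option values are illustrative only; the anonymous-type options object assumes SharpKit's usual translation of such literals to JavaScript objects, and these stubs only have meaning once compiled to JavaScript (the C# bodies just throw).

// Hypothetical usage of qx.bom.Window from SharpKit-compiled C#.
var options = new { width = 400, height = 300, resizable = true };
var win = qx.bom.Window.Open(
    "dialog.html",  // url
    "myDialog",     // window name
    options,        // dimension/location plus boolean flags
    false,          // modal
    true,           // useNativeModalDialog (default per the docs above)
    null,           // no onload listener
    null);          // default 'this' inside the listener
if (!qx.bom.Window.IsClosed(win))
    qx.bom.Window.Close(win);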
|
STACK_EDU
|
# -*- coding: utf-8 -*-
"""
LineEdits
=========
"""
# %% IMPORTS
# Package imports
from qtpy import QtCore as QC, QtGui as QG, QtWidgets as QW
# GuiPy imports
from guipy import layouts as GL, widgets as GW
# All declaration
__all__ = ['DualLineEdit', 'FloatLineEdit', 'IntLineEdit']
# %% CLASS DEFINITIONS
# Make class with two line-edits
class DualLineEdit(GW.DualBaseBox):
"""
Defines the :class:`~DualLineEdit` class.
"""
# Signals
modified = QC.Signal([], [int, int], [int, float], [int, str],
[float, int], [float, float], [float, str],
[str, int], [str, float], [str, str])
# Initialize the DualLineEdit class
def __init__(self, types=(str, str), sep=None, parent=None):
"""
Initialize an instance of the :class:`~DualLineEdit` class.
Optional
--------
types : tuple of types ({int; float; str}). Default: (str, str)
A tuple containing the type of each line-edit.
sep : str or None. Default: None
The string that must be used as a separator between the two
line-edits. If *None*, no separator is used.
parent : :obj:`~PyQt5.QtWidgets.QWidget` object or None. Default: None
The parent widget to use for this dual line-edit or *None* for no
parent.
"""
# Call super constructor
super().__init__(parent)
# Create the dual line-edit
self.init(types, sep)
# This property returns the default 'modified' signal
@property
def default_modified_signal(self):
return(self.modified.__getitem__(self.types))
# This function creates the dual line-edit
def init(self, types, sep):
"""
Sets up the dual line-edit after it has been initialized.
"""
# Make dict with different line-edits
box_types = {
float: FloatLineEdit,
int: IntLineEdit,
str: GW.QLineEdit}
# Save provided types
self.types = types
# Create the box_layout
box_layout = GL.QHBoxLayout(self)
box_layout.setContentsMargins(0, 0, 0, 0)
# Create two line-edits with the provided types
# LEFT
left_box = box_types[types[0]]()
box_layout.addWidget(left_box)
self.left_box = left_box
# RIGHT
right_box = box_types[types[1]]()
box_layout.addWidget(right_box)
self.right_box = right_box
# If sep is not None, create label and add it
if sep is not None:
sep_label = GW.QLabel(sep)
sep_label.setSizePolicy(QW.QSizePolicy.Fixed, QW.QSizePolicy.Fixed)
box_layout.insertWidget(1, sep_label)
# This function is automatically called whenever 'modified' is emitted
@QC.Slot()
def modified_signal_slot(self):
# Emit modified signal with proper types
self.modified[self.types[0], self.types[1]].emit(
*DualLineEdit.get_box_value(self))
# Make class for setting a number in a line-edit
class IntLineEdit(GW.QLineEdit):
"""
Defines the :class:`~IntLineEdit` class.
This class is used for creating a lineedit object that solely accepts
integers.
"""
# Signals
modified = QC.Signal([float], [int])
# Initialize the IntLineEdit class
def __init__(self, parent=None):
"""
Initialize an instance of the :class:`~IntLineEdit` class.
Optional
--------
parent : :obj:`~PyQt5.QtWidgets.QWidget` object or None. Default: None
The parent widget to use for this number line-edit box or *None*
for no parent.
"""
# Call super constructor
super().__init__(parent)
# Create the number line-edit box
self.init()
# This property returns the default 'modified' signal
@property
def default_modified_signal(self):
return(self.modified.__getitem__(self.numtype))
# This property returns the number type of this box
@property
def numtype(self):
return(int)
# This property returns the number getter of this box
@property
def num_getter(self):
return(self.locale().toInt)
# This property returns the proper validator to use
def get_validator(self):
return(QG.QIntValidator)
# This function creates the number line-edit box
def init(self):
"""
Sets up the number line-edit box after it has been initialized.
"""
# Obtain the proper validator
validator = self.get_validator()(self)
# Set the validator
self.setValidator(validator)
# Set initial value
self.value = 0
# Override focusInEvent to format text when it is triggered
def focusInEvent(self, event):
# Obtain a normal string version of the current number
num = str(self.value)
num = num.replace('.', self.locale().decimalPoint())
# Set this as the current text
self.setText(num)
# Call and return super method
return(super().focusInEvent(event))
# Override focusOutEvent to format text when it is triggered
def focusOutEvent(self, event):
# Set current number in its formatted version
self.set_box_value(self.num_getter(self.text())[0])
# Call and return super method
return(super().focusOutEvent(event))
# This function calls the validator's setRange
def setRange(self, bottom, top):
self.validator().setRange(bottom, top)
# This function calls the validator's setBottom
def setBottom(self, bottom):
self.validator().setBottom(bottom)
# This function calls the validator's setTop
def setTop(self, top):
self.validator().setTop(top)
# This function retrieves a value of this special box
def get_box_value(self, *value_sig):
"""
Returns the current number value of this line-edit box.
Returns
-------
value : int, float
The value contained in this line-edit box.
"""
return(self.value)
# This function sets the value of this special box
def set_box_value(self, value, *value_sig):
"""
Sets the current number value of this line-edit box to `value`.
Parameters
----------
value : int, float
A value that must be set for this line-edit box.
"""
# Save the current value
cur_value = self.value
# Save value
self.value = value
self.setText(self.locale().toString(value))
# Emit modified signal if value was changed
if(cur_value != self.value):
self.default_modified_signal.emit(value)
# Make class for setting a number in a line-edit
class FloatLineEdit(IntLineEdit):
"""
Defines the :class:`~FloatLineEdit` class.
This class is used for creating a lineedit object that solely accepts
floats.
"""
# This property returns the number type of this box
@property
def numtype(self):
return(float)
# This property returns the number getter of this box
@property
def num_getter(self):
return(self.locale().toDouble)
# This property returns the proper validator to use
def get_validator(self):
return(QG.QDoubleValidator)
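As a usage illustration (not part of the module), here is a minimal sketch assuming a running Qt application and that the classes above are importable; the widget values and the connected slot are made up, and the 'modified' emission relies on the signal wiring provided by the DualBaseBox base class:

# Minimal usage sketch for DualLineEdit.
import sys
from qtpy import QtWidgets as QW

app = QW.QApplication(sys.argv)

# A pair of line-edits accepting an int and a float, separated by 'x'.
dual = DualLineEdit(types=(int, float), sep='x')
dual.modified[int, float].connect(
    lambda w, h: print('new value:', w, h))
dual.show()

sys.exit(app.exec_())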
|
STACK_EDU
|
Unfortunately, much of our universe appears to be quantized. One could imagine a system wherein, instead of rolling dice, you precisely measured the net magnetic spin of some ideal gas of non-interacting magnetic moments, each of either spin up (1) or spin down (-1). Theoretically such a property, when measured many times, should form a Gaussian distribution (a bell curve) with an exceedingly small standard deviation. Unfortunately, this is not quite right. In actuality only certain discrete values of net spin are possible, based on the number of moments in the system. For example, if there is an odd number of moments, it is impossible for the net spin to be exactly 0. Also, obviously, it is impossible for the net spin to ever be any number that is not an integer or to be larger than the total number of particles. All of these concerns have a negligible impact on the behavior of large collections of particles, but they mean that, at a fundamental level, you don't have a continuous distribution.
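By way of illustration (my own sketch, not part of the original answer), a short simulation shows the discrete, parity-constrained support of the net spin:

# Net spin of N = 101 moments: values are confined to -N, -N+2, ..., N,
# so with an odd N a net spin of exactly 0 can never occur, even though
# the histogram looks Gaussian for large N.
import random
from collections import Counter

N = 101
trials = 100_000

counts = Counter(
    sum(random.choice((-1, 1)) for _ in range(N))
    for _ in range(trials)
)

print(0 in counts)               # False: parity forbids net spin 0 for odd N
print(min(counts), max(counts))  # stays within [-N, N]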
The example deals with a very idealized situation, but really a great many physical quantities have been shown to be (very probably) quantized by modern physics. The question, then, requires us to find something that isn't.
Oddly, position is currently not believed to be precisely quantized, though this is disputed by some physicists. In the standard model of quantum mechanics, when one puts a single electron in the ground state of an infinite square well, a truly continuous probability distribution (the squared magnitude of the wavefunction obtained from the Schrödinger equation) describes the likelihood thereafter of finding the electron at any given point in space within the box. Thus, with some sophisticated and expensive experimental physics equipment, one could actually be measuring a continuous probability distribution. But here we encounter the second, and more fundamental, issue with your plan:
The Arabic numeral system is inherently discrete. Most measuring devices give their outputs in numbers (citation needed). Let's take the number 5.5563, for example. If this is the number our instrument reports, the next possible number is 5.5564. The instrument can't report 5.55631, or any other number between the two possible results. In order to have a continuous distribution we would need an infinite degree of precision and an infinite number of digits. This is a problem.
Clearly the solution is to use a measuring device that doesn't report numbers. Unfortunately, unless its measurement is based off of gravitational interaction, the act of measurement itself now creates problems for us. All force interactions other than gravity possess a good amount of evidence supporting the existence of a mediating particle. Such mediating particles result in quantization of the interaction, so that only interactions involving an integer number of such mediating particles (quanta) are possible. While people like to speculate about gravitons, there really isn't much evidence yet for their existence and a fair bit of evidence to the contrary, so you're actually on pretty solid ground if the measurement device is solely using gravity to measure the position of the electron in the box and reporting that position by means of deflection in its own position (the uncertainty in the latter is of no concern to us; indeed it is helpful to you).
So now we have a continuous probability distribution measured and reported in a continuous manner. So we're good right? Not quite.
Gravity is very weak, and electrons are very small. Unfortunately, large (i.e. massive) objects have an uncertainty in position that is astronomically small. So yes, this works, but you can't see it, which kinda defeats the purpose of a measuring device in the first place. So, for all intents and purposes, it is not currently technologically feasible to generate a random number from a truly continuous probability distribution. In order to do so we'd need something like a macroscopic atomic (i.e. indivisible) object with the mass of an electron, or some such.
And even then quantization would interfere if you were telling where it was by looking at it, feeling it, smelling it, or otherwise measuring it in a physical way via biology.
So, you're pretty much screwed.
But wait! There's hope! The human mind is a powerful thing, and evidence suggests it can create and emulate a continuous probability distribution. You can experience the spiritual and mental directly, rather than indirectly as with the physical, and you can control directly the physical and mental as you cannot the spiritual. So then, if you were to conceptualize a continuous probability distribution and, by virtue of the free will granted to you by God, you selected a truly random point in the distribution, you could have a number of the data type you need to make this work. Do any currently existing systems not only have you do that but also ask you to do math on it? No, not really.
There are systems that ask you to do this, though. In Amber Diceless, the GM is supposed to (if I understand it correctly) come up with exactly what is retrieved by quick Logrus summoning via a random result with a normal distribution centered about the desired item and a standard deviation directly proportional to the reciprocal of the time used to search for the thing. This is the only example I am aware of.
|
OPCFW_CODE
|
Bulk API
This article says "The Salesforce connector is built on top of the Salesforce REST/Bulk API." but this seems ambiguous. Is it on the Bulk API or is it on the standard REST API?
If it is the Bulk API, is it built supporting PK Chunking as per these articles?
https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_bulk_query_processing.htm
https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_code_curl_walkthrough_pk_chunking.htm
I would expect that it should use PK chunking as this would be more reliable with larger sets of data
If it is, is this an option that we can specify somehow or is it a feature that I should request to the product team? Thanks
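For readers who end up calling the Bulk API directly, the PK chunking walkthrough linked above enables the feature with a single request header on job creation; the chunk size shown here is just Salesforce's documented default, included as an example:

Sforce-Enable-PKChunking: chunkSize=100000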
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: df56c0c4-ac20-76e5-3f99-7e5ed9c2d726
Version Independent ID: 8b224227-1cf7-6fa7-4bee-24d243509e45
Content: Copy data from and to Salesforce - Azure Data Factory
Content Source: articles/data-factory/connector-salesforce.md
Service: data-factory
GitHub Login: @linda33wj
Microsoft Alias: jingwang
Thank you for your detailed feedback. We are looking into the issue, and we will respond when we have more information.
@o-o00o-o Just confirmed with the ADF team that "The Salesforce connector is built on top of the Salesforce REST/Bulk API." means the connector is built on top of both the REST and Bulk APIs. The connector automatically chooses one for better performance. PK Chunking is used when the connector chooses to go with the Bulk API.
Hope this info helps.
Great thanks for this, this is great news. This clarification should be added to the article. It would also be good to understand both how the connector decides this as well as how we can tell which one was used for each execution.
The PR for additional clarification has been created and it will be merged soon. Thanks again for pointing this out, I am closing the issue based on this update, please let us know if there is something else that we can help you with.
|
GITHUB_ARCHIVE
|
Senior Ruby Backend Engineer
Aklamio is looking for a Senior Ruby Backend Engineer who will help us develop our products around our SaaS referral marketing solutions. As you work with our scrum team, you’ll be writing high-quality code and making sure we follow best-practice patterns. Be part of a team that will have a significant impact on our product!
Who we are
At Aklamio, we believe that customers, their networks, the power of word-of-mouth and customer insights are a company’s best assets. With our Customer Incentives Platform, we help enterprises all around the world grow by acquiring new customers, and by rewarding loyal customers and converting them into brand advocates.
Founded in 2011, Aklamio has become Europe’s most successful incentives marketing platform with offices in Berlin, London, and Madrid. Aklamio was ranked 2nd fastest growing technology startup in Germany (Financial Times, Europe’s 1000 Fastest Growing Companies) with minimal external funding.
What you’ll do
- As part of our international development team, you will further develop and improve all our products, services and sites around our SaaS referral marketing solutions
- In collaboration with the frontend team, you will implement the APIs needed for laying the foundation for great products
- Extend and maintain our current infrastructure (APIs and MVC) and ensure it follows best-practice patterns, with good performance and security measures as well as good test coverage
- Working closely with the whole development team, you will contribute to designing and extending the architecture of our system
- You work in a lean and agile manner (Scrum)
What you’ll bring with you
- You proudly have 3+ years of proven experience in Ruby and technologies like Rails, Sinatra and Rack
- Good understanding of web technologies in general and know how to build secure and robust applications
- You know how to avoid the pitfalls of Caching and Cache invalidation
- Testing with RSpec, Capybara or similar frameworks is the way you prove that your code works as specified
- You have sound knowledge in working with production-grade databases (MySQL, Redis)
- Enjoy building reliable and performant APIs based on REST or GraphQL
- Ensuring clean code style and high quality with static code analyzers like Rubocop, Brakeman or similar is obvious to you
- Previous experience with Git and version control management software like Gitlab or Github is expected
Why join us
Become part of an excellent team in one of Europe’s fastest-growing and most exciting startups. We offer many possibilities for development in an international environment with flat hierarchies and fast decision making. Innovation, creativity and ownership are our top priorities! At Aklamio we are committed to providing a mutually respectful work environment. We believe diversity, team building and inclusion among our teammates are essential to our success.
- Challenging team and career growth opportunities
- Live and work where you want: home office is the new normal for us. Live anywhere in Germany, Spain, or the United Kingdom and work from the comfort of your home, favourite coffee shop or wherever you please. Or feel free to make use of our stylish and comfortable headquarters in Berlin; we'll save you a seat!
- Make Berlin your home: we offer relocation support and assistance.
- Benefits that fit your life. Use our benefits marketplace to choose from shopping, entertainment, electronics, music, games, fitness, transportation, and more!
- Generous Company Pension Scheme.
- Personal and Career Development: Yearly budget for training so you can expand your skills professionally or linguistically!
- Balance: 30 days of vacation
- Belonging: enjoy monthly lunches, virtual and in-person team events. Team building is fundamental for us.
|
OPCFW_CODE
|
One of the great things about Linux is that you have a range of Desktop Environments to choose from. There is the new age Gnome 3, the unique KDE, the more traditional Xfce, the basics provided by any of the *box options, and many others. Over the years I have been using Linux I've tried several of them, including Gnome 2 and Enlightenment.
Regular readers of this blog will know I am a fan of KDE. I run the latest 4.7.4 on Kororaa on a couple of machines. What many people may not know is that I also have Xfce 4.8 installed on both of those systems. These are 2 very different environments but I find both work well for me.
This is my Xfce desktop (larger size click here) and, despite its performance advantages, it clearly can be a good-looking desktop.
So which one do I prefer? If you asked I would answer KDE without hesitation. It is easier to configure, has more powerful options and some fancy effects. However if you asked which one I use the most, I would have to say it differs from time to time, but lately it has been Xfce. Why? Well it does have a performance advantage, especially on my older laptop. It also feels more stable. For all KDE's attractions it does have the occasional glitch, rarely the same one though. It locks up occasionally, but there is no pattern to it, so nothing I can report a bug on.
Xfce doesn’t lose much in appearance. It does have a little inconsistency on some screens. It is a little more difficult to configure, more editing of text files. E.g. there is no menu editor. But it has improved a lot over the last few versions. And it is very stable, probably an advantage of the slower update timetable.
I still use the same applications, mainly KDE, on both desktops and KDE apps run well perhaps even better than on KDE. It doesn’t affect my workflow at all as I have similar keyboard shortcuts set up.
Occasionally I try other environments but I always come back to these 2.
8 thoughts on “Choose Your Desktop”
That’s a nice XFCE desktop you have there. I just started my adventure with this desktop environment after a not so nice experience with KDE. Can you tell me what’s that widget bar you have on the right?
That’s Conky on the right. It is also Conky on the lower left with the Amarok ‘what’s playing’. If you are interested I’ll post my .conkyrc.
I am interested – it looks nice, and it’s always good to have a working example. Thanks!
I’ve added a new thread on Conky. Hope it helps. Thanks for your interest.
It is a nice picture
Thanks. It’s one of mine. It is available as a wallpaper for Kororaa, on the Kororaa forum, along with several of my photos.
It is part of a slideshow I have as my wallpaper. Not a standard feature of Xfce but it can be done.
You have done a good job there Jim. I was moving around a lot the last couple of years and bought a netbook as it was easy to cart around and did most of what I wanted, including graphics, which slowed it down a lot sometimes. I have always been a KDE fan because of the sorts of reasons you list above, but had to try something lighter than KDE 4 as it nearly stalled the netbook.
I tried a lot of desktops, including XFCE, and settled on Gnome and LXDE as the two favourites, until Gnome 3 came along and turned Gnome into a beginners' platform with bugs, and not that great for productivity. Personally I don't feel that offerings like Gnome 3 and the early KDE 4 should be the default desktop for a distro until they are less buggy.
The previous version should be the default so that beginners have a mature and stable desktop environment, whilst having the new version in the main repos for those who want to try it and help with bug reporting. This is not a role for people who are new to Linux, and nothing will drive them back to their commercial operating system quicker than the sorts of issues that the new KDE and Gnome have dished out. KDE is great now, a couple of years after its release, and so will Gnome be. I have had a look at the new Ubuntu desktop too, and that is a bit scary!
I still use LXDE a lot as I found it just as good as XFCE, but lighter on resources and downloads. I had to use mobile broadband for those two years and really watch the data I used, and the heavier desktops can cause some large updates.
I am using a grunty desktop machine now and the netbook has been retired until the next time it is needed, but I still have LXDE loaded as an option with KDE, and I will not look at Gnome until it is easier to use with a mouse; sure, you can alt/tab and alt/F2, etc., but a newcomer would not know that.
I’ve never used LXDE though I’ve heard a lot of good about it. Must give it try one day.
Just because it is a light desktop doesn’t mean it can only be used on a lower spec machine. I know many people run Xfce and other light environments on up to date hardware and enjoy the performance, especially if it is used for resource intensive things like video editing.
|
OPCFW_CODE
|
Browse by Tags
Quick news: Kinect for Windows SDK 1.7 available for download
Hello. The Kinect for Windows software development kit (SDK) enables developers to use C++, C#, or Visual Basic to create applications that support gesture and voice recognition by using the Kinect for Windows sensor and a computer or embedded device. The new SDK was released today and is available here...
18 Mar 2013
New book: Programming with the Kinect for Windows Software Development Kit
We’re happy to announce that Programming with the Kinect for Windows Software Development Kit (ISBN 9780735666818) is now available for purchase! In this book, David Catuhe, a developer evangelist for Microsoft, provides valuable guidance on how to create applications with Kinect for Windows. Learn...
27 Sep 2012
RTM’d today: Programming with the Kinect for Windows Software Development Kit
We’re happy to announce that Programming with the Kinect for Windows Software Development Kit (ISBN 9780735666818) has shipped to the printer! If you are a developer who wants to learn how to add gesture and posture recognition to your applications, check out David Catuhe’s book. This guide...
11 Sep 2012
Author news: David Catuhe releases Kinect Toolbox 1.2
David Catuhe, who is writing Programming with the Kinect for Windows Software Development Kit for Microsoft Press (and which we’ll publish in September), has just released Kinect Toolbox 1.2 (1108K) via CodePlex . This is the sixth version of the toolkit David has released since July 2011. Here are more...
31 Jul 2012
From the MVPs: Canuck Kinect conundrum
Here’s the 13th post in our series of guest posts by Microsoft Most Valued Professionals (MVPs). Since the early 1990s, Microsoft has recognized technology champions around the world with the MVP Award . MVPs freely share their deep knowledge, real-world experience, and impartial and objective...
30 Jul 2012
From the MVPs: Unleash the power of Kinect and your imagination
Here’s the 12th post in our series of guest posts by Microsoft Most Valued Professionals (MVPs). Since the early 1990s, Microsoft has recognized technology champions around the world with the MVP Award . MVPs freely share their deep knowledge, real-world experience, and impartial and objective...
27 Jul 2012
New book: Start Here! Learn the Kinect API
We’re happy to announce the availability of the newest book in the Microsoft Press Start Here! series: Start Here! Learn the Kinect API , by Rob S. Miles. Microsoft’s Kinect motion-sensing device, originally intended as a game-playing peripheral for the Xbox, not only became the fastest-selling...
13 Jul 2012
From the MVPs: How Kinect changed my living room
Here’s the 11th post in our series of guest posts by Microsoft Most Valued Professionals (MVPs). Since the early 1990s, Microsoft has recognized technology champions around the world with the MVP Award . MVPs freely share their deep knowledge, real-world experience, and impartial and objective feedback...
12 Jul 2012
Quick news: Apply for the Kinect Accelerator program
Check this out: If you are a developer or existing team/startup focused on building a business that takes advantage of the Kinect and Natural User Interface technologies, then the Kinect Accelerator is where you need to be. Through this program, Microsoft is supporting entrepreneurs, engineers and innovators...
21 Nov 2011
Author news: A call for Kinect authors
Greetings! We're looking for authors who are interested in putting together a book to educate developers on both the range of current Kinect development projects and to provide them with instructions and code on how to build such projects. If you have experience with the Microsoft Kinect API and are...
12 Aug 2011
Coming soon: Kinect for Windows SDK beta
Greetings! In case you missed it, in a press release from MIX11 yesterday , the following news was shared: Today at MIX, Microsoft detailed some of the features in the Kinect for Windows Beta SDK from Microsoft Research coming in the spring, including the following: Robust Skeletal Tracking...
14 Apr 2011
|
OPCFW_CODE
|
Chapter 21. Desktop and graphics
21.1. GNOME Shell is the default desktop environment
RHEL 8 is distributed with GNOME Shell as the default desktop environment.
All packages related to KDE Plasma Workspaces (KDE) have been removed, and it is no longer possible to use KDE as an alternative to the default GNOME desktop environment.
Red Hat does not support migration from RHEL 7 with KDE to RHEL 8 GNOME. Users of RHEL 7 with KDE are recommended to back up their data and install RHEL 8 with GNOME Shell.
21.2. Notable changes in GNOME Shell
RHEL 8 is distributed with GNOME Shell, version 3.28. This section:
- Highlights enhancements related to GNOME Shell, version 3.28.
- Informs about the change in the default combination of GNOME Shell environment and display protocol.
- Explains how to access features that are not available by default.
- Explains changes in GNOME tools for software management.
21.2.1. GNOME Shell, version 3.28 in RHEL 8
GNOME Shell, version 3.28 is available in RHEL 8. Notable enhancements include:
- New GNOME Boxes features
- New on-screen keyboard
- Extended devices support, most significantly integration for the Thunderbolt 3 interface
- Improvements for GNOME Software, dconf-editor and GNOME Terminal
21.2.2. GNOME Shell environments
GNOME 3 provides two essential environments:
- GNOME Standard
- GNOME Classic
Both environments can use two different protocols to build a graphical user interface:
- The X11 protocol, which uses X.Org as the display server.
- The Wayland protocol, which uses GNOME Shell as the Wayland compositor and display server. This display-server solution is further referred to as GNOME Shell on Wayland.
The default combination in RHEL 8 is the GNOME Standard environment using GNOME Shell on Wayland as the display server.
However, you may want to switch to another combination of GNOME Shell environment and graphics protocol stack. For more information, see Section 21.3, “Selecting GNOME environment and display protocol”.
- For more information about basics of using both GNOME Shell environments, see Overview of GNOME environments.
21.2.3. Desktop icons
In RHEL 8, the Desktop icons functionality is no longer provided by the Nautilus file manager, but by the desktop icons gnome-shell extension. To be able to use the extension, you must install the gnome-shell-extension-desktop-icons package, which is available in the Appstream repository.
- For more information about Desktop icons in RHEL 8, see Managing desktop icons.
21.2.4. Fractional scaling
On a GNOME Shell on Wayland session, the fractional scaling feature is available. The feature makes it possible to scale the GUI by fractions, which improves the appearance of scaled GUI on certain displays.
Note that the feature is currently considered experimental and is, therefore, disabled by default.
To enable fractional scaling, run the following command:
# gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"
21.2.5. GNOME Software for package management
The gnome-packagekit package, which provided a collection of tools for package management in the graphical environment on RHEL 7, is no longer available.
On RHEL 8, similar functionality is provided by the GNOME Software utility, which enables you to install and update applications and gnome-shell extensions. GNOME Software is distributed in the Appstream repository.
- For more information for installing applications with GNOME software, see Installing applications in GNOME.
21.2.6. Opening graphical applications with sudo
When attempting to open a graphical application in a terminal using the sudo command, you must do the following:
If the application uses the X11 display protocol, add the local user root to the X server access control list. As a result, root is allowed to connect to Xwayland, which translates the X11 protocol into the Wayland protocol and back.
Example 21.1. Adding root to the X server access control list to open xclock with sudo
$ xhost +si:localuser:root
$ sudo xclock
If the application is Wayland native, include the -E option, which preserves the user environment, as shown in Example 21.2.
Example 21.2. Opening GNOME Calculator with sudo
$ sudo -E gnome-calculator
Otherwise, if you type just sudo and the name of the application, the attempt to open the application fails with an error message similar to the following:
No protocol specified
Unable to init server: could not connect: connection refused
Failed to parse arguments: Cannot open display
21.3. Selecting GNOME environment and display protocol
For switching between various combinations of GNOME environment and graphics protocol stacks, use the following procedure.
From the login screen (GDM), click the gear button next to the Sign In button.
Note: You cannot access this option from the lock screen. The login screen appears when you first start RHEL 8 or when you log out of your current session.
From the drop-down menu that appears, select the option that you prefer.
Note: In the menu that appears on the login screen, the X.Org display server is marked as X11 display server.
The change of GNOME environment and graphics protocol stack resulting from the above procedure is persistent across user logouts, and also when powering off or rebooting the computer.
|
OPCFW_CODE
|
In this comic, Rubaiat gets into his motivation and process for creating such tools and offers inspiration for all researchers, UX designers and software developers. Beyond that, it's a great read and you can have a taste of it below!
The CHI conference, which showcases the very best advances in computer science, cognitive psychology, design, social science, human factors, artificial intelligence, graphics, visualization, multimedia design and more, is approaching, with Autodesk participating both as a proud sponsor and presenter. The theme for CHI 2015 is "Crossings": crossing borders, crossing boundaries, crossing disciplines, crossing people and technology, crossing past and future, crossing physical and digital, crossing art and science, … crossing you and me.
This year Autodesk Research has three papers receiving Honorable Mentions (the top 5% of all submissions):
Fraser Anderson, Tovi Grossman, Daniel Wigdor (Department of Computer Science, University of Toronto) and George Fitzmaurice look at ways to conceal your usage of mobile devices and stay connected without offending your co-workers.
There has been a longstanding concern within HCI that even though we are accumulating great innovations in the field, we rarely see these innovations develop into products. Our panel brings together HCI researchers from academia and industry who have been directly involved in technology transfer of one or more HCI innovations. They will share their experiences around what it takes to transition an HCI innovation from the lab to the market, including issues around time commitment, funding, resources, and business expertise. More importantly, our panelists will discuss and debate the tensions that we (researchers) face in choosing design and evaluation methods that help us make an HCI research contribution versus what actually matters when we go to market.
Parmit K Chilana, Management Sciences, University of Waterloo, Waterloo, Canada
Mary P Czerwinski, Microsoft Research, Redmond, United States
Tovi Grossman, Autodesk Research, Toronto, Canada
Chris Harrison, Human-Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, United States
Ranjitha Kumar, Computer Science, University of Illinois at Urbana-Champaign, Champaign, United States
Patrick Baudisch, Hasso Plattner Institute, Potsdam, Germany
Shumin Zhai, Research @ Google, Mountain View, United States
Digital 210 King is part of Project Dasher, a research project designed to study buildings as living organisms. The Autodesk Toronto office was laser scanned, creating a point cloud that could be used to create a Building Information Model (BIM).
To support the community in studying how buildings operate, the Autodesk Research team has provided the dataset for the building. Kai Kostack has worked with the point cloud data to create a very artistic look at the building, which you can watch below.
Multi-touch tabletop computers are useful tools, and the User Interface group at Autodesk Research has explored ideas on ways to make them even better with a system called Medusa. Imagine a world where the tabletop can recognize multiple users, differentiate between right and left hands, and support non-touch, virtual-reality-type gestures like in the sci-fi movie Minority Report.
It all starts by hacking a Microsoft Surface with 138 proximity sensors and Phidget Interface Kits. These sensors extend the touch capabilities of the computer surface to determine user proximity and the location of their hands. The sensors are not only inexpensive but they remove complications like setting up cameras or requiring users to wear gloves or tracking markers. In a future incarnation, these sensors could be built into the table for a better aesthetic and to prevent the users from needing to worry about them.
Medusa's sensors are arranged in three rings. An outward-facing ring of 34 sensors is mounted beneath the lip. Two upward-facing rings atop the table are made up of 46 sensors on the outer ring and 58 sensors on the inner ring.
All of this adds up to allow Medusa to support the following user interactions:
User Position Tracking
Independent Left and Right Hand Tracking
Hand Gestures (Pre-Touch Functionality)
Touch + Depth Gestures
This was tested with a prototype UI creation application called Proxi-Sketch. Proxi-Sketch allows users to collaboratively develop new graphical user interfaces. You can see it all in action in the following video. If you want to know more about building the system or how parts of it worked, please refer to the Medusa publication.
|
OPCFW_CODE
|
More precision of my philosophy about safety-critical systems and C++ a
From Amine Moulay Ramdane@21:1/5 to All on Thu Nov 11 13:00:39 2021
More precision of my philosophy about safety-critical systems and C++ and Rust programming languages..
I am a white arab from Morocco, and i think i am smart since i have also invented many scalable algorithms and algorithms..
Here is more proof that Rust has to be used in safety-critical systems:
"Safe Rust guarantees an absence of data races, which are defined as: two or more threads concurrently accessing a location of memory."
So i think that detecting races in ways other than Rust's is NP-hard, and such detectors report false alarms in that many of the reported race conditions are not real ones, so C++ should not be used in safety-critical systems.
And read my previous thoughts:
I think i am smart, and i will say that C++ should not be used in safety-critical systems, and we have to use Rust in safety-critical systems because it is suited for them. Safety-critical systems are those systems whose failure could result in loss of life, significant property damage, or damage to the environment. Here is why:
Difficulties in race Detection
"When using multiple semaphores in static race detection is NP-hard
, which means it is hard to find an efficient solution. If the synchronization mechanism is weaker than semaphores, an exact
and efficient algorithm does exist . Otherwise, only heuristic
algorithms are available [3, 5]. Because heuristic algorithms will
only report potential race conditions, which means there may be
false alarm in that many of reporting race conditions are not real
ones. However, since detecting race condition is NP-hard
problem, one will never know which of them are real race
conditions. And, this is the reason why it is difficult to use a tool
to find race conditions accurately."
Read more in the following paper to see this:
- C++ is a highly complex language. Learning and fully understanding C++ requires a huge learning effort. C++ code does not always do what one would 'intuitively' expect from looking at the source code. At the same time, the high complexity of C++ increases the probability of compiler errors. C++ compiler writers are only humans, after all.
- While being a strongly-typed language, C++ leaves too many holes to circumvent the type system, deliberately or unintentionally.
More of my philosophy about C++ and Rust and Microsoft and safety-critical systems..
I invite you to read the following from Microsoft about Rust programming language:
Microsoft: Rust Is the Industry’s ‘Best Chance’ at Safe Systems Programming
I think that the above article is not correct, since i think that
Rust is suited for safety-critical systems, so i think Rust is better
than C++ in the safety-critical systems, but i think that C++ will
still be useful with AddressSanitizer and ThreadSanitizer,
and read my below thoughts since i have just added something in them:
More of my philosophy about memory safety and inheritance in programming languages..
"Address sanitization is not a security feature, nor does it provide memory-safety: it's a debugging tool. Programmers already have tools to detect that the code they've written has memory problems, such as use-after-free or memory leaks. Valgrind is
probably the best-known example. This gcc feature provides (some of) the same functionality: the only new thing is that it's integrated with the compiler, so it's easier to use.
You wouldn't have this feature turned on in production: it's for debugging only. You compile your tests with this flag, and automatically they detect memory errors that are triggered by the test. If your tests aren't sufficient to trigger the problem,
then you still have the problem, and it'll still cause the same security flaws in production.
Rust's ownership model prevents these defects by making programs that contain such defects invalid: the compiler will not compile them. You don't have to worry about your tests not triggering the problem, because if the code compiles, there cannot be a
The two features are for different sets of problems. One feature of address sanitization is to detect memory leaks (allocating memory and neglecting to free it later). Rust makes it harder to write memory leaks than in C or C++, but it's still possible (
if you have circular references). Rust's ownership model prevents data races in sequential and multi-threaded situations (see below). Address sanitization doesn't aim to detect either of those cases. But you can use ThreadSanatizer"
And using just plain C#, it has better memory protection, since the GC and runtime make it impossible to leak, double-free, or access out-of-bounds. C# has unsafe blocks just like Rust does. Safe Rust is just as safe from memory-safety problems as safe C#.
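As a minimal illustration of that ownership guarantee (my own sketch, not from the quoted text), safe Rust rejects a would-be dangling reference at compile time:

// This program does not compile: `x` is dropped at the end of the inner
// block, so keeping a reference to it would be a use-after-free.
fn main() {
    let r;
    {
        let x = 5;
        r = &x; // error[E0597]: `x` does not live long enough
    }
    println!("{}", r); // `r` would dangle here, so rustc refuses
}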
I think that a programming language has to provide "inheritance",
and the new Rust programming language doesn't provide it and i think that it is a deficiency in Rust, here is why:
As a software developer you have to become more efficient and productive, so you need to make sure the code you write is easily reusable and maintainable. And, among other things, this is what inheritance gives you: the ability to reuse without reinventing the wheel, as well as the ability to easily maintain your base object without having to perform maintenance on all similar objects.
|
OPCFW_CODE
|
Setup time: 2 minutes
Playing time: 30-90 minutes
Forchess is a four-player chess variant developed by T. K. Rogers, an American engineer. It uses one standard chessboard and two sets of standard pieces.
Forchess was developed around the year 1975. Its inventor T. K. Rogers wanted to create a pure strategy game with the social dynamic of card games like Bridge. Rogers believed in the educational merits of chess and felt that making the game a more popular social activity would benefit society.
Rogers wanted the game to use only standard pieces and a standard board so that everything necessary to play would be readily available. He also did not want to severely limit the number of pieces each player had.
In 1992, Rogers published the instruction set as a 64-page booklet Forchess: The Ultimate Social Game, designed to fit in a shirt pocket. The booklet also contained strategies for playing the game and a new technique invented by Rogers for analyzing both chess and Forchess games. He called it influence indicator.
In 1996, Rogers posted a free instruction set on the then newly founded Intuitor website. He simultaneously began distributing thousands of free instruction brochures to schools and colleges.
The game is played by four people in teams of two. At the outset, each player controls an entire quadrant of the board with a full set of chess pieces (minus one pawn). Partners occupy quadrants diagonally across from each other. The diagram at right shows the initial layout of the Forchess board (K=King, Q=Queen, R=Rook, B=Bishop, N=Knight, and P=Pawn). Four squares are initially unoccupied.
All the pieces move and capture in the same manner as conventional chess, except the pawn, which moves diagonally and captures laterally. A pawn may not move two squares at a time, and there is no en passant capture. There are no checkmates and no stalemates: kings are captured like all other pieces. When a player is in check and has no legal moves to escape check, he may make a "token move" every turn until his king is actually captured. When a player loses his king, his remaining pieces subsequently become the captor's. The game ends when one team has lost both kings or chooses to concede.
Partners typically coordinate their moves as part of a single strategy. Thus, communication of that strategy becomes a requirement of the game. Clandestine forms of communication such as code words, furtive gestures, or secret notes are not allowed, except in special variants. All strategizing between partners must be done openly in front of their opponents. This rule lends Forchess much of its social character.
Forchess has a variant called Cutthroat, in which there are no partners and only one player wins by defeating all three opponents. Successful strategy in Cutthroat Forchess can differ greatly from "regular" Forchess, as fluid alliances may spark a game of psychological manipulation. In this respect, Cutthroat shares strategy elements with the board game Risk.
|
OPCFW_CODE
|
Simultaneous 2D and 3D editing of small molecules.
Ability to rotate and translate molecules independently in the builder.
Simplified authoring of extensions, and inclusion of a number of new example extension scripts.
Ability to add annotations to existing objects, and have those annotations appear in the List Window.
Adjustment of torsion angles within rings to change ring puckers and interconvert axial and equatorial substitutions.
Option to use the MMFF94s force field when building molecules.
Improved set of fragment templates in the sketcher.
Major Bug Fixes
Fixed a crash when saving state files after loading a large number of proteins.
Fixed a crash when editing cells in a spreadsheet.
Centering on selected atoms would sometimes require a change in visibility before taking effect. This has been fixed.
Fixed a bug which caused rows to sometimes disappear when cells in a spreadsheet were edited.
Spreadsheet depictions are now being updated as molecules are edited.
An issue which caused the view to occasionally jump unexpectedly during molecule rotations was fixed.
Fixed a problem which sometimes caused crashes while creating measurements in the “All” scope.
Changing colors/styles of grids now operates on the proper scope, rather than on all visible grids.
Addressed issues with the builder and sketcher which sometimes caused fragments to be joined incorrectly.
Fixed a bug which sometimes caused right-click menus to become inoperative.
Minor Bug Fixes
Hotkeys for Marked/Locked/Visible now work on selected rows in the spreadsheet, not just the active molecule.
Fixed a bug where changes to the spreadsheet cell formats weren’t being propagated immediately to the spreadsheet.
Fixed a bug to have VIDA remember the hidden/visible state of depictions in a spreadsheet from one session to another.
Fixed an issue with the spreadsheet so that users can’t edit non-editable cells, such as computed values (e.g., molecular weight, number of rotors) and user-defined python expressions.
Fixed a sizing problem with the 2D Sketcher window, which had caused some tools in its toolbar to be hidden.
When a spreadsheet is created from a list, it is now made the current spreadsheet.
Fixed an issue which would cause blank entries to sometimes appear in the List Window.
Fixed a rendering problem which caused circles to be displayed incorrectly.
Corrected an issue in which text would be incorrectly wrapped within labels.
Display of symmetry-related molecules now updates automatically as the view is translated.
Surfaces created in the FRED View are now added to the List Window and can be deleted individually.
Grids are now contoured by default at the original resolution of the grid file.
Contours of a grid can now be temporarily shown, similarly to conformers, by clicking on their names in the List Window.
All child objects of grids now show in the List Window. Previously only the contours would show.
Corrected an issue which sometimes caused incorrect hydrogen geometries during building.
Addressed several issues affecting performance and behavior of spreadsheets.
Fixed an issue that could sometimes cause incorrect stereochemistries to appear in 2D depictions.
Fixed a bug which caused stereochemistry to be omitted from molecules when copying to the clipboard.
Corrected strange lasso behavior in the sketcher.
Support for Ubuntu 10 (x64) and RedHat Enterprise v6 (x64) was added. RedHat Enterprise v3 is no longer supported.
|
OPCFW_CODE
|
Zhou Wen found it odd. Could it be that Truth Listener's Life Soul is a passive Life Soul that can't be summoned just like my Slaughterer?
Truth Listener couldn't speak and Zhou Wen had no way of asking it. All he could do was take it in-game to fight monsters, hoping to see how much it had improved.
When he reached the poison bat cave, he summoned Truth Listener and let it kill the poison bats itself.
Truth Listener was indeed a Mythical pet. Although it was still developing, it possessed an astonishing, strange strength. Its movement was as fast as lightning, on par with Zhou Wen's Ghost Steps. Its two claws were indomitable, easily tearing apart a poison bat that was similarly at the Epic stage.
Even the poison bat Boss, White Shadow of Poison, was easily caught by Truth Listener, which tore its body apart. What surprised Zhou Wen was that even when the White Shadow of Poison entered its shadow state, it failed to dodge Truth Listener's claws and was directly torn in two.
That petite body jumped between the horde of poisonous bats. Even though it could only kill one poisonous bat every time, the speed at which it killed them was very fast. Each swipe of the claw killed one, almost as though it was not afraid of a group brawl.
Although Truth Listener performed very well, these seemed to be skills it originally had. The reason why it had powerful Speed and Strength was that it was augmented by the Nine Extremes Primordial Energy Skill. This Primordial Energy Skill allowed Truth Listener's Speed and Strength to exceed its limits, allowing it to produce strength that exceeded its own.
The Indestructible Golden Body made Truth Listener hardly need to dodge attacks of the same level. It wouldn't even be easy for dimensional creatures of a higher level to kill it.
Truth Listener could really be used as a tanking pet despite its extremely small body.
Zhou Wen had seen these powers before, but he didn't see the effects of its Evil Nullification Life Soul.
Seeing that the experiment was useless, Zhou Wen turned Truth Listener into an earring and went to the underground sea.
Before Zhou Wen descended to the sea, he summoned the goldfish to give it a try. The moment it landed in the seawater, it immediately vaporized immense amounts of water, producing white steam. It was like placing red-hot metal pieces into water.
Then, Zhou Wen hurriedly checked the goldfish's stats and saw his Bad Luck instantly soar to +100. It frightened him so much that he hurriedly unsummoned it.
You're clearly a fish, alright? How can you not like water? This is your fault, Zhou Wen lampooned inwardly before he rushed into the sea.
After switching back to the Slaughterer Life Soul, Zhou Wen used his movement technique to circle around the nine black dragons. He was ultimately no match for them and was swallowed by a black dragon.
Apart from the enhancement in my hearing range and the details I can make out, it doesn't seem to be of any special use. Back in Dragon Gate Grotto, it was able to turn the lightning into Primordial Energy. Why isn't it capable of doing so now? The black dragon spewed out a few rounds of dragon breath at me, but it failed to absorb the dragon breath and convert it to Primordial Energy when it landed on me. Could it be because the black dragon is too powerful that it isn't able to convert it in time? Zhou Wen thought to himself as he headed to Ancient Sovereign City. He wanted to see if it could absorb the flames at the Fire God Platform.
However, when the firebirds hit the blood-colored avatar, Truth Listener didn't react at all and didn't absorb the flames.
Could it be that it can only absorb the power of lightning? However, dimensional creatures with lightning-elemental attributes seem a little difficult to find, Zhou Wen thought for a moment and immediately thought of Li Xuan.
Li Xuan had the Thundergod Sword. He just needed to try it out with him.
He gave Li Xuan a call and asked if he was free. Since Li Xuan was free, the two of them decided to meet at the training grounds.
"Zhou Wen, look who this is?" When Li Xuan came, he had actually brought someone over.
"Ah Lai, why are you here?" Zhou Wen was somewhat surprised. This was because Ah Lai wasn't a student at the college. Logically speaking, he shouldn't have been able to enter the school.
Li Xuan said smugly, "I got him an identification card and got him to be admitted into an ordinary university. Then, I used some connections to get him to come to our Sunset College as an exchange student."
"Awesome!" Zhou Wen gave him a thumbs up. This wasn't something an ordinary person could do.
Ah Lai was still very quiet. He didn't speak much. Back when Zhou Wen and Li Xuan had taken him out of the Holy Land, his memory had been damaged greatly, so his memories were limited.
Later, Ah Lai was taken back by Li Xuan, who arranged for him to stay in his villa. This was the first time Zhou Wen had seen him since their farewell.
"Ah Lai is really strong. I got him to cultivate in a Primordial Energy Art and gave him some crystals. It didn't take long for him to advance to the Legendary stage. He's an absolute genius. Who knows, he might be able to advance to the Epic stage in a short while," Li Xuan said excitedly.
In a certain sense, Ah Lai's Primordial Energy Art and Primordial Energy Skills were taught by Li Xuan. It was enough to view Ah Lai as his student, so Li Xuan was very happy with Ah Lai's achievements.
"If you need money, go to Li Xuan. If you want to beat up monsters or something, you can look for me," Zhou Wen said to Ah Lai with a smile.
Li Xuan rolled his eyes at Zhou Wen speechlessly. "By the way, why were you looking for me?"
"I want you to help me with a test. Use your Thundergod Sword to smite me with lightning," Zhou Wen said.
"What's the point of me smiting you? Even the void lightning in Dragon Gate Grotto wasn't able to harm you at all. I doubt my lightning will do anything." Li Xuan curled his lips and said, "Don't tell me you are deliberately showing off?"
"Just smite me. Why all the chatter?" Zhou Wen said.
"On your guard." Li Xuan suddenly unsheathed his sword and cleaved using the Thundergod Sword, slashing out a powerful lightning beam at Zhou Wen.
Zhou Wen had no intention of being hit. All he did was hold his Overlord Sword horizontally and block the lightning sword beam.
The lightning exploded on his sword, spreading across Zhou Wen's body so that his hair stood on end. Due to the electric current, his body convulsed for quite some time before it stopped.
Holy sh*t, it's useless! Zhou Wen was depressed. Why did it work previously, but not now?
"Old Zhou, what's wrong with you? Why didn't you use the method you used at Dragon Gate Grotto to block my lightning?" Li Xuan felt puzzled as he looked at Zhou Wen, unsure of what he was up to.
"Again," Zhou Wen said through gritted teeth.
"Old Zhou, when did you gain the hobby of being tortured?" Li Xuan said with a smile. However, he didn't idle as he struck out with another lightning sword beam.
Zhou Wen raised his sword to block the lightning sword beam, but it was the same as the previous strike. His entire body turned numb from the lightning, and his hair resembled afro curls.
"Haha, Zhou Wen, do you really have sadomasochistic tendencies? Do you want me to strike you a few more times?" Li Xuan was in a good mood. He was eager to continue, as though he wanted to cleave out a few more times.
"Smite your a*s." Zhou Wen was now officially certain that Truth Listener's Evil Nullification Life Soul didn't have the ability to convert lightning.
|
OPCFW_CODE
|
I read an article on the internet trying to understand how Minecraft chooses suitable coordinates to act as the world spawn. I learned some information about how that mechanic works including spawning within a certain area around this position.
It was implied that the game starts from the x/z 0,0 coordinates then explores around that position for a suitable area to start. I didn't see much in the way of detail about how it considers an area suitable. It doesn't seem like it is totally consistent either as I read that using the same seed, a world may be the same but the initial spawn location may still be different.
It would make sense for the game to not dump you in the middle of water / the ocean. Even better to put you somewhere on land with trees within a reasonable distance.
In the world I am playing, the world origin 0,0 is in the ocean, and the spawn location is near the N/S zero line for the z axis but about 700 metres down the x axis. This is fair enough, but there is land in a forest biome less than 200 blocks north (z < -200) which would seem a very suitable place to spawn, yet the game chose a location over twice as far away. It could make sense if it was a very simplistic algorithm that just moved down the positive x axis until it hit land. What I read implied it could be more complex.
I also read that in spite of trying to find a suitable location the game can sometimes dump you in the ocean or even on lava. I could experiment...for science....but wondered if anyone knows any more. It isn't something important for my playing enjoyment but my curiosity was piqued.
I once had a spawn in the middle of an ocean. It was on xbox 360 and it was an all water map with just a couple of small islands. One island had one tree on it. The island could not be seen from spawn and i only made it there without starving by pure luck. It was a great survival seed, very challenging.
I don't know about non-Java versions but the Java version first checks for a plains, forest, taiga (non-snowy) or jungle biome (including their hills variants, but not other variants) within a 512x512 area centered around 0,0, then chooses an initial point if found, then it checks to see if the block at that point is a grass block at sea level or higher and if so sets world spawn there (the actual area checked uses the biome generator's underlying resolution of 4 blocks, prior to smoothing, so coordinates are a multiple of 4. For example, the seed "-123775873255737467" in 1.6.4 has spawn at -92, 236, which is a multiple of 4 so this seed passes the first check).
If no grass block is found it starts a routine which checks up to 1000 points with a random offset of up to +/- 63 blocks from the previous point, which will randomly wander around and may get over 1000 blocks from the initially chosen location before it either gives up or finds a valid spot (in the case that it gives up spawn can be in a location like in the ocean with no land around, though this is rare in versions since 1.7). Unlike the biome check, this method actually generates chunks, which is why you sometimes see a long trail of chunks scattered from 0,0 to the world spawn, as in this example (in this case the game is unable to read blocks below normal sea level so it fails to find any grass blocks on default Superflat worlds):
Here is the actual code that finds the world spawn point (the comment given by MCP is a bit incorrect since it will only be within 256 blocks of 0,0 if the biome check passes and there is exposed land):
/**
 * creates a spawn position at random within 256 blocks of 0,0
 */
protected void createSpawnPosition(WorldSettings par1WorldSettings) {
    if (!this.provider.canRespawnHere()) {
        // dimensions that cannot host a normal spawn just use 0,0 at ground level
        this.worldInfo.setSpawnPosition(0, this.provider.getAverageGroundLevel(), 0);
        return;
    }
    this.findingSpawnPoint = true;
    WorldChunkManager var2 = this.provider.worldChunkMgr;
    List var3 = var2.getBiomesToSpawnIn();
    Random var4 = new Random(this.getSeed());
    // biome check: look for a valid spawn biome within 256 blocks of 0,0
    ChunkPosition var5 = var2.findBiomePosition(0, 0, 256, var3, var4);
    int var6 = 0;
    int var7 = this.provider.getAverageGroundLevel();
    int var8 = 0;
    if (var5 != null) {
        var6 = var5.x;
        var8 = var5.z;
    } else {
        this.getWorldLogAgent().logWarning("Unable to find spawn biome");
    }
    // random walk: offset x/z by up to +/- 63 blocks per step, at most 1000 tries
    int var9 = 0;
    while (!this.provider.canCoordinateBeSpawn(var6, var8)) {
        var6 += var4.nextInt(64) - var4.nextInt(64);
        var8 += var4.nextInt(64) - var4.nextInt(64);
        ++var9;
        if (var9 == 1000) {
            break;
        }
    }
    this.worldInfo.setSpawnPosition(var6, var7, var8);
    this.findingSpawnPoint = false;
}
Also, the biome check is done by scanning the 512x512 biome map from the northwest corner to the southeast corner (from west to east, then back to the west and one south to the next row), which causes spawn to be biased to the south (it does not necessarily use the first valid point found but rather has a random chance of choosing a valid point, which starts at 100% and decreases as more points are found, and always checks the entire area).
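For reference, the "random chance" selection described above is essentially reservoir sampling over the valid biome cells. Here is a simplified sketch of that scan (variable and helper names are illustrative, not the actual decompiled ones):

ChunkPosition picked = null;
int found = 0;
// scan the biome map at its 4-block resolution, NW corner to SE corner
for (int i = 0; i < width * height; ++i) {
    int blockX = (minX + i % width) << 2;
    int blockZ = (minZ + i / width) << 2;
    // a newly found valid cell replaces the current pick with probability 1/(found + 1)
    if (spawnBiomes.contains(biomeAt(i)) && (picked == null || rand.nextInt(found + 1) == 0)) {
        picked = new ChunkPosition(blockX, 0, blockZ);
        ++found;
    }
}
return picked; // null if no valid cell was seen, triggering the "Unable to find spawn biome" warning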
Our 0,0 is surrounded by a ton of ocean but there is a huge desert biome just to the west. Our spawn point is 3788, 30. There are larger islands much closer to the spawn point but maybe this little land mass had all the other criteria.
|
OPCFW_CODE
|
Migrate the legacy examples to the Merlin repo
We may (or may not) want to keep these examples but they've overstayed their welcome in the NVTabular repo, which is burdened with the accumulation of a lot of historical cruft. Since some of these examples use inference code that's moving to Systems, it makes more sense for them to live in the Merlin repo (if we want to keep them.)
The PR on the other side of this migration is https://github.com/NVIDIA-Merlin/Merlin/pull/742
Hmm, haven't been able to reproduce these test failures locally, so...
rerun tests
rerun tests
Thanks @karlhigley - that is a great clean up PR.
We need to update multiple documentation files before we can merge it.
https://github.com/NVIDIA-Merlin/NVTabular/blob/main/examples/README.md
https://github.com/NVIDIA-Merlin/NVTabular/blob/main/README.md
Let me try to make a proposal to update the READMEs.
@mikemckiernan can you test if we break any links in our documentation?
I added Karl's fork and checked out the migrate/legacy-examples branch. I already have a handful of changes to make the TOC behave reasonably and I still have some broken links to fix.
I'll fix the remaining broken links and then push (to Karl's fork?) on Monday unless someone tells me otherwise.
@mikemckiernan If you can't push to my fork, I can push to a branch on the main repo that we can share or we can sync on the changes and I can make them on my branch.
Not trying to dump a bunch of new work on you here; trying to find a way to prompt the right folks to either make some concrete decisions about these examples or move them out of my way. Maintaining and working around these outdated examples isn't free, but we often treat it like it is because that's the status quo. These changes are an attempt to remove the status quo as an option from the decision making process: it's fine if we're willing to make the updates to keep them around, but if not, that's fine too.
In either case, I'm happy to make the necessary changes, but I don't always know where to find them. Point me in the right direction?
Let me know if I can support fixing the links.
As far as I can tell, these are the places that explicitly link to that path on Github:
This only includes absolute URLs; there could be others using relative paths but I haven't figured out how to find them yet.
@bschifferer Updating the text of the READMEs is a bit outside my wheelhouse, so I'd propose we go ahead and merge the migration PRs and then update the READMEs. We can't really do that work in advance of the PRs being merged, so that's kind of the only way to split it. As far as broken links go, they appear to be largely in the README and the examples themselves, so that might be a reasonable thing to do as part of the second set of PRs.
@mikemckiernan If you can't push to my fork, I can push to a branch on the main repo that we can share or we can sync on the changes and I can make them on my branch.
Not trying to dump a bunch of new work on you here; trying to find a way to prompt the right folks to either make some concrete decisions about these examples or move them out of my way. Maintaining and working around these outdated examples isn't free, but we often treat it like it is because that's the status quo. These changes are an attempt to remove the status quo as an option from the decision making process: it's fine if we're willing to make the updates to keep them around, but if not, that's fine too.
In either case, I'm happy to make the necessary changes, but I don't always know where to find them. Point me in the right direction?
A) I don't see this as a bunch of new work. It's tech debt and I'm glad to play any role in the fixup.
B) I never shared how I check for broken links, so I can't blame anyone for not doing it. I'll share shortly and either we can all follow along or we'll discover a better way to handle them.
@karlhigley @mikemckiernan can we merge the PR?
rerun tests
|
GITHUB_ARCHIVE
|
Change focus to Help window after Ctrl+Shift+H
Problem Description
After making Help window visible with Ctrl+Shift+H, the focus is still in the console. Pressing Ctrl+Shift+Alt+M maximizes the console instead of the Help window. It would be better if focus was on the Help window after Ctrl+Shift+H so that users can maximize with Ctrl+Shift+Alt+M without having to use the mouse/touchpad to change focus to the Help window
What steps reproduce the problem?
Ctrl+Shift+H to make Help window visible
Ctrl+Shift+Alt+M to maximize. The console maximizes instead of the Help window
What is the expected output? What do you see instead?
I think that's the current behavior indeed (I was able to reproduce it :+1:).
I think there are some plugins that have the behavior you expect from the shortcut (to give focus), like the Editor, the Console and the Find pane. Maybe we should revisit which plugins get focus with their shortcut @ccordoba12? Maybe there is a reason for the current behavior? At least in the case of the Help pane, it kind of makes sense to let it have the focus over the combobox/line edit, no? 🤔
What do you think @spyder-ide/core-developers ?
I think, in general, window pane shortcuts should:
open the pane if not already open and give focus to the pane.
give focus to the pane if already open, not close it. For example, the Outline and Project panes open if not already open, but close rather than give focus if they are already open.
These are just my opinion.
Maybe we should revisit which plugins get focus with their shortcut @ccordoba12 ?
The problem is that some people could find it useful (like @NotNormallyAGitUser) to give focus to panes that can have it (like Help), but others don't. In my case, I always ask for help from the code I'm writing in the Editor or IPython console. So, I'd find it inconvenient to give focus to Help after that because it'd break my workflow.
A flexible solution that would accommodate both use cases would be to add an entry to the Options menu to turn on/off focus for plugins like Help.
@ccordoba12 : Thanks for clarifying the fact that it's not so simple. In addition to a GUI widget to enable/disable automatic shifting of focus, it can also be a parameter in the startup config file.
Yep, that menu entry would be linked to an option in our config system so that it's preserved after restarts.
In my case, I always ask for help from the code I'm writing in the Editor or IPython console. So, I'd find inconvenient to give focus to Help after that because it'd break my workflow.
I think you are correct here. However, the Help pane has two shortcuts associated doesn't it? On my mac, cmd+i shows the help pane without focus but updates to the requested context, while cmd+shift+h toggles the pane open/closed.
A flexible solution that would accommodate both use cases would be to add an entry to the Options menu to turn on/off focus for plugins like Help.
But I agree with this. The user could decide the behavior: make visible with focus; make visible without focus; toggle the pane open/closed.
In my case I find that panes will toggle closed when I just want them to be visible, e.g. in a tab group, requiring me to execute the shortcut twice (close pane, open pane with visibility in tab group but no cursor focus).
On my mac, cmd+i shows the help pane without focus but updates to the requested context, while cmd+shift+h toggles the pane open/closed.
Right, but if we set that the plugin must receive focus after being raised, then both cmd/ctrl+i and cmd+shift+h will give focus to the Help's line edit widget (we don't have a way to make one shortcut work differently from the other).
In my case I find that panes will toggle closed when I just want them to be visible, e.g. in a tab group, requiring me to execute the shortcut twice (close pane, open pane with visibility in tab group but no cursor focus).
Could you expand on this? How is a pane closed through a shortcut? (I thought that was not possible).
On my mac, cmd+i shows the help pane without focus but updates to the requested context, while cmd+shift+h toggles the pane open/closed.
Right, but if we set that the plugin must receive focus after being raised, then both cmd/ctrl+i and cmd+shift+h will give focus to the Help's line edit widget (we don't have a way to make one shortcut work differently from the other).
Aahh, got it.
In my case I find that panes will toggle closed when I just want them to be visible, e.g. in a tab group, requiring me to execute the shortcut twice (close pane, open pane with visibility in tab group but no cursor focus).
Could you expand on this? How is a pane closed through a shortcut? (I thought that was not possible).
Absolutely. Perhaps this is just a bug for macOS, but the following screencast illustrates the behavior. The Outline, Projects, Help, and Variable Explorer panes are all open. Using the standard shortcuts (cmd+shift+o, cmd+shift+p, cmd+shift+h, cmd+shift+v), you can see that the panes are not made visible, but rather closed, and executing the shortcut again opens the pane (and makes visible) without cursor focus.
Perhaps this is related to macOS menubar items? I think the menubar items should open/close their respective panes (hence the checkmark), but perhaps the shortcut gives priority to this behavior, rather than just making the pane visible.
|
GITHUB_ARCHIVE
|
elma365pm is a command line utility that allows you to work with configuration objects and modules in ELMA365. It can be used to export and unpack a solution, a workspace, or a module from ELMA365 to the file tree and to pack and import them to ELMA365.
This utility is used to apply the principles of the DevOps culture to the development of low-code solutions and perform all stages of the Continuous integration / Continuous delivery pipeline.
Use these links to download the latest version of the utility compatible with our public cloud and the latest version of ELMA365 On-Premises:
To access help, run the following command after downloading the file:
The following features are currently available:
Pack a directory to an importable package.
elma365pm export solution
Export a solution from ELMA365 to a directory in the file tree.
elma365pm export namespace
Export a workspace from ELMA365 to a directory in the file tree.
elma365pm export module
Export a module from ELMA365 to a directory in the file tree.
Import objects to ELMA365.
elma365pm export configuration
Export an ELMA365 configuration into a file system directory. The command works in ELMA365 On-Premises 2022.11 and above.
To get detailed instructions on each command, run the following:
elma365pm <command> --help
For example, you can export a solution from ELMA365 to a folder on the hard drive by running the following command:
elma365pm export solution --token=TOKEN --host=https://dev-elma365.myorg --out=my_solution --code=my_solution_code
Please note that if the exported solution has connections to components of another solution, the export command should use the parameter --allow-deps with the value true:
elma365pm export solution --token=TOKEN --host=https://dev-elma365.myorg --out=my_solution --code=my_solution_code --allow-deps=true
If you do not use the --allow-deps parameter, or set it to false, the solutions with the associated components will not be exported.
To pack a solution from the hard drive and import it back to ELMA365:
elma365pm import --token=TOKEN --host=https://dev-elma365.myorg --src=my_solution --version-up
File structure of a solution
When you unpack a solution as a group of files using the export command, you will see the following structure in the target folder (in this example, the Memos solution is used):
PS InternalDocuments> tree /F
The top level consists of service folders, as the system architecture is microservice-based. Each service folder usually has the manifest.json file and two folders, one of which is resources. The root also stores the package.json file that includes the main data of the exported solution or workspace.
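The tree listing itself is not reproduced here; schematically, the layout described above looks like the following (names in angle brackets are placeholders, not actual folder names):

InternalDocuments
├── package.json          <- main data of the exported solution or workspace
├── <service folder>
│   ├── manifest.json
│   ├── <objects folder>  <- configuration files and scripts
│   └── resources         <- resource and localization files
└── <service folder>
    └── ...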
Generally, there are three types of files in the structure:
- Configuration files are .json files or files with no extension (that are actually also .json files). These files include information about apps’ fields and settings, as well as processes, widgets, and modules.
- Script files are .ts files that contain the code of scripts from processes, widgets, and modules.
Script files are unpacked only for easy viewing and code review. The content of these files is not included in the package when the solution is imported.
Note that starting from version 2022.11 you can edit unpacked script files and use autocomplete files to make working in code editors more convenient.
- Other resource and localization files are usually stored in the resources folder. Solution package localization is going to be considerably reworked, so for now localization files are used only once, during the import of a package. You can make changes directly to these files. They will be used to form a new package that can be imported with the utility.
|
OPCFW_CODE
|
On Monday, August 21 at the 2017 IEEE/ACM Hot Chips Symposium on High Performance Chips (HOTCHIPS), researchers from the University of California San Diego, Cornell University, University of Michigan and UCLA jointly unveiled Celerity, the first open-source, RISC-V tiered accelerator fabric system on chip with a neural network accelerator and 511 RISC-V processor cores.
UC San Diego Computer Science and Engineering (CSE) Ph.D. student Scott Davidson as well as Ph.D. students Khalid Al-Hawaj (from Cornell) and Austin Rovinski (U. Michigan) each gave a 30-minute talk at the HOTCHIPS Conference in Cupertino, CA. Their talk was one of only three academic talks out of a total 29 talks. HOTCHIPS is the premier conference where industry releases details of their latest chips, and the students shared the stage with developers of top chips being released by Intel, Nvidia, Google, AMD, and Qualcomm.
The Celerity SoC is a 5x5-mm 360-million-transistor chip in TSMC's advanced 16-nm technology, split between 5 Linux-capable RISC-V (pronounced risk-five) cores and a NoC-connected manycore of 496 RISC-V cores, plus a binarized neural network accelerator, running at 625 MHz.
"The emerging RISC-V open-source software and hardware ecosystem provided a baseline to reduce design, implementation and verification effort," said CSE Professor Michael Taylor (who is joining the University of Washington this fall but will remain an adjunct faculty at UC San Diego). "Then we turned up the awesome factor to create the most powerful RISC-V system in history with 511 cores, a neural network accelerator, and on-chip synthesizable clock and voltage regulation. This is also arguably the most complex chip ever created in academia."
Most of the work of designing the Celerity SoC was done by a team of first- and second-year graduate students, including a large team of 12 from the labs of CSE faculty Taylor and Rajesh Gupta, and Electrical and Computer Engineering (ECE) professors Patrick Mercier and Ian Galton. The student team from UC San Diego included Ph.D. student Scott Davidson, Master’s students Anuj Rao and Paul Gao, visiting researcher Shaolin Xie, staff member Luis Vega, postdoctoral researcher Chun Zhao, former visiting graduate student Ningxiao Sun, and remote collaborator from India's IIT Roorkee, Bandhav Veluri (all in Taylor’s Bespoke Systems Group); Ph.D. student Atieh Lotfi (from Gupta’s lab); ECE grad student Julian Puscar (M.S. ’17) from Galton’s lab; and ECE Ph.D. students Xiaoyang Wang and Loai Salem from Mercier’s lab. The UC San Diego students worked closely with Ph.D. students advised by Prof. Ronald Dreslinski at U. Michigan and Profs. Chris Batten and Zhiru Zhang at Cornell.
"The RISC-V ecosystem played a critical role in enabling a relatively modest team of junior graduate students to fabricate a complex SoC in just nine months," said CSE Ph.D. student Scott Davidson. "While ultimately a success, we still faced non-trivial challenges that we hope the broader RISC-V community can address in the future."
The student presenters said the Celerity SoC achieves a speedup of 700 to 1,220 times due to their use of specialty and many-core tiers in collaboration.
The Celerity project grew out of the CERTUS initiative funded in 2016 by the Defense Advanced Research Projects Agency (DARPA) Circuit Realization At Faster Timescales (CRAFT) program. CERTUS was awarded the first phase of a $5 million, five-year effort to reduce the time it takes to design an SoC by a factor of ten (i.e., to do it in 16 weeks rather than the approximately 160 weeks it currently takes to design a custom ASIC chip for the Department of Defense). The CERTUS project focuses on high-performance SoCs that integrate one or more IP blocks.
In keeping with that mission, the Celerity SoC was specifically designed to contain an array of processing cores based on RISC-V technology to speed up the design process. The team leveraged not only the RISC-V instruction set, but also its software stack, the Rocket Linux-capable processor and memory system generators from Berkeley, as well as its verification suite and system-level hardware infrastructure.
The team taped-out the Celerity chip in April, barely nine months after starting work on the prototype, at an overall cost of approximately $1.3 million (small by comparison with advanced chips developed by industry). Theoretically, the SoC was designed primarily for use in autonomous vehicles, where the neural network accelerator can be critical to processing real-time sensor data in order to make split-second decisions to avoid a collision or other safety challenge.
RISC-V is a new instruction-set architecture (ISA) to support computer architecture research and education. Originally developed by computer scientists at UC Berkeley, RISC-V is fast becoming a standard open architecture for industry implementations under the governance of the RISC-V Foundation, a nonprofit corporation controlled by its members to drive adoption of the ISA (including several RISC-V-based implementations showcased at HOTCHIPS 2017).
"This is a team that worked like a charm," said Gupta in a Facebook post about the demo at HOTCHIPS. He went on to urge potential employers to "look for these students when they graduate. Each one is special."
The team expects first silicon of the Celerity in September, and they will present their results in an academic venue with a conference paper at the first Workshop on Computer Architecture Research with RISC-V (CARRV 2017), set for October 14, 2017 (and co-located with IEEE MICRO this year in Boston, MA).
|
OPCFW_CODE
|
import { TraverseHandler, TraverseEvent } from '../types';
import { isElement, isTextNode } from '../util/domUtils';
import { isAllWhitespace } from '../util/stringUtils';
export interface SiblingSplitPoint {
added: ChildNode[];
remainders: ChildNode[];
}
// the first node that's either an element, or a textnode with content
const getFirstContentNode = (nodes: ChildNode[]): ChildNode | undefined => {
return nodes.find((node) => {
if (isTextNode(node) && node.nodeValue && !isAllWhitespace(node.nodeValue)) {
return true;
}
if (isElement(node)) return true;
return false;
});
};
const getLastContentNode = (elements: ChildNode[]): ChildNode | undefined => {
return getFirstContentNode([...elements].reverse());
};
// The proposed SiblingSplitPoint will be the maximum number of added siblings
// that fit before the region overflows, with the minimum remainder. Therefore, if the
// proposal is not valid, the only direction to go is to try removing added nodes one by one.
export const findValidSplit = (
original: SiblingSplitPoint,
canSplitBetween: TraverseHandler[TraverseEvent.canSplitBetween],
): SiblingSplitPoint => {
let splitPoint = original;
while (splitPoint.added.length > 0) {
const { added, remainders } = splitPoint;
const prevEl = getLastContentNode(added);
const nextEl = getFirstContentNode(remainders);
if (!nextEl || !prevEl || !isElement(nextEl) || !isElement(prevEl)) {
// If we are not between two HTMLElements, the split can be considered valid.
// Plugins to prevent split can only run on elements.
return splitPoint;
}
if (canSplitBetween(prevEl, nextEl)) {
return splitPoint;
}
// try removing the last node and adding it to the remainder
const shifted = added.pop()!;
splitPoint = {
added: [...added],
remainders: [shifted, ...remainders],
};
}
// Proposed.added is empty. There is no way to add any of these
// sibling nodes while fulfilling the relevant plugins. This
// result will cause the parent element to also be removed.
return splitPoint;
};
|
STACK_EDU
|
import Debug from 'debug'
const dg = Debug('@:Hatebu')
async function getEntry(targetUrl) {
dg('[#getEntry] >>', targetUrl)
const url = `https://hatena.vercel.app/api/bookmark/getEntryWithStar?url=${targetUrl}`
const b = await fetch(url).then((r) => {
if (r.ok) return r.json()
return {}
})
const msg = `count:${b.count} bookmarks:${(b.bookmarks || []).length}`
dg('[#getEntry] <<', msg)
return b
}
/* eslint-disable no-restricted-syntax, no-await-in-loop */
async function fetchStarSet(data) {
dg('[fetchStar]>>>>', data)
const comments = data.bookmarks.filter((d) => d.comment)
const hatebuStarSet = {}
for (const c of comments) {
const ymd = c.timestamp.match(/^(20..\/..\/..)/)[1]
const yyyymmdd = ymd.replace(/\//g, '')
const uri = `http://b.hatena.ne.jp/${c.user}/${yyyymmdd}%23bookmark-${data.eid}`
const url = `https://s.hatena.com/entry.json?uri=${uri}`
const item = await fetch(url).then((r) => {
if (r.ok) return r.json()
return {}
})
if (item.entries && item.entries.length > 0) {
const cnt = item.entries[0].stars.length
if (cnt > 0) hatebuStarSet[c.user] = cnt
}
}
dg('[fetchStar]<<<<', hatebuStarSet)
return hatebuStarSet
}
export default {
getEntry,
fetchStarSet,
}
|
STACK_EDU
|
Looking back, and forward
Freshly whipped WebGPU, with ice cream
Reinventing rendering one shader at a time
Question the rules for fun and profit
The known unknown knowns we lost.
On the nature of our convictions.
Solve all your shader problems with this one weird trick.
Teaching Johnny what thinking is
Cultural assimilation, theory vs practice.
Making code reusable is not an art, it's a job.
Doing React-like things without React.
Visual programming for coders.
Let's actually whiteboard some code.
The boy who cried leopard.
Lies, damned lies, and social media.
A tale from the loop.
A computer is an educational device.
How to mismanage your product and alienate your core audience
Real-Time Database Products By Google™
Oh, how we used to laugh.
On the reason why software isn't better.
Immigration from the inside.
How to stay sane in a world of remote, async work.
An easy tutorial for beginners in Rust.
MVC was a mistake.
Up and down the ladder of needlessly recomputing things.
REST vs GraphQL, a pox on both houses.
A retrospective on cultural shifts in tech and elsewhere.
Internet activism and media in the age of social justice.
Functional GLSL metaprogramming.
PowerPoint must die.
On the existential crisis in gaming and the role of game design.
Why HTML/CSS is broken and how to fix it.
Just a shiny demo or the future of browser computing?
Putting math into motion and controlling it precisely, with a little help from Isaac Newton and Admiral Ackbar.
A new design for Acko.net, fusing WebGL, CSS 3D and HTML at sixty frames per second.
Usability, affordance and grannies in Vegas. On mobile phones.
Observations on gender, feminism and harassment.
You can transform your ordinary browser into a lush 3D world with one click. Why should you care?
Exploring the outer limits: on the nature of infinity, continuity and convergence.
A tale of numbers that like to turn: a different look at complex numbers and the strange things they do.
Presentation-quality math with Three.js and WebGL.
In which I make a creepy disembodied head in your browser.
If the world is going to end in 2012, Acko.net will at least go out in style: I've redesigned.
I couldn't resist making a demo for the JS1K contest. So I pulled out my bag of tricks from my Winamp visualization days.
In this multi-part series I try to make a procedural planet generator that runs on the GPU.
I designed a 'farewell' page for Leuven Speelt, a student theater group run by friends.
A thorough breakdown of the thorny problem of handling textual data on and around the web.
An easy-to-use, compact jQuery color picker.
AVS was a music visualizer that shipped with Winamp, popular in the early 2000s. I made lots of visuals for it.
|
OPCFW_CODE
|
Discussion in 'General Hardware' started by strick94u, Feb 8, 2007.
Ok my notebook has an amd 64 3400+ in a 939 socket, can this be upgraded?
does anyone know?
Chances are the CPU is soldered to the motherboard. If you can take it apart without destroying it you can check and see though.
I have opened a massive amount of notebooks, and none had their cpu soldered on... the only ones that do are the ones with a cyrix or a transmeta.
KTR is correct then. I've never owned a laptop so I've never taken one apart, obviously I collected some false information somewhere.
I have some older Toughbooks, theirs are soldered p2 and p3. And I was mistaken, mine's a socket 754. Looks like the 3700+ is it, thats not much more
I suppose we're both right then. Probably the only way to tell then is find some technical data on your specific model or take it apart and see.
Thats probably the best that socket can do in a notebook, check link here.
um think u can get a t50 or something (turion dual core) for 754 and use it, its the best 754 chip u can get.
other then that the 4000+ newark is a great chip.
u just need to update the books bios first, but any laptop with 754 can take a turion chip from my exp, the dual cores may need a bios update to be recognized properly by the lappy.
The other cores available for the 754 mobile socket are here;
http://www.newegg.com/Product/Produ...&SubCategory=343&ATT=AMD Socket 754 Processor
But the 3700 he has is the fastest at 2.4ghz.
newark is 2.4 with 1mb cache i think, also is 90nm not 130.
the dual core turion for 754 is hard to find but is the best 754 chip you can get today.
Can't find a Newark for socket 754, No X2 core's for socket 754!!(Search of AMD.Com)
Yeah, the CPU is generally removeable. The gfx card on most laptops (excluding the Dell XPS series) are soldered on and not user replaceable.
ml-44 is the highest @ 2.4ghz single core...
while the tl-62 is the highest @ 2.2ghz dual core...but i cant seem to locate and 64...so the next in line is the tl-60...
they arent necessarily listed on the specs site, mainly because the newark never was a retail chip AFAIK, but the egg had them for a long time, most expensive 754 chip ever made (newark)
try froogle if you dont like fleabay
Can't argue with that, !!
Often a new HDD can make a big performance difference. So can extra RAM. Both are easy upgrades and might keep you happy and stop you from going bonkers.
You're going to find the upgrades too expensive for the change in performance. I'd go with bonkers, hdd and ram are cheaper to upgrade and give a better bang-for-buck performance boost.
the hdd seems to be the biggest bottleneck on most laptops, because the majority use 4200 or 5400rpm hdds... possibly switching to a 7200 would be faster...
the highest performing hdd would be a 100gb 7200rpm..
actually they have 120 and 160gb 7200rpm drives now
and one company had a 15k rpm 1.8in drive(bet that sucker got hot as hell!!!)
hmmm...i cant seem to find any...
egg had some 120-160gb lappy drives a while back, dunno how limited the quantity was tho, they were hitachi branded i think, try pricewatch or froogle, newegg doesnt always have a huge selection of top of the line laptop upgrades, probably because they are a LOW volume item
none...i have never seen any 7200 hdds above 100gb.
|
OPCFW_CODE
|
Operads are algebraic devices offering a formalization of the concept of operations with several inputs and one output. Such operations can be naturally composed to form more complex ones. Coming historically from algebraic topology, operads now intervene as important objects in computer science and in combinatorics. Many operads involving combinatorial objects highlight some of their properties and allow one to discover new ones.
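To fix notation, the data underlying a nonsymmetric operad can be sketched as follows (this is a standard sketch, not the book's exact notation): a collection of sets $\mathcal{O}(n)$ of $n$-ary operations, for $n \geq 1$, together with partial composition maps

\circ_i : \mathcal{O}(n) \times \mathcal{O}(m) \to \mathcal{O}(n + m - 1), \qquad 1 \leq i \leq n,

which graft an $m$-ary operation onto the $i$-th input of an $n$-ary one, together with a unit $\mathbb{1} \in \mathcal{O}(1)$, subject to associativity and unit axioms.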
This book portrays the main elements of this theory under a combinatorial point of view and exposes the links it maintains with computer science and combinatorics. Examples of operads appearing in combinatorics are studied. The modern treatment of operads consisting in considering the space of formal power series associated with an operad is developed. Enrichments of nonsymmetric operads as colored, cyclic, and symmetric operads are reviewed.
Samuele Giraudo is an associate professor at LIGM, University of Paris-Est Marne-la-Vallée in France. He received a PhD and then an accreditation to supervise research, both in computer science. His research interests are primarily in combinatorics and algebraic combinatorics. His research works focus on Hopf bialgebras, operads, and applications of methods coming from algebra to solve enumerative problems.
- Chapter 1. Combinatorial Structures
  1. Combinatorial sets and combinatorial objects
     1.1 Combinatorial sets
     1.2 Operations over combinatorial sets
     1.3 Main combinatorial sets
  2. Combinatorial spaces
     2.1 Polynomials on combinatorial sets
     2.2 Operations over combinatorial spaces
     2.3 Combinatorial bialgebras
     2.4 Types of combinatorial bialgebras
- Chapter 2. Trees and rewrite rules
  1. Syntax trees
     1.1 Planar rooted trees
     1.2 Families of planar rooted trees
     1.3 Syntax trees
  2. Rewrite rules
     2.1 Combinatorial posets
     2.2 Rewrite rules on combinatorial sets
     2.3 Rewrite rules on combinatorial spaces
     2.4 Syntax tree patterns and rewrite rules
- Chapter 3. Combinatorial operads
  1. Nonsymmetric operads
  2. Free operads
  3. Presentation by generators and relations
  4. Algebras over operads
  5. Koszul operads and Koszul duality
- Chapter 4. Main combinatorial operads
  1. Associative and magmatic operads
  2. Operads and permutations
  3. The dendriform operad
  4. Operads of rational fractions
- Chapter 5. Constructions, applications and generalizations
  1. Series on operads
  2. Functors to operads
  3. Functors from operads
  4. Beyond operads
|
OPCFW_CODE
|
[BUG] TypeError on 'map' when attempting to explore JSON (and therefore convert to anything) running Version >= 1.10.0
Describe the bug
TypeError when attempting to explore JSON (and therefore convert to anything) running Version >= 1.10.0
Unable to explore JSON (and therefore convert to anything) running Versions 1.10.1 and 1.10.0, while 1.8 works
To Reproduce
Steps to reproduce the behavior:
Open workspace
Right-click on the ns.conf file, use F5 Flipper "Explore ADC/NS (.conf/tgz)"
Initial parsing appears to work, applications are shown.
Select one of the applications and click on the "{ }"; nothing is shown, and the output logs indicate a TypeError occurred
Expected behavior
Expect a json to be generated and displayed in a new editor tab/window.
Screenshots
See logs below.
1 [2024-05-23T17:52:04.925Z] [INFO]: refreshing ns diagnostic rules and tree view
1 [2024-05-23T17:52:04.925Z] [INFO]: loading ns diagnosics rules file
1 [2024-05-23T17:52:13.064Z] [ERROR]: --- unhandledRejection --- [TypeError: Cannot read properties of undefined (reading 'map')
at mungeNS2FAST (/Users/michaelj/.vscode/extensions/f5devcentral.vscode-f5-flipper-1.10.1/out/ns2FastParams.js:109:107)
at NsCfgProvider.<anonymous> (/Users/michaelj/.vscode/extensions/f5devcentral.vscode-f5-flipper-1.10.1/out/nsCfgViewProvider.js:438:73)
at Generator.next (<anonymous>)
at /Users/michaelj/.vscode/extensions/f5devcentral.vscode-f5-flipper-1.10.1/out/nsCfgViewProvider.js:15:71
at new Promise (<anonymous>)
at __awaiter (/Users/michaelj/.vscode/extensions/f5devcentral.vscode-f5-flipper-1.10.1/out/nsCfgViewProvider.js:11:12)
at NsCfgProvider.render (/Users/michaelj/.vscode/extensions/f5devcentral.vscode-f5-flipper-1.10.1/out/nsCfgViewProvider.js:427:16)
at /Users/michaelj/.vscode/extensions/f5devcentral.vscode-f5-flipper-1.10.1/out/extension.js:191:52
at Generator.next (<anonymous>)
at /Users/michaelj/.vscode/extensions/f5devcentral.vscode-f5-flipper-1.10.1/out/extension.js:31:71
at new Promise (<anonymous>)
at __awaiter (/Users/michaelj/.vscode/extensions/f5devcentral.vscode-f5-flipper-1.10.1/out/extension.js:27:12)
at /Users/michaelj/.vscode/extensions/f5devcentral.vscode-f5-flipper-1.10.1/out/extension.js:187:100
at r.h (/Applications/Visual Studio Code.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:153:189465)
at r.$executeContributedCommand (/Applications/Visual Studio Code.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:153:190325)
at c.S (/Applications/Visual Studio Code.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:150:5505)
at c.Q (/Applications/Visual Studio Code.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:150:5271)
at c.M (/Applications/Visual Studio Code.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:150:4361)
at c.L (/Applications/Visual Studio Code.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:150:3579)
at a.value (/Applications/Visual Studio Code.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:150:2227)
at o.y (/Applications/Visual Studio Code.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:82:660)
at o.fire (/Applications/Visual Studio Code.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:82:877)
at u.fire (/Applications/Visual Studio Code.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:107:14175)
at a.value (/Applications/Visual Studio Code.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:176:8023)
at o.y (/Applications/Visual Studio Code.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:82:660)
at o.fire (/Applications/Visual Studio Code.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:82:877)
at u.fire (/Applications/Visual Studio Code.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:107:14175)
at MessagePortMain.<anonymous> (/Applications/Visual Studio Code.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:176:6303)
at MessagePortMain.emit (node:events:517:28)
at MessagePortMain._internalPort.emit (node:electron/js2c/utility_init:2:2285)]
Desktop (please complete the following information):
OS: macOS 14.5
VS Code: Version: 1.89.1
Commit: dc96b837cf6bb4af9cd736aa3af08cf8279f7685
Date: 2024-05-07T05:14:32.757Z (2 wks ago)
Electron: 28.2.8
ElectronBuildId: 27744544
Chromium: 120.0.6099.291
Node.js: 18.18.2
V8: 12.0.267.19-electron.0
OS: Darwin arm64 23.5.0
F5 Flipper 1.10.1 and 1.10.0 have the symptoms.
F5 Flipper 1.8.0 does not have the symptom.
I'm thinking there must be some parameter in my NS config file that is not mapping well with the new code versions, because the "Lead Example / Test NS Config" seems to work as expected.
Offering a pull request for the workaround, https://github.com/f5devcentral/vscode-f5-flipper/pull/37
Canceling that, found the issue after a few more minutes of looking.
The object nsFastJson was missing the monitors.
So this:
const nsFastJson = {
tenant_name: nsApp.name,
app_name: nsApp.name,
type: nsApp.type,
protocol: nsApp.protocol,
virtual_address: nsApp.ipAddress,
virtual_port: nsApp.port === '*' ? '0' : nsApp.port,
pool_members: []
};
Should be:
const nsFastJson = {
tenant_name: nsApp.name,
app_name: nsApp.name,
type: nsApp.type,
protocol: nsApp.protocol,
virtual_address: nsApp.ipAddress,
virtual_port: nsApp.port === '*' ? '0' : nsApp.port,
pool_members: [],
monitors: []
};
This seems to fix the issue and not prevent the monitors from coming over.
Offering a pull request with the fix, #38
this fix will be in the pending v1.11.0 release
|
GITHUB_ARCHIVE
|
Fujitsu releases high-performance file system
Fujitsu today announced the launch of FEFS (Fujitsu Exabyte File System), a scalable file system software package for building file systems for x86 HPC clusters in Japan.
FEFS is software for x86 HPC cluster systems that enables high-speed parallel distributed processing of very large amounts of read/write transactions from the compute nodes. The software achieves the world's highest throughput speed of 1 TB/s from the compute nodes to the file system. In addition, it includes superior features for system scalability, high reliability for zero operational downtime, and operational convenience. It delivers the high speeds and large-scale data processing performance increasingly demanded of file systems as cluster systems grow in performance and scale. This, in turn, contributes to improvements in overall system performance.
To meet the wide-ranging needs of customers, Fujitsu is offering file system solutions that combine its PRIMERGY x86 servers with its ETERNUS storage system and the new FEFS.
Computer-based analysis and simulation can be used to reduce costs and shorten development times. It is currently being actively used in manufacturing and many other industries. Increasingly, x86 HPC clusters, which use multiple x86 servers for parallel processing, are becoming the dominant platform used for such analyses and simulations.
With the improved performance of x86 HPC cluster systems in recent times, file systems have emerged as a source of performance bottlenecks. There is an increasing need for file systems that can deliver higher speeds and large-scale data processing capabilities.
Fujitsu's newly developed FEFS software enables high-speed parallel distributed processing of very large amounts of read/write transactions from the compute nodes, creating a large-scale file system with high performance and high reliability.
Fujitsu is committed to providing the best file system solutions for analysis and simulation applications and a variety of other fields.
FEFS was developed based on the Lustre open source software, with proprietary feature enhancements added by Fujitsu. From x86 HPC cluster systems consisting of several dozen servers to massive systems comprised of up to a million servers, FEFS enables file systems with superior scalability, performance, reliability, and convenience to support a wide range of systems.
The main feature enhancements of FEFS are as follows.
It enables scalability of file systems from terabyte-scale systems to a maximum of 8 exabytes (1 exabyte = 1,000 petabytes), depending on data volume requirements.
It can be used as a file system offering superior price-performance for clusters consisting of several dozen nodes, and it can be used for large scale clusters comprised of up to a million servers.
It enables the configuration of systems consisting of 10,000 storage systems with the world's highest throughput speed of 1 TB/s.
It achieves metadata management performance capable of creating several tens of thousands of files per second, approximately 1-3 times the performance of Lustre.
Due to built-in redundancies at all levels of the file system (such as disk RAID configuration, InfiniBand network multipath configuration, and configurations of multiple servers and storage units), it enables failovers while jobs are being executed.
Fair share features for allocating resources among users prevent a particular user from monopolizing I/O processing resources.
Priority control settings for the operation of each node guarantee I/O processing bandwidth for each node.
Directory level quota functions enable efficient use of disk capacity by monitoring and managing fine levels of file system activities.
Smallest configuration: four PRIMERGY RX300 S6 servers (with InfiniBand connection), three ETERNUS DX80 S2 storage units, and FEFS license
|
OPCFW_CODE
|
Hibernate ORM version 4.1.4 has just been released. This is a minor bug fix release containing 37 bug fixes; see the changelog for the complete list. Specific fixes of note include:
- HHH-7074 - The @org.hibernate.annotations.Entity annotation had been deprecated for a while in favor of the JPA one, @javax.persistence.Entity. Features that were defined through the Hibernate @Entity annotation's attributes should now be expressed with the newly defined annotations, for example @org.hibernate.annotations.Immutable; see the javadoc for more details. Also, this deprecated annotation will be removed in Hibernate ORM 5.0, so it would be better to start migrating your code now (a sketch of the migration is given after this list).
- HHH-7306 - SessionFactory#openSession() could not be used in a tenant-aware scenario because Hibernate could not know which tenant identifier to use. We now have a contract named org.hibernate.context.spi.CurrentTenantIdentifierResolver that lets Hibernate resolve what the application considers the current tenant identifier (a sketch is given after this list). The fix of this issue also makes multi-tenancy possible within HEM (still under development, see HHH-7312). To learn more about this, please check out the Dev Guide.
- HHH-7350 A Readonly/Immutable Entity with 2LC enabled should be removable from cache.
- HHH-3961 - SQLServerDialect now supports nowait in LockMode.UPGRADE_NOWAIT; thanks to Guenther for the pull request.
- HHH-6846 / HHH-6256 / HHH-7356 - The javax.persistence.lock.timeout setting was ignored by @NamedQuery, persistence.xml and query hints.
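For illustration, a resolver implementing the HHH-7306 contract might look like the following. This is a minimal sketch: the ThreadLocal scheme and the class name are assumptions of the example, not code shipped with the release.

import org.hibernate.context.spi.CurrentTenantIdentifierResolver;

public class ThreadLocalTenantResolver implements CurrentTenantIdentifierResolver {

    // Hypothetical per-thread holder, populated e.g. by a servlet filter.
    private static final ThreadLocal<String> CURRENT_TENANT = new ThreadLocal<String>();

    public static void setCurrentTenant(String tenantId) {
        CURRENT_TENANT.set(tenantId);
    }

    public String resolveCurrentTenantIdentifier() {
        // The identifier that SessionFactory#openSession() should use.
        return CURRENT_TENANT.get();
    }

    public boolean validateExistingCurrentSessions() {
        // Returning true asks Hibernate to verify that an existing
        // "current session" belongs to the tenant this resolver reports.
        return true;
    }
}

The resolver is then registered with the SessionFactory configuration (the hibernate.tenant_identifier_resolver setting), so that openSession() no longer needs to be told the tenant explicitly.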
As usual, the artifacts have been uploaded to the JBoss Nexus repository (and will be synchronized to Maven Central in a day or two) and the release bundles have been uploaded to the Hibernate SourceForge in both ZIP and TGZ formats.
Last but not least, I would like to give special thanks to Lukasz Antoniak and Adam Warski, Hibernate Envers developers from the community who are helping us with Envers module testing on the DB matrix. Lukasz (with Adam's help) worked tirelessly and fixed ALL Envers test failures we found. Great work, and keep contributing :D
Dear Hibernate users,
I helped some community users on IRC recently who ran into issues when migrating to 4.0, and the most common issue I'm seeing is caused by the hibernate-annotations module dependency, so I would like to explain it again.
We MERGED this module into hibernate-core as of the Hibernate Core 3.6 release; see the release notes of 3.6.
So, Maven / Gradle users: you just need to add hibernate-core (and hibernate-entitymanager if you're using JPA) to your dependency list, and remove the hibernate-annotations dependency if it is there.
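For illustration, the Maven side is then just the following (the version shown matches the 4.1.4 release announced above; adjust it to the version you actually use):

<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-core</artifactId>
  <version>4.1.4.Final</version>
</dependency>
<!-- only if you are using JPA -->
<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-entitymanager</artifactId>
  <version>4.1.4.Final</version>
</dependency>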
Before hibernate-core 3.5, it was JDK 1.4 compatible, so to use annotations, the new JDK 1.5 feature, we had to create a new module, aka hibernate-annotations.
But since we moved to JDK 1.5 with hibernate-core 3.5, there was no reason to keep hibernate-annotations as a separate module, so we merged it back into hibernate-core.
------------------------ Chinese version (translated) ---------------------------
I have noticed that for many people migrating to Hibernate ORM 4.x, the most frequent problem concerns hibernate-annotations. To restate it here: starting with Hibernate Core 3.6 the hibernate-annotations project no longer exists; it has been merged into hibernate-core. So if you are using hibernate-core 3.6 / 4.x, please remove the hibernate-annotations dependency (in fact, if it is present, you are probably using a wrong version, because 3.6 / 4.x versions of hibernate-annotations simply do not exist).
In the hibernate core 3.2 / 3.3 era, hibernate-core had to remain compatible with JDK 1.4, so to be able to use annotations, which appeared in JDK 1.5, we had to create a new project, namely hibernate-annotations. Starting with hibernate-core 3.5 we abandoned JDK 1.4 in favor of 1.5, so there was no longer any need to keep annotations in a separate project.
Hibernate Core 4.0.0.CR7 has just been released. The complete list of changes can be found in the JIRA release notes.
In this release, we resolved several performance-improvement issues, for example:
- HHH-5945 - Race condition in building query cache
- HHH-6845 - Avoid repeated invocations of ReflectHelper.overridesEquals in proxy initializers
- HHH-6858 - Minor performance improvements after hotspots analysis
- HHH-6862 - Reuse cached entryArray of IdentityMap in StatefulPersistenceContext as much as possible
- HHH-6868 - Lazily initialize HashMap in LockOptions
- HHH-6286 - UpdateTimestampsCache should try to avoid acquiring lock if possible
And in this release, we also fixed lots of bugs found while running the Hibernate tests on the supported DB matrix (the matrix is not complete; we only run on a small subset of the databases Hibernate supports). We're working hard to resolve all the failures we find and to keep the CI job clean (the CI job can be accessed here).
P.S. I really hope this is our last CR release. :D
Dear DB vendors, contributors, and others,
If you would like to run hibernate tests on other DB besides the default one (H2), please read this doc first.
For example, if you're working on a dialect to support your DB, it would be better to run the Hibernate tests on it and make sure all tests pass (or at least skip the failing tests with a JIRA issue explaining why they should be skipped).
The Hibernate Core CI job can be viewed here; you can also download the nightly build from here and view the javadoc.
The Hibernate Core matrix testing job is here; currently, we run the Hibernate tests on:
- DB2 v9.7
- Oracle 11gR1
- Oracle 11gR1 RAC
- Oracle 11gR2
- Oracle 11gR2 RAC
- MySQL 5.1
- SQL Server 2008 R1
- SQL Server 2008 R2
- PostgreSQL 8.4
- Sybase ASE 15.5
We chose these DBs since they are widely used and they are supported by JBoss products, so the JBoss QA team maintains them for us; we (the Hibernate team) do not have the resources/time to maintain DB instances for testing.
For other DBs (and vendors): if you'd like us to run the Hibernate tests on your DB, we are happy to add it to our matrix testing job, provided you give us the DB connection info and maintain the DB instance yourself.
BTW, we're working hard to get all failures fixed before Hibernate Core 4.0 Final release, so, any help would be appreciated :D
I saw someone asking what's new in Hibernate Core 4.0, so I took some time to try to summarize it here.
first of all, please see:
- the migration guide for 4.0
- JIRA filter link which lists all improvements and new features in Hibernate Core 4.0.0, you can get all details from this link :)
- move to gradle for builds
- Redesign SessionFactory building
- Introduction of services (see this for more details)
- Improved metamodel (not in 4.0.0.Final yet; we planned this, but the tasks turned out to be more than we expected and it would have taken too long to get 4.0 out, so we decided to move it out of 4.0.0.Final into an upcoming release; see this for more details, and this is a design document)
- Initial osgi-fication by package splitting (public, internal, spi)
- Support for multi-tenant databases (see this for more details)
- Migration to i18n logging framework (using jboss logging)
- JDK 1.6 (JDBC4) as baseline
- and more (I can't remember all the things :)
Hibernate Core 4.0.0.CR2 has just been released. The complete list of its changes can be found in the JIRA release notes.
- HHH-6586 - As Steve said in this post, we will continue new-metamodel development after the 4.0.0 release. We think the deprecation of Configuration may confuse the community, so in this issue we removed the deprecation tag from this class, and we will continue to support this API until we get the new metamodel ready, at which point this class will become deprecated again and scheduled for removal in 5.0.
- HHH-6622 - Upgrade to Hibernate Commons Annotations 4.0.0.CR2; we have now fully moved to JBoss Logging, and the slf4j-api dependency is not required anymore.
- HHH-6618 - We have written a new Gradle plugin which can be used to run the Hibernate functional tests on different DBs besides H2; this is useful for community contributors. See hibernate-core/buildSrc/readme.txt for more details.
|
OPCFW_CODE
|
Java Servlet with Multi-threading
I am trying to create multiple output text data files based on the data present in the servlet request. The constraints to my servlet are that:
My servlet waits for enough requests to hit a threshold (for example 20 names in a file) before producing a file
Otherwise it will timeout after a minute and produce a file
The code I have written is such that:
doGet is not synchronized
Within doGet I am creating a new thread pool (the reason being that the application calling my servlet would not send the next request until my servlet returns a response, so I validate the request and return an instant acknowledgement to get new requests)
Pass over all request data to the thread created in a new thread pool
Invoke synchronized function to do thread counting and file printing
I am using wait(60000). The problem is that the code produces files with the correct threshold (of names) within a minute, but after the one-minute timeout, a very few of the files produced have their capacity exceeded, for example containing more names than what I have defined as the capacity.
I think it has something to do with the threads causing an issue when they wake up?
My code is
if (!hashmap_dob.containsKey(key)) {
    request_count = 0;
    hashmap_count.put(key, Integer.toString(request_count));
    sb1 = new StringBuilder();
    sb2 = new StringBuilder();
    sb3 = new StringBuilder();
    hashmap_dob.put(key, sb1);
    hashmap_firstname.put(key, sb2);
    hashmap_surname.put(key, sb3);
}
if (hashmap_dob.containsKey(key)) {
    request_count = Integer.parseInt(hashmap_count.get(key));
    request_count++;
    hashmap_count.put(key, Integer.toString(request_count));
    hashmap_filehasbeenprinted.put(key, Boolean.toString(fileHasBeenPrinted));
}
hashmap_dob.get(key).append(dateofbirth + "-");
hashmap_firstname.get(key).append(firstName + "-");
hashmap_surname.get(key).append(surname + "-");
if (hashmap_count.get(key).equals(capacity)) {
    request_count = 0;
    dob = hashmap_dob.get(key).toString();
    firstname = hashmap_firstname.get(key).toString();
    surname = hashmap_surname.get(key).toString();
    produceFile(required String parameters for file printing);
    fileHasBeenPrinted = true;
    sb1 = new StringBuilder();
    sb2 = new StringBuilder();
    sb3 = new StringBuilder();
    hashmap_dob.put(key, sb1);
    hashmap_firstname.put(key, sb2);
    hashmap_surname.put(key, sb3);
    hashmap_count.put(key, Integer.toString(request_count));
    hashmap_filehasbeenprinted.put(key, Boolean.toString(fileHasBeenPrinted));
}
try {
    wait(Long.parseLong(listenerWaitingTime));
} catch (InterruptedException ie) {
    System.out.println("Thread interrupted from wait");
}
if (hashmap_filehasbeenprinted.get(key).equals("false")) {
    dob = hashmap_dob.get(key).toString();
    firstname = hashmap_firstname.get(key).toString();
    surname = hashmap_surname.get(key).toString();
    produceFile(required String parameters for file printing);
    sb1 = new StringBuilder();
    sb2 = new StringBuilder();
    sb3 = new StringBuilder();
    hashmap_dob.put(key, sb1);
    hashmap_firstname.put(key, sb2);
    hashmap_surname.put(key, sb3);
    fileHasBeenPrinted = true;
    request_count = 0;
    hashmap_filehasbeenprinted.put(key, Boolean.toString(fileHasBeenPrinted));
    hashmap_count.put(key, Integer.toString(request_count));
}
If you have got to here, then thank you for reading my question, and thanks in advance if you have any thoughts on it towards resolution!
FYI - Camel casing is the generally accepted naming convention for Java variables.
Thanks for the reminder Robin
I didn't look at your code but I find your approach pretty complicated. Try this instead:
Create a BlockingQueue for the data to work on.
In the servlet, put the data into a queue and return.
Create a single worker thread at startup which pulls data from the queue with a timeout of 60 seconds and collects them in a list.
If the list has enough elements or when a timeout occurs, write a new file.
Create the thread and the queue in a ServletContextListener. Interrupt the thread to stop it. In the thread, flush the last remaining items to the file when you receive an InterruptedException while waiting on the queue.
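For what it's worth, here is a minimal sketch of that setup. It is an illustration only: the class names (NameRecord, FileFlushWorker, BatchContextListener), the queue size, and the threshold of 20 are assumptions, produceFile is left as a stub, and the 60-second window restarts after each received record, which only approximates the asker's one-minute rule.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

class NameRecord {
    final String dateOfBirth, firstName, surname;
    NameRecord(String dateOfBirth, String firstName, String surname) {
        this.dateOfBirth = dateOfBirth;
        this.firstName = firstName;
        this.surname = surname;
    }
}

class FileFlushWorker implements Runnable {
    private static final int CAPACITY = 20;          // assumed threshold
    private final BlockingQueue<NameRecord> queue;
    FileFlushWorker(BlockingQueue<NameRecord> queue) { this.queue = queue; }

    public void run() {
        List<NameRecord> batch = new ArrayList<NameRecord>();
        try {
            while (!Thread.currentThread().isInterrupted()) {
                // Wait up to 60 s for the next record; null signals a timeout.
                NameRecord record = queue.poll(60, TimeUnit.SECONDS);
                if (record != null) batch.add(record);
                if (batch.size() >= CAPACITY || (record == null && !batch.isEmpty())) {
                    produceFile(batch);              // single-threaded, no locking needed
                    batch.clear();
                }
            }
        } catch (InterruptedException ie) {
            if (!batch.isEmpty()) produceFile(batch); // final flush on shutdown
        }
    }

    private void produceFile(List<NameRecord> batch) {
        // write one output file from the collected records
    }
}

public class BatchContextListener implements ServletContextListener {
    private Thread worker;

    public void contextInitialized(ServletContextEvent sce) {
        BlockingQueue<NameRecord> queue = new ArrayBlockingQueue<NameRecord>(1000);
        sce.getServletContext().setAttribute("nameQueue", queue);
        worker = new Thread(new FileFlushWorker(queue), "file-flush-worker");
        worker.start();
    }

    public void contextDestroyed(ServletContextEvent sce) {
        worker.interrupt(); // triggers the final flush above
    }
}

In doGet, the servlet then only validates, enqueues and acknowledges, e.g. queue.offer(new NameRecord(dateofbirth, firstName, surname)); after fetching the queue from the servlet context.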
Nice approach. But this will require an additional background thread running to monitor the BlockingQueue. The servlet will exit once the request is completed. So how do you suggest monitoring and acting on the BlockingQueue within the web application itself?
Create the thread + queue in a ServletContextListener. That way, you can also cleanly stop the thread.
My guess is that he is writing his own server, so he won't use Java EE and ServletContextListener. But I do think he should use a BlockingQueue and a worker thread(s).
I am using Java EE with Tomcat and I agree that ArrayBlockingQueue might be the right solution. So now I will try that; thanks guys for your precious comments.
As per my understanding, you want to create/produce a new file in two situations:
The number of requests hits a predefined threshold.
The threshold timeout elapses.
I would suggest the following:
Use an APPLICATION-scoped variable requestMap containing the HttpServletRequest objects.
On every servlet hit, just add the received request to the map.
Now create a listener/filter requestMonitor, whatever is suitable, to monitor the values of requestMap.
RequestMonitor should check whether requestMap has grown to the predefined threshold.
If it has not, it should allow the servlet to add the request object.
If it has, it should print the file, empty requestMap, then allow the servlet to add the next request.
For the timeout, you can check when the last file was produced with a LAST_FILE_PRODUCED variable in APPLICATION scope. This should be updated every time a file is produced.
I tried to read your code, but there is a lot of information missing, so if you could please give more details:
1) the indentation is messed up and I'm not sure if there were some mistakes introduced when you copied your code.
2) What is the code you are posting? The code that is called on some other thread started by doGet?
3) Maybe you could also add the variable declarations. Are those thread safe types (ConcurrentHashMap)?
4) I'm not sure we have all the information about fileHasBeenPrinted. Also it seems to be a Boolean, which is not thread safe.
5) you talk about "synchronized" functions, but you did not include those.
EDIT:
If the code you copied is a synchronized method, that means if you have many requests, only one of them only ever runs at a given time. The 60 seconds waiting is always invoked it seems (it is not quite clear with the indentation, but I think there is always a 60 seconds wait, whether the file is written or not). So you lock the synchronized method for 60 seconds before another thread (request) can be processed. That could explain why you are not writing the file after 20 requests, since more than 20 requests can arrive within 60 seconds.
1 - I manually tried to indent it and had changed the variable names from the original code due to data sensitivity. Clearly my indentation has not worked out well, so sorry about that.
2 - This is code from within my synchronized method where I count the threshold and then print the file. This method is called from doGet. However, as I explained earlier, I have created a new pool of threads from doGet, and from a thread of that pool this method is being invoked.
3 - The code posted is within a synchronized method, but I have initiated my hashmaps at global level. I am not using ConcurrentHashMaps; is this something which would be causing a problem? Please note that I only make changes to the hashmaps within the synchronized method, so I believe using ConcurrentHashMap would not make any difference?
4 - You are correct about fileHasBeenPrinted being a Boolean. Again, I agree it's not thread safe, but because any changes I make to fileHasBeenPrinted are within the synchronized method (only the declaration is at global level), I think it would not be an issue, but again I might be wrong. Please let me know your thoughts!
5 - I hope this has now been answered as the code is an extract from the synchronized method. Just to give more information, the produceFile method within the extract is also synchronized. thanks
Also, I wanted the hashmap entries, i.e. the string builders where I append the data and my threshold counter, to be modified by each thread entering the synchronized method.
Reply to EDIT: You are absolutely correct in saying that every request waits 60 seconds except the one which hits the threshold. This is how I have the requirements for the design. Please note that the files are produced as per the threshold. The problem is only with files which get produced after the timeout of a minute.
|
STACK_EXCHANGE
|
Oxford students going to Berkeley
The following groups at Berkeley would be interested in hosting students:
Chris Chang: http://chemistry.berkeley.edu/faculty/chem/chris-chang
Michelle Chang: http://chemistry.berkeley.edu/faculty/chem/michelle-chang
Jennifer Doudna: http://chemistry.berkeley.edu/faculty/chem/doudna
Matt Francis: https://chemistry.berkeley.edu/faculty/chem/francis
Jay Groves: https://chemistry.berkeley.edu/faculty/chem/groves
Michael Marletta: https://chemistry.berkeley.edu/faculty/chem/marletta
Evan Miller: http://chemistry.berkeley.edu/faculty/chem/evan-miller
Dan Nomura: https://chemistry.berkeley.edu/faculty/chem/nomura
Jonathan Rittle: https://chemistry.berkeley.edu/faculty/chem/rittle
Alanna Schepartz: https://chemistry.berkeley.edu/faculty/chem/schepartz
Ke Xu: http://chemistry.berkeley.edu/faculty/chem/xu
Oxford students can for example conduct their Part II at Berkeley. They will need to seek faculty approval, which is normally achieved by submitting a project outline and the name of a co-supervisor to Nina Jupp in the Faculty office, by the end of week 3, Hilary term. The project will be assessed by the chairman of the Chemistry Teaching Committee, who must approve both project and co-supervisor before work can start.
Professors Ben Davis and Hagan Bayley are contact supervisors who may be able to give some more specific advice.
Additional information about the Berkeley-Oxford exchanges is available from Ari Razavi (the program coordinator for the Chemical Biology Graduate Program, USA) and from Prof Chris Chang (the faculty director of this program and Vice Chair of Chemical Biology in the department, USA). Chris suggests that, if needed, you can also email both of them with logistical questions and they can help match you with faculty. Once that is set, the faculty and their group admins can help with visas, etc.
Berkeley students going to Oxford
The following groups at Oxford would be interested in hosting students:
Hagan Bayley: http://research.chem.ox.ac.uk/hagan-bayley.aspx
Tom Brown: http://research.chem.ox.ac.uk/professor-tom-brown.aspx
Ben Davis: http://research.chem.ox.ac.uk/ben-davis.aspx
Philipp Kukura: http://research.chem.ox.ac.uk/philipp-kukura.aspx
Chris Schofield: http://research.chem.ox.ac.uk/christopher-schofield.aspx
UC Berkeley students can apply to study abroad through the site below.
Part II Berkeley Programme
- Part II student enquires by email about whether Berkeley supervisor will accept them
- Together Berkeley supervisor and student write 250-word description of project
- The project must be submitted to Nina Jupp by 3rd week of HT for assessment by the Faculty Office.
- Ben Davis and/or Hagan Bayley are named as co-supervisors in Oxford to act as coordinators of exam process.
- students should start at the same time, roughly follow Oxford Part II terms as they see fit, and submit theses to the same submission dates and standards;
- the department cannot advise on university or college fees (since these are processed by colleges) or on costs for additional expenses (fees, living costs etc. at Berkeley). Berkeley and certain college schemes might be able to help with funding upon discussion;
- the contacted supervisor at Berkeley may also provide advice on accommodation, but the students might also be expected to use their initiative;
- delineating a project in conjunction with the Berkeley supervisor, processed as agreed under the departmental process (see above), is the nub of the process.
|
OPCFW_CODE
|
I know what you are thinking: the word discount doesn't go along with designer handbags. I am here to tell you that yes, it can, and when you shop right, it does. Handbags have been my passion for years. I was a single woman for a very long time (into my late 40's) so I had nobody discouraging me from spending money buying designer handbags. However, buying designer handbags can be an expensive pastime. If I wanted to indulge my passion I had to learn how to find and take advantage of buys on discount designer handbags.
Now, by discount designer handbags I mean Ferragamo, Prada, Balenciaga and more. The best way to find great buys on designer handbags is by watching end-of-season sales offered by the big high-end fashion sellers online. salvatore ferragamo mens belt is how I've gotten handbags from Zac Posen, Fendi and Tod for as much as 50% off. You may also find handbags from designers like Dior, Versace, Derek Lamb and Mark Jacobs. These are very high-end designers. Another way I save money on designer handbags is to buy handbags from lesser-known designers.
Just because you don't pay $1000, $2000 or more for a handbag, that does not mean it isn't a high-quality, well-made bag. I have started buying what are known as celebrity handbags. These are very trendy handbags based on handbags that the Hollywood celebrities carry, but with a twist. The twist is that manufacturers take the best design elements from the original handbag and work them into a brand-new handbag. What this means is that you get all the best of the expensive original design at a literal fraction of the price of the original.
ferragamo loafers are rapidly becoming the most popular type of handbags on the market today. Price is not the only reason, but it is a big one. The manufacturers of these high-quality trend handbags use quality faux leather, and in some cases real leather, in their designs. They use quality hardware and pay attention to details like stitching, which means their handbags aren't only a bargain; they look like they cost much more than they actually do.
What kind of handbag design are you looking for? Are you in the market for a messenger bag or a satchel? Are you looking for an evening clutch or a shoulder bag? You are guaranteed to find any one of these trendy handbag styles online. Remember, look at the sales at high-end online fashion sellers and also check out the great trendy handbags based on celebrity styles.
|
OPCFW_CODE
|
Archive (http, zip)
This command instructs pgloader to load data from one or more files contained in an archive. Currently the only supported archive format is ZIP, and the archive might be downloaded from an HTTP URL.
Using advanced options and a load command file
The command then would be:
$ pgloader archive.load
And the contents of the archive.load file could be inspired by the following:
LOAD ARCHIVE
   FROM /Users/dim/Downloads/GeoLiteCity-latest.zip
   INTO postgresql:///ip4r

   BEFORE LOAD DO
     $$ create extension if not exists ip4r; $$,
     $$ create schema if not exists geolite; $$,

   EXECUTE 'geolite.sql'

   LOAD CSV
        FROM FILENAME MATCHING ~/GeoLiteCity-Location.csv/
             WITH ENCODING iso-8859-1
             (
                locId,
                country,
                region null if blanks,
                city null if blanks,
                postalCode null if blanks,
                latitude,
                longitude,
                metroCode null if blanks,
                areaCode null if blanks
             )
        INTO postgresql:///ip4r?geolite.location
             (
                locid, country, region, city, postalCode,
                location point using (format nil "(~a,~a)" longitude latitude),
                metroCode, areaCode
             )
        WITH skip header = 2,
             fields optionally enclosed by '"',
             fields escaped by double-quote,
             fields terminated by ','

   AND LOAD CSV
        FROM FILENAME MATCHING ~/GeoLiteCity-Blocks.csv/
             WITH ENCODING iso-8859-1
             (
                startIpNum, endIpNum, locId
             )
        INTO postgresql:///ip4r?geolite.blocks
             (
                iprange ip4r using (ip-range startIpNum endIpNum),
                locId
             )
        WITH skip header = 2,
             fields optionally enclosed by '"',
             fields escaped by double-quote,
             fields terminated by ','

   FINALLY DO
     $$ create index blocks_ip4r_idx on geolite.blocks using gist(iprange); $$;
Archive Source Specification: FROM
Filename or HTTP URI where to load the data from. When given an HTTP URL the linked file will get downloaded locally before processing.
If the file is a zip file, the command line utility unzip is used to expand the archive into files in $TMPDIR, or /tmp if $TMPDIR is unset or set to a non-existing directory.
Then the following commands are used from the top level directory where the archive has been expanded.
Archive Sub Commands
command [ AND command … ]
A series of commands against the contents of the archive; at the moment only CSV, FIXED and DBF commands are supported.
Note that commands are supporting the clause FROM FILENAME MATCHING which allows the pgloader command not to depend on the exact names of the archive directories.
The same clause can also be applied to several files by using the spelling FROM ALL FILENAMES MATCHING and a regular expression.
The whole matching clause must follow this rule:
FROM [ ALL FILENAMES | [ FIRST ] FILENAME ] MATCHING
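For example, a hypothetical pattern reusing the GeoLite file names from the sample above could match both CSV files at once:

FROM ALL FILENAMES MATCHING ~/GeoLiteCity-.*[.]csv/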
Archive Final SQL Commands
SQL Queries to run once the data is loaded, such as CREATE INDEX.
|
OPCFW_CODE
|
Just a comment - one annoying thing I found last night when setting up Pitbull on my screen...
This is a product of frames having a default height, and then health/mana/etc. bars not having a set height but a variable one (based on how tall the entire frame is). I was setting up frames like party targets, party pets, etc. and I would remove the cast bar and mana bar and then be left with this HUGE health bar that I needed to resize. And I almost thought of setting up raid frames, then realized that I would have had to go through the same hoops for those as well. The Buu layout needs a lot of resizing for each type of frame to make it work as well.
I hope that all made sense - not feeling 100% atm. :)
when i Target a friendly, the buffs/debuffs are on the Bottom of the frame like i like.
when i Target a hostile, the debuffs are to the Right of the frame.
how do i make all the buffs on my Target frame no matter the class/hostility be on the Bottom?
I am finding something similar with my buffs. Sometimes they will go to the right of the target frame even though I have them set at the bottom. If I go into the Aura configuration for Target frame and uncheck and recheck Split the frames reset and the buffs go back to the bottom of the frame.
for some reason Layout Positioning or Layout Settings isnt always showing up in my menu. doesnt have anything to do with being in or out of combat either; i look for it and find it like once in 20 times. i can set it, and go back and it's disappeared immediately. really weird
anyway, with this Layout Settings option, i moved the debuffs back to the bottom, like it was before i updated.
If the party/raid/solo configuration dummy auras aren't going to be brought back to appear by default in configuration mode, could there at least be an option to toggle them for each frame?
If a toggle isn't going to be added, would it be possible for me to insert or edit some bit of the code so they'd at least appear on my copy? I'd just use an older version that had the dummy auras enabled, but I don't want to miss out on any of the great things that keep on being added (5 man raids displayed as a party? Handy!)
I tried PitBull yesterday and I really think it's a great addon with easy configuration and everything.
Previously I used ag_UnitFrames with my own layout, but I switched to Nurfed Unitframes because the text in the different layouts got messed up somehow. I'm not sure if it's an SVN problem, but it looks like text wasn't always messed up like that since there are some quite large spaces between some elements.
Anyway, I was wondering if there was any chance or possibility to get this fixed, or help me fix it if it's a client-side problem. I am not sure if I can make my own layout for PitBull yet, but if I can, I'd like to :)
I love these unit frames, although I've had some trouble with them.
1. Ghost frames - Although I have raid/raidtarget/focus/focustarget/pet/pettarget/partypet/partypettargets... etc... all hidden, and they are indeed not visible, there is a ghost-type frame: when i click where that frame would be if it were shown, it acts like it's there. I often accidentally switch my target in bgs and arenas, which is getting really annoying.
2. Raid frames - totally unusable as they show up all stacked on top of each other and cannot move them.
3. Mouse function - When I am in a raid, especially arena or bg, none of my mouse button's work. I cannot left click, right click, middle click, or use either of my thumb buttons, while in a raid.
I used ag before and did not have any of these problems and I've narrowed these down to pitbull by disabling all other mods installed. I really, really like these unitframes and they are coming together quite nice!! Keep it up! If anyone has any ideas on these problems let me know.
Quick question I wanted to ask. I apologize in advance if this is somewhere that I have not looked yet. I love the addon, just installed it yesterday and I am liking it a lot. One thing I need to know is, how do I add an experience bar to my player frame? It's kind of annoying not having one while I am leveling my alt, so yeah, just asking :)
I like PitBull so far. But I see a problem in it. I think there's wasted space..
What I mean is there are elements on the frames which aren't being used to their potential. The addon seems to want to color the health bar for everything, from class to reaction. I know this can be turned off, but for those of us who want the display of such things without using the health bar or Dog Tags, we're sort of stuck.
I think if PitBull made use of the frame's border for the display of things like class color, difficulty and especially for pet happiness and aggro alerts (although I realize PitBull_Banzai isn't part of the default addon) it would help a lot. Right now, the borders are sitting there doing absolutely nothing when they could be adding very useful warnings to the frame, instead of wasting the health bar color, which is better left untouched as some people prefer to see it fade in color as health depletes.
Just a thought. It would make this addon user very happy to see aggro coloring and pet happiness on his frames' borders.
I don't know if this is a bug or not (you may be able to disable it from the options) but in addition to having my buffs / debuffs around my player frame they also still appear in default Blizzard UI position (top right). Is there a way to fix this? The only solution I have found is DLing a buff mod (Buffalo) and just hiding buffs / debuffs just so that it hides the Blizzard UI ones.
I set up a bunch of options for PitBull in the Default profile.
went to an alt, changed to use the Character's profile
then told it to Copy From Default profile and got this error:
PitBull-r32699\Portrait\Portrait.lua:90: attempt to call method 'SetTexCoord' (a nil value)
and then most of the unitframes went blank/weirdcolors. i did a ui reload and things SEEM to work now, and have the same settings that I wanted copied over from the Default profile. just wanted to let you know about this bug :)
Using Color health by class does nothing for pets (party pets and raid pets, maybe player pet too), maybe it could color them the same color as their owner ? (purple for warlock pets, green for hunter pets, blue for mage pets etc...)
I skimmed the 20 or so pages of this thread looking for an answer to my following question. If I missed the answer, forgive me.
Is there a way to give frames identifiable names? I have a click casting add-on and I want to be able to right-click a hostile focus frame and polymorph it. I don't want to override all the other unit frames though because I like the default menu that shows on right click. I don't care about any menu for the focus frame though, so this would be handy to re-sheep without losing target.
I saw that PitBull creates and recycles unit frames, so this may not be possible.
Just switched from aguf to Pitbull and really like this one.
One short question, i hope anyone can answer me:
With aguf i had in raids only 1 frame (groupfilter: 1,2,3,4,5,6,7,8) and this frame was sorted by class. I loved this because of having always a great quick overview of the raid.
It looked like this:
Is there any chance for me, to implement this in Pitbull?
|
OPCFW_CODE
|
[conspire] Re: 3.1_r0a i386 iso-cd files available on 2 DVD's
daniel at gimpelevich.san-francisco.ca.us
Fri Jun 17 18:16:56 PDT 2005
On Fri, 17 Jun 2005 16:00:24 -0700, Rick Moen wrote:
> Quoting jtav (jtav at indiatimes.com):
>> somebody can tell me how I can un-install linux and shrank the linux
>> two partitions to one, as was before?
> Jose, the easiest way to "un-install Linux" is to just install an
> operating system onto your hard disk and tell the installer to use
> all the space (or whatever portion of the space you have in mind).
> The wording of your question left it a bit unclear what specific
> situation you're in, and thus what problem you're trying to solve.
> You might want to just bring the system to SVLUG's installfest in
> Mountain View, tomorrow. See: http://www.svlug.org/installfest/
José called me up and was able to provide some more details. It seems the
disk had two partitions before, one for Win2k and one for Win98. At the
recent CABAL, Peter replaced the Win98 one with a huge swap partition and
a Linux partition onto which he installed Knoppix. Apparently something
about the installation wasn't right, because it would start up and shut
down at a snail's pace even compared to running Knoppix from the CD on the
same machine. Rather than try to figure out what went wrong and fix it,
José opted to ditch the Linux that was installed. Here are the steps as I
outlined to him over the phone:
1) Boot Win2k and defrag the Win2k partition. This is to facilitate
resizing it if so desired.
2) Boot Knoppix from the CD.
3) Open a Terminal/Konsole window and type "sudo swapoff -a" to stop using
the swap partition that the Knoppix CD autodetected on the hard disk.
4) Choose "QTParted" from the appropriate menu in Knoppix.
5) Delete the swap partition and then the Linux partition.
6) Resize the remaining primary partition to the desired size.
7) Create a new logical partition to fill the remaining free space, and
format it as FAT32.
8) Save changes and exit.
9) Boot the desired Windows installer CD for installation to the new
partition, which should appear as the "D:" drive.
More information about the conspire mailing list
|
OPCFW_CODE
|
If you have problems with the getting started guide, note that there's a separate troubleshooting section for that.
Windows: missing winutils
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries
The winutils.exe executable cannot be found.
- Download hadoop winutils binaries (e.g https://github.com/cdarlint/winutils/archive/refs/heads/master.zip)
- Extract binaries for desired hadoop version into folder (e.g. hadoop-3.2.2\bin)
- Set the HADOOP_HOME environment variable (e.g. HADOOP_HOME=...\hadoop-3.2.2). Note that the binary files need to be located at %HADOOP_HOME%\bin!
- Add %HADOOP_HOME%\bin to PATH variable.
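For example, from a Windows command prompt (the install path below is just an illustration; use wherever you extracted the binaries):

rem current shell only:
set HADOOP_HOME=C:\tools\hadoop-3.2.2
set PATH=%PATH%;%HADOOP_HOME%\bin

rem or persist it for the user (takes effect in new shells):
setx HADOOP_HOME "C:\tools\hadoop-3.2.2"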
/tmp/hive is not writable
RuntimeException: Error while running command to get file permissions
Fix: change into %HADOOP_HOME%\bin and execute winutils chmod 777 /tmp/hive.
Windows: winutils.exe is not working correctly
winutils.exe - System Error The code execution cannot proceed because MSVCR100.dll was not found. Reinstalling the program may fix this problem.
Other errors are also possible:
- Similar error message when double clicking on winutils.exe (Popup)
- Errors when providing a path to the configuration instead of a single configuration file
- ExitCodeException exitCode=-1073741515 when executing SDL even though everything ran without errors
Install VC++ Redistributable Package from Microsoft:
Java IllegalAccessError (Java 17)
Symptom: Starting an SDLB pipeline fails with the following exception:
java.lang.IllegalAccessError: class org.apache.spark.storage.StorageUtils$ (in unnamed module @0x343570b7) cannot access class sun.nio.ch.DirectBuffer (in module java.base) because module java.base does not export sun.nio.ch to unnamed module @0x343570b7
Java 17 is more restrictive regarding the usage of module exports, and unfortunately Spark uses classes from unexported packages. Packages can be exported manually: to fix the above exception, add --add-exports java.base/sun.nio.ch=ALL-UNNAMED to the java command line; see also Stackoverflow.
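For illustration, an invocation could then look like this (the jar and main class are placeholders, not the actual SDLB artifact names):

java --add-exports java.base/sun.nio.ch=ALL-UNNAMED \
     -cp "app.jar:lib/*" com.example.PipelineMain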
Resources not copied
Tests fail due to missing or outdated resources or the execution starts but can not find the feeds specified. IntelliJ might not copy the resource files to the target directory.
Execute the Maven goal mvn resources:resources manually after you change any resource file.
Maven compile error: tools.jar
Could not find artifact jdk.tools:jdk.tools:jar:1.7 at specified path ...
Hadoop/Spark has a dependency on the tools.jar file which is installed as part of the JDK installation.
- Your system does not have a JDK installed (only a JRE).
- Fix: Make sure a JDK is installed and your PATH and JAVA_HOME environment variables are pointing to the JDK installation.
- You are using a Java 9 JDK or higher. The tools.jar has been removed in JDK 9. See: https://openjdk.java.net/jeps/220
- Fix: Downgrade your JDK to Java 8.
How can I test Hadoop / HDFS locally ?
For local:// URIs, file permissions on Windows, or certain actions, local Hadoop binaries are required.
- Download your desired Apache Hadoop binary release from https://hadoop.apache.org/releases.html.
- Extract the contents of the Hadoop distribution archive to a location of your choice.
- Set the HADOOP_HOME environment variable to that location.
- Windows only: Download a Hadoop winutils distribution corresponding to your Hadoop version from https://github.com/steveloughran/winutils (for newer Hadoop releases at: https://github.com/cdarlint/winutils) and extract the contents to %HADOOP_HOME%\bin.
|
OPCFW_CODE
|
Sentential logic (also called propositional logic) is logic that includes sentence letters (A,B,C) and logical connectives, but not quantifiers. The semantics of sentential logic uses truth assignments to the letters to determine whether a compound propositional sentence is true.
- 4.1: Why Another Deductive Logic?
- In his own time, in ancient Greece, Aristotle’s system had a rival—the logic of the Stoic school, culminating in the work of Chrysippus. Recall, for Aristotle, the fundamental logical unit was the class; and since terms pick out classes, his logic is often referred to as a “term logic”. For the Stoics, the fundamental logical unit was the proposition; since sentences pick out propositions, we could call this a “sentential logic”.
- 4.2: Syntax of Sentential Logic
- First, we cover syntax. This discussion will give us some clues as to the relationship between Sentential Logic and English, but a full accounting of that relationship will have to wait, as we said, for the discussion of semantics.
- 4.3: Semantics of Sentential Logic
- While the semantics for a natural language like English is complicated (What is the meaning of a sentence? Its truth-conditions? The proposition expressed? Are those two things the same? Is it something else entirely? Ugh.), the semantics for SL sentences is simple: all we care about is truth-value. A sentence in SL can have one of two semantic values: true or false. That’s it.
- 4.4: Translating from English to Sentential Logic
- In real life, though, we’re not interested in evaluating arguments in some artificial language; we’re interested in evaluating arguments presented in natural languages like English. So in order for our evaluative procedure of SL argument to have any real-world significance, we need to show how SL arguments can be fair representations of natural-language counterparts. We need to show how to translate sentences in English into Sentential Logic.
- 4.5: Testing the Validity of Sentential Logic
- Having dealt with the task of taming natural language, we are finally in a position to complete the second and third steps of building a logic: defining logical form and developing a test for validity. The test will involve applying skills that we’ve already learned: setting up truth tables and computing the truth-values of compounds. First, we must define logical form in SL.
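As a small preview of what section 4.5 involves (a generic textbook example, not one of this chapter's exercises), a truth table simply lists every truth assignment to the sentence letters and computes the compound's value on each row; for conjunction:

\[
\begin{array}{cc|c}
A & B & A \wedge B \\ \hline
T & T & T \\
T & F & F \\
F & T & F \\
F & F & F
\end{array}
\]

An SL argument is then valid just in case no row makes all the premises true and the conclusion false.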
Thumbnail: Chrysippus was a member of the Stoic school of philosophy and believed the fundamental logical unit was the proposition, which is the basis of “sentential logic”. Bust of Chrysippus, Uffizi Gallery, Florence (CC BY-SA 4.0; Livioandronico2013 via Wikipedia).
|
OPCFW_CODE
|
Table of Contents
Are you tired of dealing with duplicate transactions or lost data in your cryptocurrency network? Do you wish there was a way to ensure that each node on the network has a unique identifier to prevent these issues?
Look no further than Crypto Node UUIDs. These universally unique identifiers are generated specifically for cryptocurrency nodes and can help solve many of the problems associated with identifying and tracking transactions.
By implementing Crypto Node UUIDs, you can be confident that each node on your blockchain network is easily identifiable and distinguishable from others. This not only helps prevent errors but also improves the overall efficiency of your system.
With this technology, you can rest assured that all transactions are correctly processed and tracked without any confusion or loss of information. So why wait? Learn more about how Crypto Node UUIDs work and how they could benefit your cryptocurrency network today!
- Crypto Node UUIDs are unique identifiers used to prevent duplicate transactions and lost data in cryptocurrency nodes.
- The use of cryptographic hashing algorithms and random number generation ensures each node is easily identifiable and distinguishable, improving efficiency and security.
- Proper node identification and validation is crucial in maintaining network security and preventing issues such as double-spending or data manipulation within the blockchain ledger.
- Implementing crypto node UUIDs allows for easy identification and tracking of nodes within a network, as well as various use cases such as transaction tracing and data analysis.
The Need for Unique Identifiers in Blockchain Networks
Without unique identifiers, blockchain networks would be like a chaotic sea of indistinguishable transactions, making it impossible to keep track of anything! Imagine trying to find one specific transaction among millions without any way to differentiate it from the rest.
This is why universally unique identifiers (UUIDs) are crucial in cryptocurrency nodes. The importance of decentralization in blockchain networks cannot be overstated. Without a centralized authority controlling the network, there needs to be a way for each node to identify and verify transactions.
UUIDs provide this necessary functionality by creating a unique identifier for each transaction that can be easily tracked and verified across the entire network. Additionally, UUIDs have use cases beyond cryptocurrency, such as tracking supply chain logistics or securing data storage on decentralized networks.
What are Crypto Node UUIDs?
You might be wondering what makes your cryptocurrency transactions and interactions within the network so secure and distinct from others. Well, one of the factors that contribute to this is the use of Crypto Node UUIDs.
These are Universally Unique Identifiers that are generated by nodes in a blockchain network to identify themselves. UUID implementation is not something new as it has been used in other industries like software development for identifying data objects and hardware components.
However, its use in blockchain networks has proven to be more effective compared to traditional identification methods. This is because a Crypto Node UUID cannot feasibly be duplicated or forged, ensuring that every node on the network is unique and trustworthy.
It also ensures that there's no interference with the transactions made on the network, making it highly secure.
How are Crypto Node UUIDs Generated?
To generate Crypto Node UUIDs, you’ll need to understand how cryptographic hashing algorithms work. These algorithms are used to convert any input data into a fixed-size string of characters that is unique and cannot be reversed.
Random number generation is also important when generating these UUIDs, as they increase the level of uniqueness in the node identification process. By combining these two techniques, you can generate universally unique identifiers for your cryptocurrency nodes.
Cryptographic Hashing Algorithms
Using cryptographic hashing algorithms is essential for generating secure and unique identifiers in your cryptocurrency node. These algorithms are designed to take any input data, such as a string or file, and produce a fixed-length output known as a hash value.
Here are some important things you should know about these algorithms:
Cryptographic hashing algorithms use salting techniques to add additional random data to the input before generating the hash value. This makes it harder for attackers to guess the original input and generate the same hash value.
Collision resistance is also an important property of cryptographic hashing algorithms. This means that it’s extremely difficult for two different inputs to produce the same hash value.
Some popular cryptographic hashing algorithms used in cryptocurrency nodes include SHA-256, RIPEMD-160, and Scrypt.
When using these algorithms in your node, it’s important to ensure that they’re implemented correctly and securely to prevent any potential vulnerabilities or attacks on your system.
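As a hedged sketch of the general idea (not any particular network's scheme; the method and field names are invented for the example), salted hashing can be folded into a UUID-shaped identifier like this:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.UUID;

public class NodeId {

    /** Derive a deterministic identifier from node metadata plus a salt. */
    static UUID fromMetadata(String metadata, byte[] salt) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        sha256.update(salt); // salt first, so equal metadata still yields distinct hashes per salt
        byte[] digest = sha256.digest(metadata.getBytes(StandardCharsets.UTF_8));

        // Fold the first 16 of the 32 digest bytes into UUID form.
        long hi = 0, lo = 0;
        for (int i = 0; i < 8; i++)  hi = (hi << 8) | (digest[i] & 0xffL);
        for (int i = 8; i < 16; i++) lo = (lo << 8) | (digest[i] & 0xffL);
        return new UUID(hi, lo);
    }
}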
Random Number Generation
Random number generation is crucial for ensuring the security and unpredictability of various cryptographic processes. In the context of cryptocurrency nodes, generating random numbers is especially important when creating universally unique identifiers (UUIDs) for transactions or blocks.
Pseudorandom numbers, which are generated using mathematical algorithms, can provide a good level of randomness but can also be predictable if the algorithm used is known. This means that an attacker could potentially guess the values generated by a pseudorandom number generator and compromise the security of a system.
To increase randomness and reduce predictability, entropy sources can be used to generate truly random numbers. These sources include physical phenomena like atmospheric noise or radioactive decay, which are inherently unpredictable and cannot be easily replicated by an attacker. Cryptocurrency nodes often use multiple entropy sources to ensure that UUIDs are as unpredictable as possible.
However, it’s important to note that even with strong entropy sources, there is always some level of uncertainty in any random number generation process.
Now that you have an understanding of random number generation, let’s move on to the current subtopic: node identification.
As a cryptocurrency node operator, it’s important to generate and maintain a universally unique identifier (UUID) for your node. This UUID serves as a form of node validation, ensuring that other nodes within the network can properly identify and communicate with your node.
Node identification plays a crucial role in maintaining network security. By generating a unique UUID for your node, you’re minimizing the risk of potential attacks or malicious activity from unauthorized nodes within the network.
Additionally, having a proper system in place for validating nodes can help prevent issues such as double-spending or data manipulation within the blockchain ledger.
Overall, taking the necessary steps to ensure proper node identification and validation is essential for maintaining a secure and stable cryptocurrency network.
The Benefits of Crypto Node UUIDs
You’ll quickly discover the numerous benefits of incorporating crypto node UUIDs into your system.
One major advantage is that it allows for easy identification and tracking of nodes within a network. This means that you can easily monitor the performance of each node, detect any issues or errors, and even identify potential security breaches.
Another benefit is that crypto node UUIDs can be used in various use cases, such as transaction tracing and data analysis. By using UUIDs to track transactions, you can ensure that every transaction is accounted for and easily traceable if needed. Additionally, analyzing data with UUIDs allows for more accurate insights into user behavior and trends within the network.
Overall, implementing crypto node UUIDs can greatly enhance the functionality and security of your cryptocurrency system.
Future Implications and Integration of Crypto Node UUIDs
As you integrate UUIDs into your system, consider the future implications they may have on the security and efficiency of your network. With the rise of decentralized identity solutions, UUIDs can play a significant role in establishing secure and unique identities for users within cryptocurrency networks.
By utilizing UUIDs, users can authenticate their transactions without revealing sensitive personal information, creating a more private and secure transaction process. In addition to enhancing security, UUIDs also have potential implications in scalability solutions for cryptocurrency networks.
As blockchain technology continues to grow and expand, it’s imperative that networks are able to handle increasing numbers of transactions without sacrificing speed or efficiency. By implementing UUIDs as part of scalability solutions, networks can increase transaction speeds while maintaining data integrity and security.
Furthermore, with the ability to generate universally unique identifiers at scale, it becomes easier to track and manage large volumes of transactions within complex systems.
Frequently Asked Questions
Are Crypto Node UUIDs compatible with all types of blockchain networks?
To ensure UUID compatibility, it’s important to understand how different nodes generate UUIDs in cryptocurrency networks. Check if the blockchain network supports crypto node UUIDs before implementing them.
Can multiple nodes generate the same UUID?
If multiple nodes generate the same UUID, it’s called a UUID collision and can negatively impact network performance by causing conflicts. It’s crucial to ensure that each node generates a unique identifier.
How do Crypto Node UUIDs prevent fraud or malicious activity on the blockchain network?
To prevent double spending, crypto node UUIDs are implemented in decentralized finance applications. They ensure each transaction is unique and cannot be replicated, reducing the risk of fraud or malicious activity on the blockchain network.
Can Crypto Node UUIDs be used to track transactions or user activity on the network?
UUIDs in crypto nodes can be used to track transactions and user activity, compromising anonymity. There are regulatory considerations for using UUIDs for compliance and surveillance purposes, raising privacy concerns for users on the network.
Are there any potential drawbacks or limitations to using Crypto Node UUIDs in blockchain networks?
When using crypto node UUIDs in blockchain networks, there are potential downsides such as security concerns and implementation challenges. Trade-offs may include sacrificing performance impact for increased security measures or vice versa. Additionally, limitations may arise due to the complexity of generating unique IDs.
So there you have it, the importance of universally unique identifiers in cryptocurrency nodes and how they’re generated.
As blockchain technology continues to advance and be implemented in various industries, the need for reliable identification becomes more crucial. With crypto node UUIDs, users can ensure that their transactions are secure and verifiable, leading to greater trust in the network as a whole.
Looking towards the future, it’s likely that crypto node UUIDs will become even more integrated into blockchain networks as a standard practice. As the industry grows and evolves, identifying each node accurately and uniquely will remain a critical component of maintaining a secure and transparent system.
So keep an eye out for advancements in this space and remember the importance of choosing a strong UUID for your own cryptocurrency nodes!
|
OPCFW_CODE
|
I have to execute an INSERT INTO query using mysql or mysql2 in node.js. The values passed into the query use the spatial geometry function ST_GeomFromGeoJSON() on the geoMap column, which is of type GEOMETRY. Here is the simplified code: The above code does not work and throws the error Cannot get geometry object from data you send to the GEOMETRY
Insert with Foreign Key Contraints
In the above code, I want to add the distinct values from column make from vehicles into column makeID in makeModel but I get the error INSERT INTO makeModel (makeID) SELECT DISTINCT (make) FROM vehicle Error Code: 1452. Cannot add or update a child row: a foreign key constraint fails (`dealership`.`makemodel`, CONSTRAINT `makemodel_ibfk_2` FOREIGN KEY (`modelID`) REFERENCES `model` (`ID`)) Answer
strapi with mysql running simple raw sql
Need to run this query in my test and unable to find the right syntax. This is not postgresql, and mysql does not like hyphens in table names. Please suggest. Answer Simply don't use disallowed characters; it makes life much easier. The backticks around the String were implemented so that the user didn't need to escape single and
SQL query optimization for speed
So I was working on the problem of optimizing the following query I have already optimized this to the fullest from my side can this be further optimized? Answer Your query although joined ok, is an overall bloat. You are using the dim_ad_type table on the outside, just to make sure it exists on the inside as well. You have
How to select a nested item of json datatype?
I have a MySQL table named computers with a column named details that is of the json data type. I've inserted a value like this in that column: I can simply get the Chrome value with the following query: Now I want to know how I can get the 1680 value (which is the value of x)? Answer You can alter your
How to use INNER LEFT instead of EXCEPT?
I read that MySQL does not support EXCEPT, and the workaround is to use LEFT JOIN. THIS IS MY QUERY: Basically: Trying to find out the manufacturers that sell PCs but not laptops. How can I convert this query with the LEFT JOIN? I got confused.. Table Computers: Table Manufacturers Computers: So since Manufacturer ID number 1 sells Laptop the
SQL count() not showing values of 0
i just started my journey with SQL, and made some tables of Cyclists, and Cycling Teams. Cyclist’s table contains columns: ID, Name, Team (which is foreign key of TEAMS ID) Team’s table contains columns: ID, Name, Number of Cyclists I want to Count number of Cyclists in each team, by using count() function ( Or basically any function, i just
How to push value selected from db as first if it is equal with value in other column?
I am building an autocomplete query system for cities and I have a problem: I want to prioritize the main city of a region over the other results shown for an exact phrase, for ex. “%Trenc%”. If the city and region are the same, I would like to place it as the first result and then everything else. Trencin Trenc My idea is to create another column in
Is such a result possible with a query from an SQL database?
I want to fire a query to get such a result: Tables Schema I guess it’s not possible like that? I don’t have any experience with text-based databases, but I can well imagine that this can be achieved with a MongoDB. Because ultimately I want to have a js object at the end of the day. Answer Here’s an example
Calculating Value Count and Percentage
I have a table Currently Enrolled. The table is basically to get an idea of how many supporters, undecided, and opposition there were. Then once I get the count I wanted to do another calculation to find out what the percentage was. Essentially what I want to be able to do is: Count the total number of supporters: SELECT
|
OPCFW_CODE
|
- Suggestion of a base set of packages to provide security fixes.
- theraven suggests branching will help with security fixes.
- bapt says we'll look at this after we have some experience with pkg sets.
- proposal for large scale renaming and splitting of ports categories to make it easier for users to find things
- Significant costs associated with moving most packages in svn
- discussion of alternative metadata to provide similar views
- discussion of issues of naming packages in a world without categories
- Further discussion of how to deal with things like llvm and subversion that have multiple versions
ports with clang and libc++
- Things largely work with clang, mostly ignoring CC, CXX, etc
- No exp run yet for libc++. Cross threading of C++ std libraries will often be a problem that we're already hitting with gcc46.
provides / requires
- implemented in next version of pkgng
- Ports declare things like: "I provide an http server" or "I need a web server with php"
Allow ports to depend on either libjpeg or libjpegturbo, on perl > 5.12, etc
- Jonathan raises the issue of C++ libraries with/without RTTI
- Flavors suggested as a solution, but not clear everything required is supported
- Forcing rpaths or adding per-port /usr/local/etc/libmap.d (coming to HEAD soon) or similar
cross building ports
- bapt says: pkgsrc cross builds mostly don't work
- sson's qemu-usermode is the future. mips64 working, arm nearly there.
compiler selection infrastructure
- Allow user to pick their favorite default compiler
- Porter should be able to specify the supported compiler(s) for their port
- Brooks requests the ability to specify an external compiler as the default ports compiler
- bapt says we could add the compiler used as a package annotation so we can examine that in a pkg set.
- New solver requires a new INDEX format
- A format should be quickly and incrementally updatable with data from OPTIONS changes.
- One suggestion is using the pkg repo format
keywords for plist
- pkg supports all old @keyword values
- Deprecates @exec and @unexec
- Centralized keywords stored in YAML format in /usr/ports/Keywords/keyword.yaml
- Lots of discussion about when the keywords should be expanded
- Adding users is a specific example of something the packager can't actually know how to do for a given system.
- required for sub-packages
- supports -R /root/path
ports as user
- Good idea. Someone should work on this. (Staging directory required)
- SAT solver for "requires" support
- Support arguments to ssh to allow multiple administrators to access a shared repo
- support for command aliases
killing shell scripts
- They should die. Keywords are on the path there.
|
OPCFW_CODE
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Sep 4 10:09:35 2019
@author: DavidFelipe
"""
try:
import os
import numpy as np
import cv2
import yaml
import progressbar
import datetime
except ImportError:
    print(" PLEASE REVIEW THE MODULES THAT THIS SOFTWARE NEEDS - AN IMPORT ERROR OCCURRED")
class Postprocessing:
def __init__(self,image,vector_boxes, draw_flag=False):
"""
Postprocessing:
Collection of tools to perform the last modifications to vision
computer neural network applications
Input:
image - image processed by the neural network
vector_boxes - vector of coordinates of the object detected boxes
"""
self.widgets = [progressbar.Percentage(),
' ', progressbar.Bar(),
' ', progressbar.ETA()]
boxes_format = np.array([0,0,0,0,0])
self.vector_points = np.array([0,0])
vector_boxes = np.delete(vector_boxes, [0], axis=0)
self.config_file()
point_image = image
box_image = image
        if len(vector_boxes) > 2:
print(" &&& Postprocessing change results format")
bar = progressbar.ProgressBar(widgets=self.widgets, maxval=len(vector_boxes)-1)
bar.start()
## Pass (x,y,w,h) to (x1,y1,x2,y2)
for item, box in enumerate(vector_boxes):
xmin = box[1]
ymin = box[0]
xmax = box[1] + box[3]
ymax = box[0] + box[2]
box_f = np.array([xmin,ymin,xmax,ymax,box[4]])
boxes_format = np.vstack((boxes_format, box_f))
bar.update(item)
bar.update(len(vector_boxes)-1)
score = boxes_format[:,4]
        boxes = (boxes_format[:, :4]).astype(int)  # np.int is deprecated; use the builtin int
self.bboxes_after_nms, self.scores_nms = self.NMS_process(boxes,score, self.iou_threshold)
#self.image_drawed = self.Draw_results(box_image, self.bboxes_after_nms)
#self.Count_points(point_image, self.bboxes_after_nms)
else:
self.image_drawed = point_image
self.counter = 0
self.container = [image, image]
def Draw_results(self, image, boxes, mask=0):
"""
Draw_results:
Function to draw the boxes provided by the neural network
            Also used to create a mask with the center points of the boxes
Input:
image - Original image processed
boxes - vector of coordinates of the boxes found ((Xtopl, Yttopl),(Xbottomr, Ybottomr))
mask - (0,1) determine the return object
Output:
            if mask = 0 - Return image drawn with boxes
if mask = 1 - Return Mask image with white center points
"""
if mask==0:
for bbox in boxes:
top_left = bbox[0],bbox[1]
bottom_right = bbox[2],bbox[3]
cv2.rectangle(image,top_left, bottom_right,(255, 0, 0), 2)
return [image]
else:
mask_image = np.zeros_like(image, dtype=np.uint8)
for bbox in boxes:
cx = int(bbox[0] + (bbox[2] - bbox[0])/2)
cy = int(bbox[1] + (bbox[3] - bbox[1])/2)
point = np.array([cx,cy])
self.vector_points = np.vstack((self.vector_points, point))
bottom_right = bbox[2],bbox[3]
cv2.circle(mask_image, (cx, cy), self.radio_mask, (255, 255, 255), -1) #6
cv2.circle(image, (cx, cy), self.radio_ext, (255, 255, 255), 2) #10
cv2.circle(image, (cx, cy), self.radio_im, (0, 0, 255), -1) #4
self.vector_points = np.delete(self.vector_points , [0], axis=0)
return [image, mask_image[:,:,0]]
def Count_points(self,image,boxes):
"""
        Count_points:
            Function to draw the centroids of the boxes and count the objects
            based on these geometric shapes
            Input:
                image - original image
                boxes - vector of boxes already filtered
            Output:
                image_drawed - image with the geometric shapes
                counter - number of objects counted
"""
self.counter = 0
self.container = self.Draw_results(image, boxes, mask=1)
mask = self.container[1]
        try:
            # OpenCV 3.x returns (image, contours, hierarchy)
            _, contours, hierachy = cv2.findContours(mask.astype("uint8"), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        except ValueError:
            # OpenCV 2.x and 4.x return (contours, hierarchy)
            contours, hierachy = cv2.findContours(mask.astype("uint8"), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for (i, contour) in enumerate(contours):
self.counter += 1
    def NMS_process(self, bboxes, pscores, threshold):
        '''
        NON-MAX-SUPPRESSION
        NMS: first sort the bboxes by scores,
            keep the bbox with the highest score as reference,
            iterate through all other bboxes,
            calculate Intersection Over Union (IOU) between the reference bbox and each other bbox,
            and if the IOU is greater than the threshold, discard that bbox and continue.
        Input:
            bboxes(numpy array of tuples) : Bounding Box Proposals in the format (x_min,y_min,x_max,y_max).
            pscores(numpy array of floats) : confidence scores for each bbox in bboxes.
            threshold(float): Overlapping threshold above which proposals will be discarded.
        Output:
            filtered_bboxes(numpy array) : selected bboxes for which IOU is less than threshold.
        '''
        print(" &&& Postprocessing NON-MAX-SUPPRESSION")
#Unstacking Bounding Box Coordinates
bboxes = bboxes.astype('float')
x_min = bboxes[:,0]
y_min = bboxes[:,1]
x_max = bboxes[:,2]
y_max = bboxes[:,3]
#Sorting the pscores in descending order and keeping respective indices.
        sorted_idx = pscores.argsort()[::-1]
#Calculating areas of all bboxes.Adding 1 to the side values to avoid zero area bboxes.
bbox_areas = (x_max-x_min+1)*(y_max-y_min+1)
#list to keep filtered bboxes.
filtered = []
counter = 0
bar = progressbar.ProgressBar(widgets=self.widgets, maxval=len(sorted_idx))
bar.start()
while len(sorted_idx) > 0:
#Keeping highest pscore bbox as reference.
rbbox_i = sorted_idx[0]
#Appending the reference bbox index to filtered list.
filtered.append(rbbox_i)
#Calculating (xmin,ymin,xmax,ymax) coordinates of all bboxes w.r.t to reference bbox
overlap_xmins = np.maximum(x_min[rbbox_i],x_min[sorted_idx[1:]])
overlap_ymins = np.maximum(y_min[rbbox_i],y_min[sorted_idx[1:]])
overlap_xmaxs = np.minimum(x_max[rbbox_i],x_max[sorted_idx[1:]])
overlap_ymaxs = np.minimum(y_max[rbbox_i],y_max[sorted_idx[1:]])
#Calculating overlap bbox widths,heights and there by areas.
overlap_widths = np.maximum(0,(overlap_xmaxs-overlap_xmins+1))
overlap_heights = np.maximum(0,(overlap_ymaxs-overlap_ymins+1))
overlap_areas = overlap_widths*overlap_heights
#Calculating IOUs for all bboxes except reference bbox
ious = overlap_areas/(bbox_areas[rbbox_i]+bbox_areas[sorted_idx[1:]]-overlap_areas)
            #select indices for which IOU is greater than the threshold
delete_idx = np.where(ious > threshold)[0]+1
delete_idx = np.concatenate(([0],delete_idx))
#delete the above indices
sorted_idx = np.delete(sorted_idx,delete_idx)
counter += 1
bar.update(counter)
#Return filtered bboxes
        return bboxes[filtered].astype('int'), pscores[filtered]
def extract_data(self, image, boxes, scores, threshold, name, save_path):
"""
extract_data - Function to extract data from the working process
Input :
- image : source image
- boxes : numpy array with shape [m,x1,y1,x2,y2]
- scores : Predicted scores for each box
- threshold : threshold filter float
            - save_path : folder to save the data
Output :
- Save data in folder given
ALWAYS THE FIRST CLASS IS THE MAIN OBJECT TO DETECT
"""
date = datetime.datetime.now()
timestamp = str(date.day)+str(date.hour)+str(date.minute)
counter = 0
for idx, box in enumerate(boxes):
if scores[idx] >= threshold:
image_substracted = image[box[1]:box[3], box[0]:box[2], :]
name_image = "D" + str(idx) + "_" + name + timestamp + ".jpg"
path = os.path.join(save_path, name_image)
                try:
                    cv2.imwrite(path, image_substracted)
                    counter += 1
                except Exception:
                    print("Error saving image with shape: " + str(image_substracted.shape))
print("Saved images " + str(counter))
def config_file(self, path="./"):
"""
config_file:
Reserved function for parameters configuration using yaml files
Input:
path - location of the configuration yaml file
"""
with open(os.path.join(path, "config.yml"), 'r') as ymlfile:
config_file = yaml.load(ymlfile, Loader=yaml.FullLoader)
postprocessing = config_file['Postprocessing']
self.radio_mask = postprocessing["radio_mask"]
self.radio_im = postprocessing["radio_image_fill"]
self.radio_ext = postprocessing["radio_image_ext"]
self.iou_threshold = postprocessing["iou_threshold"]
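A minimal, hypothetical usage sketch of the class above (it assumes a config.yml with a Postprocessing section exists in the working directory, and that detections arrive as (y, x, h, w, score) rows with a leading placeholder row, which __init__ deletes):
import numpy as np
image = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in frame
detections = np.array([
    [0, 0, 0, 0, 0],           # placeholder row, removed inside __init__
    [10, 10, 50, 50, 0.90],    # (y, x, h, w, score)
    [12, 12, 50, 50, 0.80],    # heavy overlap with the box above
    [200, 200, 40, 40, 0.70],
])
post = Postprocessing(image, detections)
print(post.bboxes_after_nms)   # the overlapping duplicate is suppressed by NMS
print(post.scores_nms)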
|
STACK_EDU
|
Should the following queries not produce the same result? (MySQL)
Assuming the following MySQL table structure, why do the two following queries produce different results?
games(id) (464 records)
members(id) (1 record, id=351)
gameslists(id,memberid,gameid) -- (2 records, (1,351,1) and (2,351,2))
This produces null
SELECT games.*
FROM games
INNER JOIN gameslists ON gameslists.gameid = games.id
WHERE gameslists.memberid <> 351 AND gameslists.id is NULL
This produces 462 records, which is what I expect.
SELECT games.*
FROM games
LEFT JOIN gameslists ON gameslists.gameid = games.id AND gameslists.memberid <> 351
WHERE gameslists.id is NULL
The expression (gameslists.id is NULL) can never be true in the INNER JOIN query (assuming id is the primary key). That's why the first result set contains no rows.
On the other hand, whenever the ON clause of the LEFT JOIN does not match, the gameslists fields will be NULL for that particular row. Therefore your second query will return all the games that do not appear in gameslists, unless memberid is 351.
Not an error, but also nulls: LEFT JOIN gameslists ON gameslists.gameid = games.id
WHERE gameslists.id is NULL AND gameslists.memberid <> 352
@Mel: What is the intended result? You want all games that do not appear in gameslists? And what about memberid?
The intended result is achieved in my second query (that returns 462 results; I want to select all games that do not exist in gameslists for a given memberid). I would like to know if I can achieve the same thing by moving the "AND gameslists.memberid = ???" clause from the JOIN part to the WHERE part of the query. Why? The framework I'm using will not allow me to have two arguments in the JOIN clause (it will join gameslists.gameid = games.id, but I can't have the "AND gameslists.memberid" clause at the same time).
@Mel: What I can't understand is the memberid field. Because you're selecting all games that do not exist in gameslist, and memberid is a field in gameslist. Therefore by definition, all games that do not have an entry in gameslist will not have a memberid field.
The reason for memberid is this: Say a member has added two games. I want to eliminate those two games for him only, hence the condition needs to be double not (gameslists.gameid = games.id AND gameslists.memberid <> session.member.id)... if I don't have memberid as a filter, and all the games are added to lists, no one will be able to see anything any more!
INNER JOIN returns non NULL matches, whereas LEFT JOIN can be NULL on one side. I think this is the clue.
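Summarizing the pattern under discussion as a sketch: to get "all games that a given member has not added", the member filter belongs in the ON clause of the LEFT JOIN (note the equality; if the filter moved to the WHERE clause, the NULL-extended rows would fail the comparison against a NULL memberid and be discarded):
SELECT g.*
FROM games AS g
LEFT JOIN gameslists AS gl
       ON gl.gameid = g.id AND gl.memberid = 351
WHERE gl.id IS NULL;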
|
STACK_EXCHANGE
|
Cannot have two HTML5 backends at the same time
I'm trying to use my own component that has react-dnd as a dependency, and meanwhile, I also use ory-editor as the rich-text editor in my app. In this case, there are two HTML5 backends at the same time and I got the problem. Maybe I need to define DragDropContext on common parent of my own component and ory-editor, but how can I get the ory-editor without DragDropContext? What would be the best way to fix this?
The best way to fix this would be to pass the backend via the config array during instantiation, and if nothing is passed, use the htmlbackend as default. This is currently not supported and would require a PR. The soonest I can look at it is in 3-4 weeks unfortunately. :(
+1 for this also
how can i pass the backend via config? can you please give me an example? thanks
Pass it to new Editor: https://github.com/ory/editor/blob/master/packages/core/src/index.js#L64
Hi @arekkas, I am using Ory and React-Source-Tree in my app, and if possible, I would love a little bit more context on your solution and how to apply it, because I'm still getting the multiple backend error.
gcorne's solution makes sense to me (https://github.com/react-dnd/react-dnd/issues/186) because he's creating one instance of the dragAndDrop Context export default DragDropContext(HTML5Backend) and uses that HOC to decorate his components and pass the context.
When I look at Ory's source code for the editor, the backend is passed to the constructor like so: this.dragDropContext = dragDropContext(dragDropBackend || dndBackend). So this is the implementation I understand to be possible (and please let me know if I'm making a mistake).
const editor = new Editor({
  plugins: editorPlugin,
  editables: [createEmptyState()],
  dragDropBackend: HTML5Backend,
})
However, does that solve the issue? It seems to me that no matter what, we are still creating a new context for dnd. Or perhaps there is something about the backend that I pass that I'm misunderstanding (because in this example, I'm strictly repeating dndBackend, the fallback dragDropBackend).
Thanks for your help!
When you import the backend, it creates a new instance which is probably causing your error. Make sure you import HTML5Backend only once.
Internally, ory-editor is importing the backend. Isn't it causing the "Cannot have two HTML5 backends at the same time" error?
Is there any other way so that we can implement reactdnd for our other feature, along with ory-editor?
I used editor with react-sortable-tree. Any suggestions?
I have the same problem. How do I resolve it?
When you import the backend, it creates a new instance which is probably causing your error. Make sure you import HTML5Backend only once.
Actually, it exports a function that creates a new backend. Maybe you're referring to an older version? Also, the version used in ory editor looks quite old; maybe an upgrade would help?
|
GITHUB_ARCHIVE
|
Specify CPU frequency as a kernel CMD_LINE parameter of Linux on boot?
How can I check whether I use CPU or GPU in TensorFlow?
What is a difference between CPU threads and program threads
What kind of address instruction does the x86 cpu have?
Getting the wrong number of CPUs from cpu_count with os and multiprocessing modules
accurate system cpu usage in windows
Does /proc/cpuinfo give an updated CPU frequency value each time you access the file?
MySQL Workbench do not use all the cpu cores
Kubernetes CPU multithreading
Query System Information Using Python in Windows 10
Is there a way to reduce the high cpu share when I update the text of many label controls?
CPU caching understanding
monitor CPU and battery
CPU observed speed (in Ghz)
Using Java (JVM) application to run on dual CPU socket (Win10)
Is there any way to use android emulator for AMD CPU?
Why Tensorflow Op kernel using both GPU & CPU
Atom IDE overheating on MacBook pro
Is there a limit on the number of hugepage entries that can be stored in the TLB
Powershell or other script to get Windows Server CPU usage by user
runtime._ExternalCode Cpu usage is too high, Up to 80%
Best CPU for image processing
CPU architecture: Core vs. ALU
Why schedule threads between cpu cores expensive?
what cpu instructions (for any architecture) do not do math
Do multiple processes may run in parallel on a multi thread single core CPU?
Aparapi: not invoking GPU
Which is better for game development? i7 8700K or i7 9700K
CPU min threshold set in windows, windows deleted settings are still there
PyTorch out of GPU memory just after initialization
WHY Single CORE is taking over all CPU load in LINUX
Why is clock cycle time the inverse of clock rate?
Impact of multi-word instruction
If MIPS architecture had 64 registers instead of 34 registers
For computational problems that do no I/O and access no shared data, Ncpu + 1 threads yields optimal throughput?
IIS website slow in response when calculation in progress
RAM CPU and visitors of hosting is this fit for movie website?
What are registers in a computer? Is it same as a SRAM? What is the actual use of register?
Why would /proc/stat show different CPU utilization than top?
keras not using gpu but tensorflow is
Appropriate power meter for CPU and GPU power consumption?
my cpu is overheating after reinstalling it
Smaller transistors mean lack of credibility?
Instruction cache vs Data cache
Computing power to run a python script
Read file block by block with timer
What happens if a word truncates an int?
user top&htop commands to check the usage of CPU by a container
Does the compiler actually produce Machine Code?
Outsourcing CPU workload to a remote machine?
How am I getting 0 clock ticks when measuring latency using rdtscp instruction?
Cache Line Format/Layout
EC2 Memory Issue RStudio
What command should I use in centOS to reduce the number of available cores in my system?
WordPress: External call to wp-load.php causes CPU and memory spikes
What all information one need to know to understand a micro-controller architecture?
CPU scheduling in os
Hadoop processing time in datanode
Hardware for Deep Learning
Windows Application Performance: CPU utilization
NODE.JS App gets stuck getting 2.5 million of records through and api
Nginx + php-fpm high cpu usage
The actual differences between CPU's physical memory and RAM
Exception raised in pipeline CPU stage
C++ routine sent, process activated, on the way to lose my CPU, please help, May Day
How do I make a given code "core friendly?"
Why does Python not use 100% of the processor?
Ubuntu CPU utilization 100% even when RAM is at 25% and cache is cleared
Why does HashMap resize() again, when specifying a precise capacity?
Maximizing CPU frequency (ubuntu, intel xeon)
Variable size in 32 bit or 64 bit processor
How to interpret values from zsim .out file?
setting cpu affinity for all services to use one core in batch file (windows 10 home)
What's the purpose of clocked registers in pipelined processor
A better way to make pytorch code agnostic to running on a CPU or GPU?
Is the GPU faster at iterative code than the CPU?
How cpu is aware of hardware interrupt?
cpu usage is high under battery power ubuntu 18.04
cortexa7 CPU(s) took too long time to execute a loop compared to cortexa15 CPU(s)
linux -- relinquishing your CPU time slice programmatically in user mode
Why cpu segmentation works the way it does?(Real mode)
What are the drawbacks of Intel's U-Series Processors?
What is the average processor of all the computers online at any given time?
Android CPU frequency not at minimum when powersave governor activated?
Selecting OpenCL CPU platform in Compute.scala
Is it faster to run a vector dot product using int32_t instead of a double?
Tensorflow Cpu optimization with Docker for object detection?
HOW TO FORCEFULLY DISABLE intel_pstate? intel_pstate is enabled on reboot even with intel_pstate=disable option in grub
Keras - How to run loaded model on CPU
Fallback input for ffmpeg
Scaling Apache/PHP is as simple as maximizing the number of CPU cores?
How to know if the CPU use linear address or physical address to index the Ln data cache?
Why core count does not scale with core size?
How to determine device ID for AMD Ryzen 5 2600 CPU
Are there wait states with in and out instructions?
how to write assembly language to make DMA works
How do I calculate CPU time? The units on my answer are off
Shared read resources across threads
Wordpress use 100% CPU in public_html
ImportError Tensorflow on OS X
|
OPCFW_CODE
|
Every day I wake up at 3am. If it’s a weekend, however, I go to sleep at 3am. I brush my teeth, go for a run, browse Reddit and listen to Eminem. Some elderly people from nearby residences often join me for their morning walk. To ensure that I keep myself socially distant, I minutely change my trajectory as long as I’m within a few feet of them.
What you read was a simple paragraph. Yet, it contained the semantics of a basic programming language. If you stare at it hard enough, you’ll discover simple recursions, conditionals and even data types such as arrays of strings. No wonder why High-Level Programming Languages attempt to resemble English. For English itself, kinda imperfectly, can be spoken very declaratively. And in case you were wondering, nope, the first paragraph isn’t an accurate description of my life.
One might say that spoken languages simply consist of strings. That is true, given that we rarely employ methods to completely segregate parts of sentences, but even if you observe this very sentence, you’ll appreciate clauses which bring order into relatively chaotic statements.
You won’t find a literal dictionary in English (and I use English as a representative; it might as well be any spoken language); however, arrays and variables are quite abundant. Every time someone mentions a list of things all separated by commas or ‘ors’ and ‘ands’, they are, in a very loose way, building an array. A lot of the words in English are used as variables which momentarily refer to the part of the information we are interested in. ‘This’ and ‘that’ and the pronouns all turn out to be variables in that sense.
Logical Operations and Conditionals
Is this sentence true? This sentence is true. Well, I don’t know for sure, but that initial ‘is’ is doing a lot of work here. Single-handedly by jumping from the first (programmers read zeroth) place to the third (programmers read second), it turned the sentence from an interrogative into a declaration.
Similarly, in our speech, we are time and again using conditionals with such operators. From “if it rains” to “if I succeed”, these operations may not always be logical but surely approximate an enquiry into nature followed by conditional statements. With ‘ifs’ and ‘thens’ we have developed a pretty well-equipped arsenal of conditionals and operators that we deploy without a second thought.
You can say, I was bla years old when the millennium turned and that day I took a bath, next day I took a bath, next I took a bath, next I took a bath, next I took a bath…, or you might just say I have taken a bath daily since the turn of the millennium. Recursion can be easily encoded in a sentence with just a single word like ‘every’ or ‘none’. With multi-word combos like “keeps on” we get an even better taste of recursive properties.
‘Function’ is a verb, literally, but all verbs are functions, figuratively. They might be complex functions which are Russian dolls of subsequent component functions, or they might be very straightforward simple ones. There might be operators which take in a noun as input and modify the meaning.
When you say ‘I will perform’, no one knows what you will perform. To perform, by itself has very little meaning (in contrast with say, to sleep or even to say). If you, however, say ‘I will perform tympanoplasty’, people would immediately appreciate that you are intending to perform a combination of ‘Myringoplasty and Ossiculoplasty’, thus decomposing the complex verb into relatively simpler ones!
What makes natural language natural?
There might be a better answer to this question, but the main reason why natural language triumphs over the ones we code in is that the computers that parse it can make incredible connections through intuition and instinct. Like, you might meet someone and bang on start to talk about how the Spurs defeated the Red Devils. You wouldn't even have to highlight the game nor the proper names of the teams, and people might already have made the connections. No time wasted in
import epl from football, you get direct action!
A better example would be, how you can talk about your job, pause for a second, make a remark on the weather, sneeze hard, say bless me, resume your conversation and the person listening to you will still appreciate what’s being told.
Another more philosophical difference is that natural languages are spoken from person to person. So, when you are listening to someone or reading someone, your brain is personifying the speaker or author with all sorts of extra information it can manage from the surroundings. When you are coding, the compiler or animation engine doesn’t care about you nor about itself. It is just an inanimate piece of electricity powered electronic state encoded on doped silicon.
The last point is that natural languages can have double meanings and such situations are often intended. This, however, is a big no-no for programming. You can have entire paragraphs with underlying sarcastic undertones or euphemisms and if you dare, a double entendre sprinkled somewhere in between and an observant listener will appreciate it without any prior intimation.
Chicken or the Egg
You might argue that high-level programming languages are built to resemble natural languages; therefore, it is very circular to simply try to describe natural language in terms of the former. This is a classic chicken-and-egg problem, and for this one in particular, we are convinced the chicken came first.
So, is this discussion futile?
Maybe so! After all, speaking to a human in Python is just as useless as typing out plain Victorian English to a Python compiler. Often when we are knee-deep in debugging (if we are debuggers) or reading literature (if we are bookworms), we forget how orderly our own speech and thought processes are. Every time we use words like ‘every time’, ‘than’, ‘then’, ‘if’, the conjunctions, the prepositions and in fact any part of speech, we are logically adding some new information to our conversation.
Appreciating these logical constructs in our qualitative languages may not change civilization overnight, but it will help us gradually on two fronts. One, it will make you a more observant speaker who possesses the skill to choose the correct word at the correct moment. On the other hand, understanding spoken language in terms of coding languages might open up new frontiers in natural language processing.
After all, who wouldn’t just love to have a long and thoughtful conversation with Siri and Alexa instead of being replied with “I searched the web and found these results…” to every spiritual question you ask ’em.
|
OPCFW_CODE
|
Important notice regarding the future of PyFITS
All of the functionality of PyFITS is now available in Astropy as the
astropy.io.fits package, which is now publicly available.
Although we will continue to release PyFITS separately in the short term, including any critical bug fixes, we will eventually stop releasing new versions of PyFITS as a stand-alone product. The exact timing of when we will discontinue new
PyFITS releases is not yet settled, but users should not expect PyFITS releases to extend much past early 2014. Users of PyFITS should plan to make suitable changes to support the transition to Astropy on such a timescale. For the vast majority
of users this transition is mainly a matter of changing the import statements in their code--all APIs are otherwise identical to PyFITS. STScI will continue to provide support for questions related to PyFITS and to the new
astropy.io.fits package in Astropy.
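For illustration, a minimal before/after sketch of that import change (the filename is hypothetical):
# Before, with stand-alone PyFITS:
import pyfits
hdulist = pyfits.open('example.fits')
# After, with Astropy; aliasing keeps the rest of the code unchanged:
from astropy.io import fits as pyfits
hdulist = pyfits.open('example.fits')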
PyFITS provides an interface to FITS formatted files in the Python scripting language and PyRAF, the Python-based interface to IRAF. It is useful both for interactive data analysis and for writing analysis scripts in Python using FITS files as either input or output. PyFITS is a development project of the Science Software Branch at the Space Telescope Science Institute.
PyFITS and all necessary modules are included with the stsci_python distribution and associated updates to it (though what is included there may not be the very latest version). PyFITS does not require PyRAF however. It may be used independently so long as numpy is installed.
The manual provides a tutorial on how to use PyFITS with FITS images and FITS tables, along with an extensive description of all the methods (currently) available for working with FITS files. This manual, however, is only a DRAFT document and is very much a work-in-progress. An HTML version of the manual, including API documentation, is also available.
The current version of PyFITS is v3.3.0 (July 17 2014).
PyFITS 2.3.1 and later is distributed under a BSD license. Unfortunately, PyFITS version 2.0 (January 30 2009) through version 2.3 (May 11 2010) contained some code in the Compressed Image HDU extension module that was covered under a GNU General Public License. Prior to PyFITS version 2.0 and beginning again with PyFITS version 2.3.1 (June 3 2010) PyFITS is covered under a BSD license.
|
OPCFW_CODE
|
#include "gpu.h"
#include "dma.hpp"
#include "io.h"
#include <psxgpu.h>
DISPENV disp;
DRAWENV draw;
static void setResolution(int w, int h) {
SetDefDispEnv(&disp, 0, 0, w, h);
SetDefDrawEnv(&draw, 0, 0, 1024, 512);
if (h == 480) {
disp.isinter = true; // Interlace mode
draw.dfe = true; // Drawing to display area (odd and even lines)
}
PutDispEnv(&disp);
PutDrawEnv(&draw);
}
void initVideo(int width, int height)
{
ResetGraph(0);
setResolution(width, height);
SetDispMask(1);
}
void fillRect(int x, int y, int w, int h, int r, int g, int b) {
FILL f;
setFill(&f);
setRGB0(&f, r, g, b);
setXY0(&f, x, y);
setWH(&f, w, h);
DrawPrim(&f);
}
void clearScreenColor(uint8_t r, uint8_t g, uint8_t b) {
fillRect(0, 0, 512, 256, r, g, b);
fillRect(512, 0, 512, 256, r, g, b);
fillRect(0, 256, 512, 256, r, g, b);
fillRect(512, 256, 0x3f1, 256, r, g, b);
}
void clearScreen() {
clearScreenColor(0, 0, 0);
}
void setMaskBitSetting(bool setBit, bool checkBit) {
DR_MASK mask;
setDrawMask(&mask, setBit, checkBit);
DrawPrim(&mask);
}
void gpuNop() {
writeGP0(0, 0);
}
void writeGP0(uint8_t cmd, uint32_t value) {
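    // Reuse a DR_TPAGE primitive as a generic single-word GP0 packet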
DR_TPAGE p;
p.code[0] = value;
setlen( &p, 1 );
setcode( &p, cmd );
DrawPrim(&p);
}
void writeGP1(uint8_t cmd, uint32_t data) {
    volatile uint32_t *GP1 = (volatile uint32_t*)0x1f801814;  // GP1 command register (memory-mapped I/O)
    (*GP1) = (cmd << 24) | (data & 0xffffff);
}
uint32_t readGPU() {
    volatile uint32_t* GPUREAD = (volatile uint32_t*)0x1f801810;  // GPUREAD register (memory-mapped I/O)
return *GPUREAD;
}
void vramPut(int x, int y, uint16_t pixel) {
CPU2VRAM buf;
setcode(&buf, 0xA0); // CPU -> VRAM
setlen(&buf, 4);
buf.x0 = x; // VRAM position
buf.y0 = y;
buf.w = 1; // Transfer size - 1x1
buf.h = 1;
buf.data = pixel; // pixel (lower 16bit)
DrawPrim(&buf);
}
uint32_t vramGet(int x, int y) {
VRAM2CPU buf;
setcode(&buf, 0xC0); // VRAM -> CPU
setlen(&buf, 3);
buf.x0 = x; // VRAM position
buf.y0 = y;
buf.w = 1; // Transfer size - 1x1
buf.h = 1;
DrawPrim(&buf);
writeGP1(4, 3); // DMA Direction - VRAM -> CPU
// Wait for VRAM to CPU ready
while ((ReadGPUstat() & (1<<27)) == 0);
return readGPU();
}
void vramWrite(int x, int y, int w, int h, uint16_t* ptr) {
CPU2VRAM buf;
setcode(&buf, 0xA0); // CPU -> VRAM
setlen(&buf, 3);
buf.x0 = x;
buf.y0 = y;
buf.w = w;
buf.h = h;
DrawPrim(&buf);
volatile uint32_t *GP0 = (uint32_t*)0x1f801810;
    for (int row = 0; row < h; row++) {
        for (int col = 0; col < w; col += 2) {
            // Pack two 16-bit pixels into one 32-bit GP0 write
            uint32_t data = 0;
            data |= *ptr++;
            data |= (*ptr++) << 16;
            *GP0 = data;
        }
}
}
void vramWriteDMA(int x, int y, int w, int h, uint16_t* ptr) {
CPU2VRAM buf;
setcode(&buf, 0xA0); // CPU -> VRAM
setlen(&buf, 3);
buf.x0 = x;
buf.y0 = y;
buf.w = w;
buf.h = h;
DrawPrim(&buf);
writeGP1(4, 2); // DMA Direction - CPU -> VRAM
using namespace DMA;
DMA::masterEnable(Channel::GPU, true);
DMA::waitForChannel(Channel::GPU);
write32(baseAddr(Channel::GPU), MADDR((uint32_t)ptr)._reg);
write32(blockAddr(Channel::GPU), BCR::mode1(0x10, w * h / 0x10 / 2)._reg);
write32(controlAddr(Channel::GPU), CHCR::VRAMwrite()._reg);
DMA::waitForChannel(Channel::GPU);
}
void vramReadDMA(int x, int y, int w, int h, uint16_t* ptr) {
VRAM2CPU buf;
setcode(&buf, 0xC0); // VRAM -> CPU
setlen(&buf, 3);
buf.x0 = x;
buf.y0 = y;
buf.w = w;
buf.h = h;
DrawPrim(&buf);
writeGP1(4, 3); // DMA Direction - VRAM -> CPU
// Wait for VRAM to CPU ready
while ((ReadGPUstat() & (1<<27)) == 0);
using namespace DMA;
DMA::masterEnable(Channel::GPU, true);
DMA::waitForChannel(Channel::GPU);
write32(baseAddr(Channel::GPU), MADDR((uint32_t)ptr)._reg);
write32(blockAddr(Channel::GPU), BCR::mode1(0x10, w * h / 0x10 / 2)._reg);
write32(controlAddr(Channel::GPU), CHCR::VRAMread()._reg);
DMA::waitForChannel(Channel::GPU);
}
void vramToVramCopy(int srcX, int srcY, int dstX, int dstY, int w, int h)
{
VRAM2VRAM buf;
setcode(&buf, 0x80); // VRAM -> VRAM
setlen(&buf, 4);
buf.x0 = srcX;
buf.y0 = srcY;
buf.x1 = dstX;
buf.y1 = dstY;
buf.w = w;
buf.h = h;
DrawPrim(&buf);
}
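For orientation, a minimal, hypothetical usage sketch of the routines above (a real program would also wait for VSync and manage double buffering, which this file does not cover):
int main() {
    initVideo(320, 240);
    clearScreenColor(0, 0, 64);             // dark blue background
    fillRect(100, 80, 120, 60, 255, 0, 0);  // red rectangle
    vramPut(10, 10, 0x7fff);                // single white pixel (15-bit BGR)
    for (;;) {}                             // spin; VSync handling omitted
}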
|
STACK_EDU
|
Dude, this would be genius, if it wasn't for the fact that...
Nobody plays 1.7.3. Still, i LOVE this version, and if i still used it i would use this mod, and i appreciate someone would make a mod, in 2015, dedicated to my favorite version of Minecraft. (Rant-ish thing incoming) I just think the game has gone downhill too much since beta 1.7. It's become more of an RPG, too... diluted, for its simple outset. This was meant to be a game about building whatever. Now it's a game focused MUCH more heavily on survival and killing, mining, and exploring. Yes, if you wanted to do that in 1.7, you could, but it was still much simpler in just that set than what it is today. I don't know why they added all that stuff, honestly; just a creative mode would have been fine, and all the content from today BESIDES the new combat (crits and whatnot, not the 1.9 update), experience, hunger, and other stuff. I still love 1.7.3 and wish it was still a largely supported Minecraft version, like a version called Minecraft Classic - and i don't mean the CLASSIC versions from 2009/2010. I mean basically a revamped version of 1.7.3 with some of the new stuff, like blocks from beta 1.8 until now, no strongholds, no XP, no hunger. Imagine it. Bring back Minecraft 1.7.3, but better. That'd be a great idea.
tl;dr Everything newer than Beta 1.7 sucks and doesn't suck.
Sorry 'bout that rant. I just... miss the golden days of Minecraft, y'know? Mods, please don't warn me for going off track ;_; My nostalgia is to blame. But seriously, please don't. All that was technically related to this mod, so... Yeah.
I am not sure about adding the 1.8 and further blocks because some players might think the mod has become "tainted" with 1.8 content. I don't know what people want more, 1.7 with new blocks or a completely different 1.7 version
That is true, Tom. You do make a good point. It depends on what you want, what seems that it would FIT in 1.7.3 that could've gone in. Stone bricks, i think, (Atleast i think) would fit fine. Along with some other blocks, like even the modern sponges.
I added the stone bricks you always wanted! Together with a new variant that looks like small bricks and looks good for prison cells
Download Revision 4 at the Original Post!
There might be some quirks when updating to this one because some internals changed to be future proof. Like redstone block now has the same ID as it does in regular minecraft so its compatible when switching maps
And legacy blocks like classic gold are now ID 200+ID, so Gold Block is 41, its legacy ID is 241; iron is 42, so 242; and so on.
PS. Make sure you use the NEW inventory editor with Revision 4 or you will get the wrong blocks!
Hey man, I've been developing my own mod called Back 2 Beta. It isn't compatible with ModLoader (it doesn't use it) or most mods that edit base classes, because this edits base classes like a real update would. It's currently at Beta 1.7.9 and the Beta 1.8 (that never was) version is being developed. Want to help me? I've added a good bit of features and tweaks to my mod already. It brings back the old cobblestone texture but with a new smooth cobblestone block which uses the new texture; it also has stone bricks but only one variant, crafted with 4 stone brick items.
I don't want to make something that is not backwards compatible with beta 1.7.3. I can change any class I want the way I do it however it always remains compatible with mods created for 1.7.3. It uses a modified version of ModLoader that is backwards compatible with old mods but has some new methods that let you take control over classes but still give priority to 1.7.3 mods.
Currently 1.7.4 can connect to 1.7, 1.7.1, 1.7.2 and 1.7.3 servers.
|
OPCFW_CODE
|
Window-based FIR filter design
FIR Bandpass Filter
Design a 48th-order FIR bandpass filter with passband 0.35π ≤ ω ≤ 0.65π rad/sample. Visualize its magnitude and phase responses.
b = fir1(48,[0.35 0.65]); freqz(b,1,512)
FIR Highpass Filter
Load chirp.mat. The file contains a signal, y, that has most of its power above Fs/4, or half the Nyquist frequency. The sample rate is 8192 Hz.
Design a 34th-order FIR highpass filter to attenuate the components of the signal below
Fs/4. Use a cutoff frequency of 0.48 and a Chebyshev window with 30 dB of ripple.
load chirp t = (0:length(y)-1)/Fs; bhi = fir1(34,0.48,'high',chebwin(35,30)); freqz(bhi,1)
Filter the signal. Display the original and highpass-filtered signals. Use the same y-axis scale for both plots.
outhi = filter(bhi,1,y); subplot(2,1,1) plot(t,y) title('Original Signal') ys = ylim; subplot(2,1,2) plot(t,outhi) title('Highpass Filtered Signal') xlabel('Time (s)') ylim(ys)
Design a lowpass filter with the same specifications. Filter the signal and compare the result to the original. Use the same y-axis scale for both plots.
blo = fir1(34,0.48,chebwin(35,30)); outlo = filter(blo,1,y); subplot(2,1,1) plot(t,y) title('Original Signal') ys = ylim; subplot(2,1,2) plot(t,outlo) title('Lowpass Filtered Signal') xlabel('Time (s)') ylim(ys)
Multiband FIR Filter
Design a 46th-order FIR filter that attenuates normalized frequencies below 0.4π rad/sample and between 0.6π and 0.9π rad/sample. Call it bM. Compute its frequency response.
ord = 46; low = 0.4; bnd = [0.6 0.9]; bM = fir1(ord,[low bnd]); [hbM,f] = freqz(bM,1);
Redesign bM so that it passes the bands it was attenuating and stops the other frequencies. Call the new filter bW. Display the frequency responses of the filters.
bW = fir1(ord,[low bnd],"DC-1"); [hbW,~] = freqz(bW,1); plot(f/pi,mag2db(abs(hbM)),f/pi,mag2db(abs(hbW))) legend("bM","bW",Location="best") ylim([-75 5]) grid
Redesign bM using a Hann window. (The "DC-0" argument is optional.) Compare the magnitude responses of the Hamming and Hann designs.
hM = fir1(ord,[low bnd],'DC-0',hann(ord+1)); hhM = freqz(hM,1); plot(f/pi,mag2db(abs(hbM)),f/pi,mag2db(abs(hhM))) legend("Hamming","Hann",Location="northwest") ylim([-75 5]) grid
Redesign bW using a Tukey window. Compare the magnitude responses of the Hamming and Tukey designs.
tW = fir1(ord,[low bnd],'DC-1',tukeywin(ord+1)); htW = freqz(tW,1); plot(f/pi,mag2db(abs(hbW)),f/pi,mag2db(abs(htW))) legend("Hamming","Tukey",Location="best") ylim([-75 5]) grid
n — Filter order
Filter order, specified as an integer scalar.
For highpass and bandstop configurations, fir1 uses an even filter order. The order must be even because odd-order symmetric FIR filters must have zero gain at the Nyquist frequency. If you specify an odd n for a highpass or bandstop filter, then fir1 increases n by 1.
Wn — Frequency constraints
scalar | two-element vector | multi-element vector
Frequency constraints, specified as a scalar, a two-element vector, or a multi-element vector. All elements of Wn must be strictly greater than 0 and strictly smaller than 1, where 1 corresponds to the Nyquist frequency: 0 < Wn < 1. The Nyquist frequency is half the sample rate, or π rad/sample.
If Wn is a scalar, then fir1 designs a lowpass or highpass filter with cutoff frequency Wn. The cutoff frequency is the frequency at which the normalized gain of the filter is -6 dB.
If Wn is the two-element vector [w1 w2], where w1 < w2, then fir1 designs a bandpass or bandstop filter with lower cutoff frequency w1 and higher cutoff frequency w2.
If Wn is the multi-element vector [w1 w2 ... wn], where w1 < w2 < … < wn, then fir1 returns an nth-order multiband filter with bands 0 < ω < w1, w1 < ω < w2, …, wn < ω < 1.
ftype — Filter type
Filter type, specified as one of the following:
'low' specifies a lowpass filter with cutoff frequency Wn. 'low' is the default for scalar Wn.
'high' specifies a highpass filter with cutoff frequency Wn.
'bandpass' specifies a bandpass filter if Wn is a two-element vector. 'bandpass' is the default when Wn has two elements.
'stop' specifies a bandstop filter if Wn is a two-element vector.
'DC-0' specifies that the first band of a multiband filter is a stopband. 'DC-0' is the default when Wn has more than two elements.
'DC-1' specifies that the first band of a multiband filter is a passband.
window — Window
fir1 does not automatically increase the length of window if you attempt to design a highpass or bandstop filter of odd order.
kaiser(n+1,0.5) specifies a Kaiser window with shape parameter 0.5 to use with a filter of order n.
hamming(n+1) is equivalent to leaving the window unspecified.
scaleopt — Normalization option
'scale' (default) | 'noscale'
Normalization option, specified as either 'scale' or 'noscale'.
'scale' normalizes the coefficients so that the magnitude response of the filter at the center of the passband is 1 (0 dB).
'noscale' does not normalize the coefficients.
fir1 uses a least-squares approximation to compute the filter coefficients and then smooths the impulse response with window.
Digital Signal Processing Committee of the IEEE Acoustics, Speech, and Signal Processing Society, eds. Programs for Digital Signal Processing. New York: IEEE Press, 1979, Algorithm 5.2.
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Introduced before R2006a
|
OPCFW_CODE
|
Language server will occasionally hog CPU
On a few occasions, the language server starts to consume a lot of CPU. I usually only find out what's happening when my cooling fan spins up. I don't know how to trigger it on demand, but it might be related to restarting VSCode after the language server crashes with multiple editors open.
roc nightly pre-release, built from commit 4569770c82c on Fri Dec 29 09:12:45 UTC 2023
Apple M1
One possibility is lots of analysis tasks queued
The upcoming version of the language server in #6134 tries to "debounce" as you type so that it doesn't perform a document analysis on every change; that change may help.
My guess would be that if you do a lot of typing you can get backlogged on analysis and then it just sits there churning for a bit.
Do you find it stops using lots of CPU time if you just leave it for a bit?
Can you get hover docs when it is in this "high cpu" state?
Thanks for taking the time and I wish I had the technical acumen to help you diagnose what is actually happening. Each process just sits at 99% CPU even after I exit VSCode. But I never suspected the language server because it seemed to run fine throughout and was still responsive (I assumed it was Safari as I usually have the browser open when I code).
Hmm, It's also interesting that you have multiple processes... I don't think there should be more than one.
I'm wondering if we are getting stuck in some kind of weird loop and then vscode is abandoning the process when it stops responding and then just starting another one.
A few things I'd like to see:
When you have these stuck processes, do they eventually just disappear?
When you close vscode, does one of the processes close with it?
When you freshly start vscode how many roc_ls processes start? is it just one and the extra ones get added on later?
I'll keep these questions in mind and report back if it happens again (I'll also take a screenshot of the memory tab). Note that it's only happened twice in the past week and a half though I've spent a few hours per day trying to muddle through my Advent of Code backlog.
Under normal operation, I see one process start with each editor window; one closes immediately with each window closing or all close immediately if I just exit VSCode. Though I am just realizing that normally each process uses 10 threads but use 9 threads when they run amok (maybe another clue or just nothing?).
related to #6185?
I've had a similar experience with the roc-ls, and because of this behavior I've stopped using it altogether, which makes writing Roc much more challenging. It'll be an exciting day to see a fix for this 🙏
The popup is new, but the last time around I didn't have Docker running...
I was able to quit VSCode normally but had to kill the language server process. Note the 40 GB memory consumption.
Okay I believe I have figured out what is wrong, It seems like anytime you open an interface file that has a shorthand import of a package, the language server will hang.
eg
interface A
imports [pf.Stdout]
The other case is if you typecheck an app file with a shorthand that is incorrect:
eg
app "a"
imports [notapackage.Stdout]
If you can find instances where it hangs outside of that, I'd be very interested to see an example.
I am working on a fix for the above 2 issues, but it's an annoying thing to solve
@fwgreen The PR to fix this was merged, would you like to check if this is working now?
I'm now getting a more tolerable version of the problem (the CPU hogging might have stopped): The runaway processes seem to halt and possibly start new processes? Note I have three editors open and five language servers running...
After quitting VSCode, I still have two language servers still running...
Okay, now I think this is a different issue. The language server isn't shutting down correctly sometimes, I think I've had this issue too. If you could make a new issue with those screenshots I'll take a look at it as well sometime :)
@fwgreen Could you please mark this as closed?
|
GITHUB_ARCHIVE
|
Am I buying a reptile to impress people?
Of course, owning a snake or lizard can be quite impressive to some people, but the most impressive thing is to keep a healthy and happy reptile. If you are thinking purely about how cool your friends will think you are, then the chances are that it will all end in disappointment.
Is there an adult willing to take responsibility for the animals welfare?
If you are young then you must make sure that you have the backing and support of at least some of the adult members of your household, without it your pet could well suffer.
Is there anyone who strongly objects to sharing their home with a reptile?
Many people are afraid of snakes and lizards. This is quite often due to unfamiliarity, and the fear goes away after learning a bit more about them, but some people also have a deep-seated phobia. If anyone in your household is like this, then a reptile may not be the best animal to introduce to your house!!
What will happen with changing circumstances?
Given the right environment, reptiles can live a long time, much longer than, say, mice or hamsters.
If you are young, then hopefully at some point you may go away to college. Unless you are very lucky, the chances are you will not be able to take your pet with you. Is there someone who will care for it while you are away?
Will I spend the time I need to on the animal?
Although reptiles do not require much of your time, they do require some, and on a regular basis.
You will need to check daily on your animal's health and water, and on the general state of the vivarium.
If you think this may be a chore, then a reptile is definitely not the pet for you.
Can I afford the initial costs?
This is a one-off expense for both the animal and all the equipment needed. There is no point trying to cut costs at this point, as it will inevitably lead to either a bad environment for your pet or further costs for you.
Have I the finances to keep the animal?
Apart from the initial purchase of the vivarium and your reptile, there will be additional costs.
You will have to buy a regular supply of food, equipment such as bulbs will need replacing, and
sometimes there will be bigger expenses, such as replacing a thermostat or veterinary bills.
Am I happy handling my animals food?
Most reptiles in captivity are carnivorous, so will you be happy handling dead mice and rats?
If it is an insect eater, you will probably have to deal with locusts and crickets. Also,
no matter how careful you are, a cricket or two will escape into your household.
Have I the correct environment for my pet?
Not only must you have the space to set up your vivarium, it must also be in the right location.
It must be away from direct sunlight, other pets and also smoke and fumes. You will need to be able
to access the tank easily and it must be in a safe location away from potential damage.
|
OPCFW_CODE
|
Customer Attributes Manager provides you with the power to manage personal and business information of your customers.
OverviewBack to top
Customer Attributes Manager Extension for Magento2 helps you to manage the personal and business information of customers in just a few clicks. It enables you to create customized fields and field types exactly as you intended.
Whether you want more fields on the attributes page or different field types like radio buttons, checkboxes, or text fields, the admin can easily create and manage them without the need for a developer.
It is not just fields and field types; you can even direct the attributes to a specific page, like the Customer Account Page, Checkout Page, or Customer Registration Page.
Customer Attributes Manager empowers you to manage customer accounts in a much better way.
- Create new customer attributes with ease.
- Ability to create new customer attributes with different field types like Text, Text Area, Date, Dropdown, Multi Select, Yes/No.
- Admin can set a default value for the attributes.
- Allow different input validation for the attributes like Alphanumeric, Numeric only, Alpha only, URL, Email and None.
- Ability to allow attributes to show on Manage Customer Grid in Magento Backend.
- Ability to add the attributes to the filter options in Manage Customer Grid in Magento Backend.
- Ability to set the attributes available on search results in Manage Customer Grid in Magento Backend.
- Admin can set the sort order while creating the attributes.
How to create new Customer Attribute?
To enable the extension, go to STORES->Configuration->DCKAP->Customer Attributes and set ‘Enabled’ to ‘Yes’.
To create new customer attributes, go to CUSTOMER -> Manage attributes
Clicking the Manage Attributes option will take you to the ‘Manage Customer Attributes’ page.
To create a new attribute, click the ‘Add New Attribute’ button in the right corner. Enter the necessary information on the page to create the new attribute. Once you have furnished the required information, click the ‘Save Attribute’ button.
You may select the page to which the specific customer attribute has to be applied by selecting the ‘Form to use in’ option. Below are the pages in which this extension can be used.
- Customer Registration
- Customer Account Edit Form
- Admin Checkout
Manage Customer Attributes:
All the saved attributes will be displayed in the grid. It is possible to edit specific fields for each attribute. There is also an option to delete user-defined attributes only.
Note : Store owners can manage only user defined attributes. System attributes can't be modified.
Check our DEMO here
User Name : customer
Password : Cust0mer@3892
Release NotesBack to top
- Compatible with Open Source (CE) : 2.0 2.1
- Stability: Stable Build
Customer Attributes Manager - initial stable version
|
OPCFW_CODE
|
M: US-East AWS Connectivity Issues - fjordan
http://status.aws.amazon.com/
R: rbranson
This appears to be connectivity issues entirely to/from the Internet or other
EC2 regions from a single availability zone in us-east-1. The intra-AZ
networks within us-east-1 have remained available during the event. One of the
AZs we use was affected, but no external traffic flows to it. I noticed this
because an auto-scale group was trying to bring up instances inside of the
affected zone (our us-east-1a) and was unable to contact a server outside of
AWS.
R: cperciva
I'm definitely seeing issues in multiple AZs. It seems to be partly firewall-
related, however: I've seen cases where it's hard to get an initial SYN
through, but once a TCP connection is established it stays established.
R: fjordan
This, in addition to the increase in traffic we detected directly before,
smells of DOS. Also, it is Friday the 13th.
R: tomweingarten
Did anyone else notice a huge spike in incoming network traffic on their EC2
instance immediately before the outage? Roughly 9:55AM EST.
R: justinsb
Did it look like a ddos attack, or do you think something went wrong where you
were getting traffic meant for other EC2 nodes?
I'm not quite sure how you would tell the difference of course...
R: tomweingarten
We didn't get enough data to be able to determine that, but I'd be very
curious to hear if someone else did.
R: rschmitty
Does anyone know why in the world they display a green checkmark with a near
invisible little 'i' for this?
R: iota
There are 4 statuses.
Green checkmark (status 0)
Green checkmark with "info" badge (status 1)
Yellow triangle (status 2)
Red "do not enter" rectangle (status 3)
I suspect that status 0 indicates that they are investigating a problem with
the server, and it switches to status 1 once the problem has been confirmed.
This is also a good example of poor icon design...they aren't self-
explanatory, and so they should not be used.
R: cperciva
What happened to status 2?
R: iota
Good catch. Fixed!
R: jolan
Amazon is continuing the trend of announcing outages 30 minutes after they
start.
Just signed up for a support contract since the status page said everything
was fine.
R: colinbartlett
And by "announcing" you mean indicating everything is a-okay with the green
checkmark but putting a tiny footnote next to it.
R: frabcus
We (ScraperWiki) can still access some of our US East servers. From those, we can daisy-chain SSH into the ones that are offline. Those servers can't see the world, but are working fine and can see other EC2 instances.
R: devy
Hi, can you use port forwarding to get the website up on those affected nodes?
R: jpea
I wonder if it extends beyond Amazon, since my gmail now doesn't pull anything
up after 2009, web or IMAP.
R: aquark
I'm getting external monitoring failures that are firing on and off, but have
no problem reaching the servers or the site.
Interestingly newrelic is reporting the site down at the same time it is
reporting a normal level of load on it.
R: joe010
I've recently moved some of our servers over to Digital Ocean, but I'm still
using AWS for DNS since their Route 53 weighted DNS with health checks work as
a basic load balancer for our needs. I'm seeing DNS health checks that point
at individual servers at Digital Ocean that are showing 0.91 for a status (1 being up and 0 being down). The alarms attached to the health checks keep flipping from "alarm" to "ok" and causing tons of alerts. As of about 15 minutes ago all of my checks started holding steady back at a status of 1 (ok).
Good stuff :)
R: jd007
ELBs are also having problems. One of mine is reporting all instances out of
service (transient error), then all instances in service, intermittently. But
the ELB is never reachable (even when it reports all instances healthy and
up). All instances behind this one are reachable, up and running. US-East-1.
Some of our other instances are reachable but some are not, same as others
have been reporting.
R: sadris
Why does this never happen to AWS West? I should really get around to migrating over, with 3 outages in the past 2 years on US East.
R: knodi
It does happen, you just never notice because you don't have instances in US
West.
R: brryant
There are definitely issues with network connectivity between AZs as well as
public internet connectivity.
R: jipumarino
I got into one of our machines that presented the connectivity issues from
another one which was still reachable. It had no external (curl
www.google.com) connectivity. Just two minutes ago it started resolving again.
R: ihaveajob
It looks ok now for us (appfluence.com), but even when it was down, our
website was still up, only the sync services went offline. And even then, they
were accessible from the web server...
R: NotDaveLane
It's region us-east-1c for now, at least from where I'm sitting... I have
instances in other us-east datacenters that are fine.
R: trevyn
Specific availability zones in a region are mapped per-account, so your
east-1c might be my east-1a:
"To ensure that resources are distributed across the Availability Zones for a
region, we independently map Availability Zones to identifiers for each
account. For example, your Availability Zone us-east-1a might not be the same
location as us-east-1a for another account. Note that there's no way for you
to coordinate Availability Zones between accounts."
[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-
reg...](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-
availability-zones.html)
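If you want to see the mapping for your own account, newer AWS APIs (which postdate this thread) expose a stable zone ID alongside the account-specific name. A minimal boto3 sketch, assuming configured credentials:

    import boto3

    # ZoneId (e.g. use1-az1) is stable across accounts; ZoneName is the
    # per-account label the docs quote above warns about.
    ec2 = boto3.client('ec2', region_name='us-east-1')
    for az in ec2.describe_availability_zones()['AvailabilityZones']:
        print(az['ZoneName'], '->', az.get('ZoneId', 'n/a'))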
R: ceejayoz
I wonder how that works with new zones. I remember us-east-1e being added
separately to the original four. Presumably, that one's the same for all
accounts that'd already signed up at the time.
R: scrabble
So what is the best way to balance a hosted site between Amazon and a separate
service? Because these connectivity issues suck.
R: bredman
One option would be to use Route 53 weighted round robin (WRR) DNS records and
health checks to accomplish this.
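With today's boto3 (which postdates this thread), the pieces look roughly like this; the zone ID, record name, and IPs are placeholders:

    import boto3

    r53 = boto3.client('route53')

    # One weighted record per provider; Route 53 stops returning a record
    # whose health check is failing. All values below are placeholders.
    hc = r53.create_health_check(
        CallerReference='do-web-1',
        HealthCheckConfig={'IPAddress': '203.0.113.10', 'Port': 80,
                           'Type': 'HTTP', 'ResourcePath': '/'})

    r53.change_resource_record_sets(
        HostedZoneId='Z123EXAMPLE',
        ChangeBatch={'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': 'www.example.com.',
                'Type': 'A',
                'SetIdentifier': 'digitalocean',
                'Weight': 50,
                'TTL': 60,
                'ResourceRecords': [{'Value': '203.0.113.10'}],
                'HealthCheckId': hc['HealthCheck']['Id']}}]})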
R: jday
this has taken openredis offline:
[https://twitter.com/openredis](https://twitter.com/openredis)
heroku is also reporting issues:
[https://status.heroku.com/incidents/554](https://status.heroku.com/incidents/554)
R: martin_
All of mine just started magically working
R: martin_
I retract that statement!
[http://shutter.io/img/vs6jjs/raw](http://shutter.io/img/vs6jjs/raw)
R: TallboyOne
Aaand we're back up now.
R: knodi
Always on a Friday...
R: jlgaddis
Not just any Friday...
$ date
Fri Sep 13 12:07:58 EDT 2013
R: xdissent
Not just any Friday the 13th...
[http://en.wikipedia.org/wiki/Programmers'_Day](http://en.wikipedia.org/wiki/Programmers'_Day)
R: TallboyOne
Not just any Friday the 13th Programmer's Day...
[http://www.holidayinsights.com/other/fortunecookie.htm](http://www.holidayinsights.com/other/fortunecookie.htm)
R: o0-0o
Down in Manhattan
|
HACKER_NEWS
|
compile tensorflow lite static library using QCC on QNX Platform
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
TensorFlow installed from (source or binary):binary
TensorFlow version (use command below):master latest
Python version: 2.7
Bazel version (if compiling from source):0.7
Describe the problem
My goal is to compile the TensorFlow Lite static library using the QNX QCC compiler. Is this possible?
And do I need to modify the code in kernels/*.cc or other source code, and if so, how?
Please give me some advice, thanks.
the make script is just like below:
make script file:
function make_qnx() {
    if [ ! -d $1 ]; then
        mkdir $1 || exit_popd 1
    fi
    cd $1
    export QNX_ABI=$1
    source /opt/qnx660/qnx660-env.sh
    ~/DevTools/cmake-3.9.0-rc5-Linux-x86_64/bin/cmake -G "Unix Makefiles" -DQNX_PLATFORM_ABI="$1" -DPLATFORM_ABI=qnx -DCMAKE_TOOLCHAIN_FILE=${CURRENT_SCRIPT_DIR}/toolchains/qnx.toolchain.cmake
    make
}
qnx.toolchain.cmake file :
cmake_minimum_required(VERSION 2.8)
set(CMAKE_SYSTEM_NAME QNX)
set(QNX_PLATFORM_ABI "$ENV{QNX_ABI}")
if(QNX_PLATFORM_ABI STREQUAL "x86")
    set(ARCH_NAME gcc_ntox86)
    set(CMAKE_C_COMPILER /opt/qnx660/host/linux/x86/usr/bin/qcc)
    set(CMAKE_C_COMPILER_TARGET ${ARCH_NAME})
    set(CMAKE_CXX_COMPILER /opt/qnx660/host/linux/x86/usr/bin/QCC)
    set(CMAKE_CXX_COMPILER_TARGET ${ARCH_NAME})
elseif(QNX_PLATFORM_ABI STREQUAL "armv7")
    set(ARCH_NAME gcc_ntoarmv7le)
    set(CMAKE_C_COMPILER /opt/qnx660/host/linux/x86/usr/bin/qcc)
    set(CMAKE_C_COMPILER_TARGET ${ARCH_NAME})
    set(CMAKE_CXX_COMPILER /opt/qnx660/host/linux/x86/usr/bin/QCC)
    set(CMAKE_CXX_COMPILER_TARGET ${ARCH_NAME})
else()
    message(SEND_ERROR "Unknown QNX_PLATFORM_ABI=${QNX_PLATFORM_ABI} is specified.")
endif()
@andrehentz @aselle
It should be possible. You will likely have to modify some things. Could you give it a try? We can help you when you run into trouble you can't solve. You can look at the Makefile for a simple build environment. You will likely not be able to build using Bazel, since it doesn't support QNX toolchains.
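A rough, untested starting point (the Makefile path is an assumption about where TFLite lives in your checkout, and the -V flags assume the QNX 6.6 target names from your toolchain file) would be to point the TFLite Makefile at the QNX compilers:

    make -f tensorflow/contrib/lite/Makefile \
        CC="/opt/qnx660/host/linux/x86/usr/bin/qcc -Vgcc_ntoarmv7le" \
        CXX="/opt/qnx660/host/linux/x86/usr/bin/QCC -Vgcc_ntoarmv7le"

You may then need to patch kernels that rely on Linux-specific headers; which ones is exactly what a first build attempt will tell you.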
It has been 14 days with no activity and the awaiting response label was assigned. Is this still an issue? Please update the label and/or status accordingly.
Nagging Awaiting Response: It has been 14 days with no activity and the awaiting response label was assigned. Is this still an issue?
Has this been resolved? It would be very helpful if you could please publish steps or advice
Nagging Awaiting Response: It has been 14 days with no activity and the awaiting response label was assigned. Is this still an issue?
Automatically closing due to lack of recent activity. Please update the issue when new information becomes available, and we will reopen the issue. Thanks!
I would also like to have tensorflow lite on QNX. Hence, I am willing to try with some support.
So, do I need to port Bazel first to QNX ?
@resorcap Hi, I am trying to compile tflite for snapdragon sa8295p with qnx710. How do you change the build scripts and lib? I would appreciate if you could give me some advice.
|
GITHUB_ARCHIVE
|
import {newFactory, CreepOrder, fullfillOrder as fullfillOrders} from 'utils/creepfactory'
import * as Tower from 'role/tower'

type RoomName = "E22N27" | "E22N26"
type RoomOpts = {roomName: RoomName}

type StaticHarvestMission = {
  type: 'StaticHarvest',
  creep: Creep,
  source: Source,
}
type UpgradeControllerMission = {
  type: 'UpgradeController',
  controller: StructureController
}
type Mission = StaticHarvestMission;

type MoveToPlan = {
  plan: 'MoveTo',
  creep: Creep,
  target: {x: number, y: number}
}
type Plan = MoveToPlan;

export const run = (opts: RoomOpts) => {
  const room = Game.rooms[opts.roomName]
  const factory = newFactory({room})
  const sources = room.find(FIND_SOURCES)
  const orders: CreepOrder[] = []
  const missions: Mission[] = []

  Tower.run(room, 100)

  if (!room.controller) {
    console.log("No controller in room " + room.name);
    return
  }
  if (room.controller.my) {
    //missions.push({type: 'UpgradeController', controller: room.controller})
  } else {
    console.log("Claim controller not implemented");
    return
  }

  // For each source, either queue an order for a new harvester creep or,
  // if the creep already exists, assign it a StaticHarvest mission.
  for (let i in sources) {
    let source = sources[i];
    let res = factory.creep({ name: `harvester:${i}`, body: ['work', 'work', 'work', 'work', 'move', 'move']});
    if (res instanceof CreepOrder) {
      orders.push(res);
    } else {
      missions.push({type: 'StaticHarvest', creep: res, source: source})
    }
  }

  // Execute missions: park each harvester on the container adjacent to its
  // source and harvest only while the container has free capacity.
  const plans: Plan[] = [];
  for (let mission of missions) {
    const creep = mission.creep;
    const container = mission.source.pos.findInRange(FIND_STRUCTURES, 1, {filter: s => s.structureType === STRUCTURE_CONTAINER})[0] as StructureContainer
    if (container) {
      if (creep.pos.isEqualTo(container)) {
        if (container.store.getFreeCapacity('energy') > 0) {
          creep.harvest(mission.source)
        }
      } else {
        creep.moveTo(container);
      }
    }
  }

  fullfillOrders(orders);
}
|
STACK_EDU
|
Oracle Corp. handed database administrators a heavy patch load Tuesday for 82 critical flaws affecting a range of products. Attackers could exploit the security holes to access sensitive information, overwrite files or launch SQL injection attacks.
The Redwood Shores, Calif.-based vendor released few details on what the flaws are, but several third-party researchers who discovered some of the vulnerabilities have released information on their own. That's one reason Cupertino, Calif.-based AV giant Symantec Corp. Tuesday raised its Threatcon to Level 2 on a 1-to-4 scale.
"The DeepSight Threat Analyst Team is elevating the ThreatCon to Level 2" because of the patch release, Symantec said in an e-mail advisory. "This critical patch update addresses 82 issues across multiple Oracle products. Although Oracle has not released technical details regarding these issues to the public, technical information regarding several of the vulnerabilities has already been posted to public mailing lists. This additional information may reduce the amount of time that an attacker will require to isolate and exploit these vulnerabilities."
An advisory from Danish vulnerability clearinghouse Secunia revealed some of the early details:
- Input passed to various parameters in the procedures within the DBMS_DATAPUMP, DBMS_REGISTRY, DBMS_CDC_UTILITY, DBMS_CDC_PUBLISH, DBMS_METADATA_UTIL, and DBMS_METADATA_INT Oracle PL/SQL packages is not properly sanitized before being used in a SQL query. Attackers could exploit this to manipulate SQL queries by injecting arbitrary SQL code. The flaws affect Oracle 10g Release 1 (10.1). (A short illustration of this flaw class follows the list.)
- Input passed to various parameters in the ATTACH_JOB, HAS_PRIVS, and OPEN_JOB procedures within the SYS.KUPV$FT package is not properly sanitized before being used in a SQL query. This can be exploited to manipulate SQL queries by injecting arbitrary SQL code. This also affects Oracle 10g Release 1.
- Input passed to various parameters in several procedures within the SYS.KUPV$FT_INT package is not properly sanitized before being used in a SQL query. This can be exploited to manipulate SQL queries by injecting arbitrary SQL code. This affects Oracle 10g Release 1.
- Design errors in the Oracle Database cause the Oracle TDE (Transparent Data Encryption) wallet password to be logged in clear text, and the master key for the TDE wallet to be stored unencrypted. This affects Oracle Database 10g Release 2 (10.2.0.1).
- Some errors in the reports component of the Oracle Application Server can be exploited to read parts of any files or overwrite any files via Oracle Reports. This affects versions 184.108.40.206 through 10.1.0.2.
- Input passed to the AUTH_ALTER_SESSION attribute in a TNS authentication message is not properly sanitized before being used in an SQL query. This can be exploited to manipulate SQL queries by injecting arbitrary SQL code. Successful exploitation allows execution of arbitrary SQL queries with SYS user privileges. This affects Oracle 8i (8.1.7.x.x), Oracle 9i (220.127.116.11), Oracle 10g Release 1 (10.1.0.4.2), and Oracle 10g Release 2 (10.2.0.1.0).
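To illustrate the flaw class behind most of these advisories, here is a minimal Python sketch of unsanitized input concatenated into SQL versus a bind variable; sqlite3 stands in for Oracle PL/SQL, and the table and payload are invented for the example:

    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    payload = "x' OR '1'='1"  # classic injection input

    # Vulnerable: the payload rewrites the WHERE clause and returns every row.
    print(conn.execute(
        "SELECT * FROM users WHERE name = '%s'" % payload).fetchall())

    # Safe: a bind variable treats the payload as a plain string literal.
    print(conn.execute(
        "SELECT * FROM users WHERE name = ?", (payload,)).fetchall())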
In total, the various flaws affect the following products:
- Oracle Database 10g Release 2, version 10.2.0.1
- Oracle Database 10g Release 1, versions 10.1.0.3, 10.1.0.4, 10.1.0.5
- Oracle9i Database Release 2, versions 18.104.22.168, 22.214.171.124
- Oracle8i Database Release 3, version 126.96.36.199
- Oracle Enterprise Manager 10g Grid Control, versions 10.1.0.3, 10.1.0.4
- Oracle Application Server 10g Release 2, versions 10.1.2.0.0, 10.1.2.0.1, 10.1.2.0.2, 10.1.2.1.0
- Oracle Application Server 10g Release 1 (9.0.4), versions 188.8.131.52, 184.108.40.206
- Oracle Collaboration Suite 10g Release 1, versions 10.1.1, 10.1.2
- Oracle9i Collaboration Suite Release 2, version 220.127.116.11
- Oracle E-Business Suite Release 11i, versions 11.5.1 through 11.5.10 CU2
- Oracle E-Business Suite Release 11.0
- PeopleSoft Enterprise Portal, versions 8.4, 8.8, 8.9
- JD Edwards EnterpriseOne Tools, OneWorld Tools, versions 8.95.F1, SP23_L1
Pete Finnigan, an Oracle expert and author of Oracle Security Step By Step, assessed the flaws and fixes in his blog Tuesday:
"This seems like a good mixed bag of fixes, quite a lot in total and this time it seems possible to isolate the areas affected in more cases due to the more explicit naming of some packages, programs and commands," he said.
|
OPCFW_CODE
|
You would have heard it Riddle?
Hit me once I start crying ,
Hit me again I sleep like a baby .
Touch my neighbors I undergo me metamorphosis ,
Keep hitting them I create time travel.
And I am there where u all watch.
What am I ?
hint : sometimes i am shown as parlleotriangallogram .
Is the use of "u" instead of "you" important to the puzzle?
Or "me metamorphosis"?
@feelinferrety slightly..yes
I have to admit I'm impressed--the only Google results for parlleotriangallogram take you to this page.
Google disagrees. :P
It is
The play/pause button (like on youtube)
Hit me once I start crying ,
when you hit play
Hit me again I sleep like a baby .
when you hit pause
Touch my neighbors I undergo metamorphosis ,
the next button ???
Keep hitting them I create time travel.
forward
And I am there where u all watch.
when you watch a video
It could be a
digital wrist watch
As,
Hit me once I start crying,
For verifying the feature of alarm, we can hit specific button to make the watch make sound
Hit me again I sleep like a baby .
hit the same button to keep it off
Touch my neighbors I undergo me metamorphosis ,
These can be lights on / change colors feature of few digital watches
Keep hitting them I create time travel.
Some buttons when kept on hitting, increases minutes, hours, dates, months and years - a kind of time travel (either future or past)
And I am there where u all watch.
Is of course located on a wrist to watch !
Probably applies for a phone as well
It could be
A video player
Because...
You press once,
It "cries", as in the speakers play the content (assuming the video contains sound)
You press it again,
It stops the video and it becomes quiet as a sleeping baby
Touch my neighbors,
As in the buttons for quality and resolution etc...
Keep on pressing
It fast forwards, like on YouTube
This was the answer I was going to put. The only difference I was thinking was for: "Touch my neighbors I undergo me metamorphosis" would be other videos and the display plays a different video in the same place, as if it morphed.
Is it a:
Television?
Hit me once I start crying ,
The power button turns it on, creating sound
Hit me again I sleep like a baby .
The power button turns it off again
Touch my neighbors I undergo me metamorphosis ,
A remote control can change the channel, presenting a different series of shapes and sounds to the viewer (alternatively, DVD players, DVR etc. connected to the TV can have much the same effect)
Keep hitting them I create time travel.
Lots of TV channels have a +1 equivalent, showing the same shows an hour later. (alternatively, DVD players, DVR etc. have skip backward/forward options)
And I am there where u all watch.
Generally the centrepiece of a lounge/living room, where everyone can see it
My first thought was a television remote, similar logic. (I don't think this should be more than a comment, though)
It's:
A computer laptop's power button.
Hit me once I start crying,
You hit it once it starts making a noise
Hit me again I sleep like a baby .
Hit it again it either goes to sleep or hibernate (as configured)
Touch my neighbors I undergo me metamorphosis ,
Touch neighboring buttons and you make it undergo metamorphosis
Keep hitting them I create time travel.
Keep hitting them u create time travel - u could travel to future if you are preparing an article or a report. You can travel to past and recall what u did wrong with your work/game.
And I am there where u all watch.
And the power button is there where we all watch (near the screen).
You are
The Snooze Button
Hit me once I start crying ,
if you hit the snooze button, on some alarm clocks it will play the alarm sound
Hit me again I sleep like a baby .
sleep = snooze. You hit the snooze button to have the alarm 'sleep' for a few more minutes, before 'waking up' and going off again.
Touch my neighbors I undergo me metamorphosis ,
The other buttons on the alarm clock can be used to set and change the alarm time
Keep hitting them I create time travel.
You can also use them to change the time of the clock itself.
And I am there where u all watch.
a 'watch' is another kind of clock, although not usually one with a snooze button
|
STACK_EXCHANGE
|
A majority of your success in Diablo 3 will depend on your items. Items are at the core of every Diablo game, so it doesn’t surprise us that there are many mechanics and systems in the game associated with them. We’ve recently covered socketing in Diablo II, and now it’s time to explain this mechanic in Diablo 3 as well. One thing, in particular, is of interest to many returning and new players, and that’s adding sockets to items. Before we begin with the analysis, let’s answer the base question first: can you add sockets to items in Diablo 3?
Sockets make your items more powerful
Sockets are holes in your items that serve as placeholders for gems, runes, and jewels. By empowering your items with gems, runes, and jewels, you can greatly increase the stats of your weapons and other types of gear. Each grey circle you see on your user interface corresponds to a single socket. A single socket can hold only 1 item (for example, a single weapon socket can only hold one gem). Socketing can make your items extremely powerful, which is why certain limits are implemented.
The maximum number of sockets that you can have on a certain item was greatly reduced in Diablo 3 when compared to Diablo 2, but this balanced itself out with the fact that several new types of items can have sockets, such as jewelry. The maximum number of sockets that an item can have depends greatly on the type of item. Weapons, Helms, Shields, Off-Hand items, Amulets, and Rings can have only a single socket. Pants can have two sockets, and Armor can have a maximum of 3 sockets. Under the best circumstances, the most sockets you can have is 11, and this number cannot be changed.
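That maximum works out as follows, assuming one of every socketable slot: weapon (1) + helm (1) + shield/off-hand (1) + amulet (1) + two rings (2) + pants (2) + armor (3) = 11.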
You can replace the gems, runes, or jewels in the sockets at any time by visiting the jeweler. The items extracted from the sockets can be salvaged (one more change when compared to Diablo 2).
Can you add sockets to items?
You can add sockets to items, but only if the item doesn’t already have one and its type permits sockets. For example, you can add a single socket to a weapon that doesn’t have a socket, but you can’t add an additional socket beyond that. The same goes for any other item type. Sockets can be added only if they don’t go over the already established limit. You can add sockets to items in Diablo 3 by using Ramaladni’s Gift or by visiting an Artisan. In the rest of this post, we’re going to explain both approaches.
Adding a socket to an item via Ramaladni’s Gift
Ramaladni’s Gift is a torment-only random world drop that allows you to add a socket to a weapon that doesn’t already have one. Additionally, you can use Ramaladni’s Gift to modify a weapon with a socket, but the socket has to be removed prior to the attempt through enchanting.
You cannot use Ramaladni’s Gift to add a second socket to your weapons since it’s impossible for a weapon to have two sockets.
Ramaladni’s Gift is a single-use item that is destroyed when used. A weapon can receive a Gift a second time only if the first socket has been removed.
Adding a socket to an item by visiting Mystic Artisan
You can add a socket to an item that doesn’t already have one by visiting the Mystic, Myriam Jahzia. Myriam Jahzia was added in the Reaper of Souls expansion, and she offers transmogrification and enchanting services. Through enchanting, you can select one primary property to re-roll into a socket. The number of sockets you acquire through enchanting cannot exceed the maximum limit of sockets that an item can have. You cannot re-roll legendary special properties.
Re-rolling a property is not free; it usually costs gold and certain materials. The more powerful the item is, the rarer the materials needed will be, and the higher the overall cost of enchanting. You cannot use enchanting services to re-roll an additional affix into a second socket (if the item cannot have two sockets), and once an item has been modified through enchanting, it cannot be traded to another player. It becomes account-bound.
You can find the Mystic Artisan in all major towns if you have completed Act 5. As of patch 2.2.1, you don’t have to own the Reaper of Souls expansion to have access to the Mystic Artisan. This was a welcome change, as originally Diablo 3 Vanilla players did not have access to her services.
As you can see, players can add sockets to items in Diablo 3, and they have two ways of doing it. First, you can use an item called Ramaladni’s Gift to add a socket to a weapon that doesn’t have one, but only once. The item also works only on weapons, so don’t attempt to use it on jewelry or other item types. Second, you can visit the Mystic Artisan in town to re-roll a single primary property of an item into a socket, but only if the item doesn’t already have one. Special legendary properties cannot be re-rolled, and the service will cost you; the cost depends on the quality of the item. You can likewise employ enchanting services to remove a socket from an item.
Have something to add? Let us know in the comments below!
|
OPCFW_CODE
|
Alma mater: Rensselaer Polytechnic Institute
Ewa Deelman is a Research Professor of Computer Science at the University of Southern California's Information Sciences Institute. She is a principal investigator (PI) of the Pegasus Software Project, a Research Director of Science Automation Technologies, and a Principal Scientist at the Information Sciences Institute. She is a Fellow of the American Association for the Advancement of Science (AAAS) and an IEEE Fellow.
Deelman attended Wells College where she obtained her B.A. in mathematics in 1987. She received her M.S. in computer science at State University of New York in 1991, and later went on to earn her Ph.D. in computer science at Rensselaer Polytechnic Institute in 1997.
Her current research interests include scientific workflow systems, cloud computing, and resource management, with particular emphasis on the management of scientific workflow systems. She applies advances in knowledge technologies to the management of large amounts of data and metadata.
- E. Deelman, G. Singh, M. Su, J. Blythe, Y. Gil, C. Kesselman, G. Mehta, K. Vahi, B. G. Berriman, J. Good, A. Laity, J. C. Jacob, and D. S. Katz, "Pegasus: a Framework for Mapping Complex Scientific Workflows onto Distributed Systems", Scientific Programming Journal; 13(3), pp. 219-237 (2005)
- M. Malawski, G. Juve, E. Deelman, and J. Nabrzyski, "Algorithms for cost- and deadline-constrained provisioning for scientific workflow ensembles in IaaS clouds", Future Generation Computer Systems; 48, pp. 1-18 (2015)
- Ewa Deelman Named AAAS Fellow, by American Association for the Advancement of Science; published October 19, 2019; retrieved March 29, 2020
- IEEE Computer Society Members Elevated to Fellow for 2018, by IEEE Computer Society (Press Room), in Institute of Electrical and Electronics Engineers; published December 11, 2017; retrieved March 29, 2020
- HPDC 2015 Achievement Award, in High-Performance Parallel and Distributed Computing; published June 19, 2015; retrieved March 29, 2020
- Deelman, Ewa (1997). Performance Optimization of Parallel Discrete Event Simulation of Spatially Explicit Problems (PhD thesis). Rensselaer Polytechnic Institute; retrieved March 29, 2020
- "Prof Ewa Deelman, research profile – personal details (The USC Information Sciences Institute (ISI)". Retrieved 27 March 2020.
- "SciTech – Science Automation Technologies". Retrieved 2020-04-06.
- "Ewa Deelman - Google Scholar Citations". scholar.google.co.in. Retrieved 2020-04-06.
- Ewa Deelman official website
- Ewa Deelman on LinkedIn
- Ewa Deelman on Twitter
- Ewa Deelman on Google scholar
|
OPCFW_CODE
|
browser slows down when database contains a lot of images
Hello,
I currently have two deployments that use the viewer; one has 17,000+ images and the other has 24,000+, and they both run considerably slower than they did when the database didn't have as much data. Is it possible to make a simple edit in the code that will disable the initial querying of the entire database, so that users will be required to use the search/filter option before anything reaches the front end?
Thank you
What kind of PACS are you using? What default date filter are you using?
Hello, thanks for the reply. The PACS I'm using is DCM4CHEE. And what are you referring to with the default date filter? I didn't really edit the code apart from adding a login/accounts option. This current deployment is not the latest version available here, though. I'm not sure exactly how old it is or what version, if that information is needed.
We have a default number of days setting here:
https://github.com/OHIF/Viewers/blob/ebe0a40f3ef5fa7d13c66e7076ddeb1f42e755b5/Packages/ohif-study-list/client/components/studylist/studylistResult/studylistResult.js#L256
https://github.com/OHIF/Viewers/blob/33755f7b8b92e33dd45974315fb6c7fe779df8f8/Packages/ohif-servers/both/schema/servers.js#L184
https://github.com/OHIF/Viewers/blob/830146441dff80a76c13a51c94ddc644d5348e8f/config/ClearCanvasDIMSE.json#L35
If this is set at 1, you should hopefully only be loading the most recent studies when you first load the study list. Could you check if that helps?
OK, I'll try it out and I'll let you know how it goes. Thank you.
Hello, I tried it and it worked on the newer version of the code that I'm currently working on. I added "studyListDateFilterNumDays": 1 to the DIMSE config file and it filtered out the initial query.
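For reference, the change amounts to adding this key to the server entry in the DIMSE config file (a sketch with surrounding keys omitted; the exact layout follows the linked ClearCanvasDIMSE.json):

    {
        "studyListDateFilterNumDays": 1
    }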
However, the one currently deployed is running an older version and it doesn't work on it. I tried adding the variable to the DIMSE file, but it didn't change anything. The Packages folder doesn't contain "ohif-servers" or "ohif-study-list" folders. As mentioned, this is an older version of the code.
I'm going to eventually upgrade the system to use the newer version, but until then is there another option that could help with this problem?
It was probably called ohif-studies before. We are trying to consolidate / clean up the packages a bit.
You can try to look for the studylistResult.js file and then edit the start date and end dates manually:
https://github.com/OHIF/Viewers/blob/ebe0a40f3ef5fa7d13c66e7076ddeb1f42e755b5/Packages/ohif-study-list/client/components/studylist/studylistResult/studylistResult.js#L257-L262
hello,
There is no package called ohif-studies nor a variable called dateFilterNumDays. Included in the Packages folder are:
active-entry
cornerstone
design
dicomweb
dimseservice
hangingprotocols
hipaa-audit-log
lesiontracker
meteor-stale-session
ohif-core
orthanc-remote
validatejs
viewerbase
wadoproxy
worklist
Yeah I know, just look for something like 'startDate'. You can check the worklist package. If you know the exact commit, it's easier to look around.
Unfortunately I couldn't find out which commit it is. I'll just push through with the newer deployment as fast as I can. Thank you for taking the time to help me.
|
GITHUB_ARCHIVE
|
Double Layer DVD works OK in an RNS 510
A friend of mine drives a 2009 Skoda Octavia with a Columbus GPS nav set, similar to the RNS 510.
He has no disc for his nav.
I downloaded the cd-8195 ISO ver. 12 West Europe, which is more than 4.7 GB.
Before I try to burn it on a double layer DVD, I wonder if the RNS 510 will accept burned double layer DVDs.
My own car is an Avensis, which does not. The TNS700 unit in my car accepts only pressed ones or single layer burned ones.
The Avensis map software since ca. 2010 does this check for PTP/OTP pressing/branding.
If the Columbus / RNS 510 also does this check, I will not even try the burning on a DL DVD.
Can anybody tell me from his own experience if it worked on a Columbus / RNS 510?
Thank you for your attention.
Follow up: DVD burned on a Verbatim +R DL with ImgBurn as DVD-ROM at speed 6x.
We put it in his RNS 510 and the DVD was apparently being read.
But after a while the screen said DVD ERROR.
After that it said: "want to use DVD or HDD?", but before we could respond to that choice, the message was gone.
But we could program a destination, and my friend reported after his ride home that when he removed the disc while driving, the system told him that the guidance would stop.
So my friend thinks the DVD worked, but I doubt it because of the error message.
The system might have used its HDD sources after all.
Is there anyone with the same experience?
You need to use a good brand and you did. Verbatim is the one to use. The RNS-510 is capable of reading -R and +R discs.
When you burned the disc, did you do a verify read afterwards? I always burn these discs with an old version of Nero and that has never failed. Also use the lowest speed possible.
I think you have a disc with a "spot" where the data is not burned right. This means that as long the RNS-510 is navigating from the DVD it will work as long as you don't hit that "spot" on the disc. But when you copy the disc to the internal HDD, it will read that "spot" and it fails.
Burn another copy and try again.
Thank you for your to-the-point reply.
My (retired) friend just made a three-week trip in France and his RNS-510 worked, so he says.
But I am not sure which data source he used, i.e. the DVD or HDD.
I think we will meet again soon (he and his car) and I will try to determine the current map version written on the HDD.
And I'll post an update after that.
|
OPCFW_CODE
|
I have an API whose destination endpoint now enforces TLS 1.1 as a minimum. This was changed recently to NOT allow TLS 1.0.
The server is 2012 server
When looking in the Windows registry there are no keys for TLS 1.1 or TLS 1.2, as per the link:
The machine does not have the latest security updates.
My question is: although the entries are not in the registry, do I need to add them (i.e. TLS 1.1 and TLS 1.2) to be able to allow the app to make a connection OUT using TLS 1.1 or TLS 1.2? Does the fact they are not there, and we are running Windows 2012, mean TLS 1.1 is enabled, or do I have to add the keys to ENABLE it?
Have a read, but Server 2012 should have TLS 1.1 & 1.2 enabled by default.
Also, because this is an outgoing connection, it is a client connection. The Server part of the connection is at the far end.
To make things a bit easier as well you can use a tool to set all the registry settings for TLS and Ciphers etc according to the security you need. I have used this in the past and seems pretty good and easy to use.
I would also get the server completely up to date to make sure you aren't missing any fixes that may apply to this.
Yes, but the confusing thing being: there are no reg keys for TLS 1.1 or TLS 1.2, so do I just assume it is ENABLED, and if I ever need to DISABLE it I have to create the keys accordingly?
I agree about our client going out, and I can see our initial 1.0 hello going out. If I do a check on the endpoint it says TLS 1.0 is disabled, so I would expect to see a 1.1 hello go out next, but I am not seeing this.
I am getting RST,ACK at our initial 1.0 connection, after which we make another connection out, but still on 1.0. I am guessing I should try a different version, i.e. 1.1, but is this a response I should get back from the endpoint I am trying to connect to?
So I am questioning whether, because the 2012 server does not have it in the registry, it will not go out using this version.
If you run a check against this site and it shows that they are enabled then yes you should be fine and there is no need to add the registry entries for those. I would double check the ciphers though.
By default TLS 1.1 & TLS 1.2 are enabled on server 2012 & server 2012r2.
So they should be available and working unless you've turned them off.
My guess is that the app on your end is defaulting to initiating a TLS 1.0 connection. This is being refused by the remote server. Your app should be configured to initiate its connection using TLS 1.1 or preferably TLS 1.2 by default.
I'm assuming it's something Microsoft / .net based and that it is using SChannel. If not then that's a different kettle of fish.
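If it is SChannel/.NET, one quick way to see what the far end will actually accept, independent of your app, is to probe it per TLS version. A minimal Python 3.7+ sketch; 'api.example.com' is a placeholder, and note that Python builds against newer OpenSSL may refuse to even offer TLS 1.0/1.1:

    import socket
    import ssl

    HOST = 'api.example.com'  # placeholder for the real endpoint

    for label, ver in [('TLS 1.0', ssl.TLSVersion.TLSv1),
                       ('TLS 1.1', ssl.TLSVersion.TLSv1_1),
                       ('TLS 1.2', ssl.TLSVersion.TLSv1_2)]:
        # Pin both ends of the allowed range to a single version so the
        # handshake result tells us exactly what the server accepts.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        ctx.minimum_version = ver
        ctx.maximum_version = ver
        try:
            with socket.create_connection((HOST, 443), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=HOST):
                    print(label, 'accepted')
        except (ssl.SSLError, OSError) as exc:
            print(label, 'rejected:', exc)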
"I am getting RST,ACK at our initial 1.0 connection, after which we make another connection out, but still on 1.0. I am guessing I should try a different version, i.e. 1.1, but is this a response I should get back from the endpoint I am trying to connect to?" Not sure, but it is the reply that you are getting :)
RST is something forcibly closing the connection. Maybe that is messing up your app and it tried again with the same TLS version.
I just experienced an issue with one of our home-built APIs only talking TLS 1.0 and nothing higher. It turned out to be a .NET 4.0 issue. There's a registry key you can change to allow .NET 4.0 and higher to use stronger cryptography.
It might help you with your API
Don't assume they are enabled. If the keys don't exist, create them. We recently went through a PCI certification and I had to disable TLS 1.0 and all SSL versions. I then manually created the TLS 1.1 and 1.2 keys to enable it.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001
I tested by connecting to a web site that only allowed TLS 1.2, toggling the version of TLS in use on the client.
The app uses the .NET Framework to control what TLS version it uses. We were using the 4.0 Framework, which defaults to TLS 1.0, which is not allowed. To fix this, there is a registry change that makes the Framework use a different protocol.
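For reference, the key usually cited for this is SchUseStrongCrypto (shown here for .NET 4.x; on 64-bit systems the Wow6432Node variant should be set as well, and as always, test before rolling out):

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319]
"SchUseStrongCrypto"=dword:00000001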
We had issues on our end that API calls didn't get through to our supplier. We realised pretty soon that they had deactivated support for SSLv3 and TLS1.0 on their end. Using the tool that Adam Gadoury recommended (https://www.nartac.com/Products/IISCrypto) and using "best practices" solved our problems. :)
|
OPCFW_CODE
|
import tensorflow as tf
import os
import re
from textgenrnn import textgenrnn
import time
import click
import ujson as json


def process_tweet_text(text):
    text = re.sub(r'http\S+', '', text)  # Remove URLs
    text = re.sub(r'@[a-zA-Z0-9_]+', '', text)  # Remove @ mentions
    text = text.strip(" ")  # Remove whitespace resulting from above
    text = re.sub(r' +', ' ', text)  # Remove redundant spaces
    # Handle common HTML entities
    text = re.sub(r'&lt;', '<', text)
    text = re.sub(r'&gt;', '>', text)
    text = re.sub(r'&amp;', '&', text)
    return text


def train_model(infile, size, epoch):
    # Train (or fine-tune) a textgenrnn model on up to `size` tweets read
    # from a newline-delimited JSON file of tweet objects.
    cfg = {'num_epochs': epoch,
           'gen_epochs': 1,
           'batch_size': 128,
           'train_size': 1.0,
           'new_model': False,
           'model_config': {'rnn_layers': 2,
                            'rnn_size': 128,
                            'rnn_bidirectional': False,
                            'max_length': 40,
                            'dim_embeddings': 100,
                            'word_level': False
                            }
           }
    texts = []
    context_labels = []
    print('Loading training sample from file...')
    start_time = time.time()
    with open(infile, 'r') as f:
        for line in f:
            try:
                s = json.loads(line)
                if 'text' in s.keys():
                    tweet_text = process_tweet_text(s['text'])
                    if tweet_text != '':
                        texts.append(tweet_text)
                        context_labels.append(s['user']['screen_name'])
                    if len(texts) == size:
                        break
            except ValueError:
                print('Skipping malformed JSON line...')
    print("Load time: {} seconds".format(time.time() - start_time))
    print('Actual sample size:', len(texts))
    textgen = textgenrnn(name='./weights/twitter_general')
    if cfg['new_model']:
        textgen.train_new_model(
            texts,
            context_labels=context_labels,
            num_epochs=cfg['num_epochs'],
            gen_epochs=cfg['gen_epochs'],
            batch_size=cfg['batch_size'],
            train_size=cfg['train_size'],
            rnn_layers=cfg['model_config']['rnn_layers'],
            rnn_size=cfg['model_config']['rnn_size'],
            rnn_bidirectional=cfg['model_config']['rnn_bidirectional'],
            max_length=cfg['model_config']['max_length'],
            dim_embeddings=cfg['model_config']['dim_embeddings'],
            word_level=cfg['model_config']['word_level'])
    else:
        textgen.train_on_texts(
            texts,
            context_labels=context_labels,
            num_epochs=cfg['num_epochs'],
            gen_epochs=cfg['gen_epochs'],
            train_size=cfg['train_size'],
            batch_size=cfg['batch_size'])


@click.command()
@click.option('--infile', '-i',
              required=True,
              help='Enter the json file storing the original tweets (e.g. tweets.json).')
@click.option('--size', '-k', type=click.INT,
              required=True,
              help='Enter the training sample size.')
@click.option('--epoch', '-e', type=click.INT,
              required=True,
              help='Enter the training epoch.')
def main(infile, size, epoch):
    # silence tensorflow
    tf.logging.set_verbosity(tf.logging.ERROR)
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
    # training general tweets
    print('Training general tweets with sample size k = {}...'.format(size))
    start_time = time.time()
    try:
        train_model(infile, size, epoch)
    except ValueError:
        pass
    print("Training time: {} seconds".format(time.time() - start_time))


if __name__ == '__main__':
    main()
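Usage sketch (the script name and values are hypothetical): python train_tweets.py -i tweets.json -k 10000 -e 5. The input file is expected to contain one tweet JSON object per line, as produced by the Twitter streaming API.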
|
STACK_EDU
|