Open Source Business Conference - Day One Wrap-Up by Kevin Shockey Certainly the low-hanging fruit from my experience is the influence of open source software in commoditizing the software industry. Starting with Kim Polese's keynote speech, "Coping with commodities in the new IT Marketplace", or as she summarized in her conclusion: "Coping with great opportunity." Kim offered up an interesting comparison of the construction industry as an analogy for what she sees happening in IT. As Robert Lefkowitz would later comment in his presentation, she completed her obligation by crediting Doc Searls for drawing her attention to the use of construction-related titles in software, like builder, developer, and architect. She followed the analogy through to illustrate the commoditization of building materials, and how ultimately this enabled the creation of the largest industry in the world. The inference is that this should happen with software as well. If nothing else, listening to the speakers today leaves one with a great sense of optimism for open source and the software industry. Something Kim offered early in her speech was the prediction that there would be more money made because of open source than from it. This is certainly true for Google and similar web sites running a mostly open source stack of software. I truly enjoyed Kim's choice of the work of Hugh MacLeod from Gaping Void to illustrate her slide deck. I really enjoyed the graphics, and I think they complemented her optimistic message. After this session I caught Robert Lefkowitz and his discussion of "The Paradox of Choice." Robert is an extremely polished speaker with an even more polished set of ideas, relationships, and conclusions. It goes without saying that his presentation was thought-provoking, if a little confusing. I'm sure he knew that some in the audience would get lost in the twisting loops of his thought process, and in typical style he used that to prove his point.
I'm thankful to Robert for taking questions. He answered one that has been bugging me for a while. I wanted to know how software could become a commodity in the same way as other commodities like orange juice. He offered that it was not the actual software that was becoming interchangeable but the providers of the software. This clears up what I believe most people are referring to when they discuss software commoditization; however, it is contradictory to Kim's construction industry analogy. I think that there is a little of both going on, and maybe SpikeSource will help on the software side. Then again, maybe Tim O'Reilly's vision of web services holds the ultimate view of what software commoditization means. When I no longer care how functionality is provided to me, then I'll accept that software is interchangeable. Finishing the day was the long-anticipated presentation by Geoffrey Moore about why he believes open source has crossed the technology chasm. I'll have to get some sleep before I think I can give a good review of his speech. He offered some great perspectives that I hope to share tomorrow. Finally, I attended the SpikeSource Town Hall Meeting. My many thanks to Robyn Forman for the invitation. This was another very deep discussion which needs special coverage. The meeting was aimed at the many CIOs in attendance and included a lively exchange of experiences and straight-from-the-hip comments. I believe that most in attendance came away with a sense of some of the issues facing CIOs in the enterprise market. Do you share the sense of optimism?
ASP.net Website Project vs Web Application Project: single page updates I recently revamped my company's website, creating a new asp.net Web Site Project (I believe). In the past, before the new site was created, I was able to do single-page updates/publishes to the site without having to recompile the entire site. Now, in order to publish changes, I need to build the site and publish the site.dll file along with all the updated files in order for the site to reflect the changes. The problem with this is that when I'm working on additions for the website that aren't ready for production, the build includes this code and causes compilation errors if I do not comment it out before I publish. I still have the old project/solution. I have compared the publish settings, but as far as I can tell the only real differences between the two projects are the target Framework and the editor used for the project (the old project was created in VS 2017, the new site was created with VS 2022). My company is currently looking into version control software, which should help with this situation. Is there a certain project type or project setting that will allow me to once again do single-page publishes without having to also publish the .dll file? "creating a new asp.net Website Project (I believe)" How can this be ambiguous? I created this project over a year ago and forgot which option I selected during the process, seeing as they don't have an option (from what I can tell) to view this information. I wouldn't call that ambiguous, but rather ill-informed. It is a challenge. If you want the ability to publish out just one page (and its code file), then you can't use a "web site application". But that choice ALSO takes the use of a project file off the table. When you create an asp.net WebForms project, you have 2 choices: Asp.net web site Asp.net web site "application" And the above means that when you create a "web site", you do NOT have a project file.
This setup allows you to publish/change ONE page, and you can then send that one page up to the web server. You also, of course, have to send both the .aspx page and the code-behind (source code) page to the server (that would be the .aspx.cs, or the .aspx.vb if using vb.net). So, this setup was often used in the old days, since it made deployment of one single page and code changes very easy. There was no need to re-publish and re-build the whole application. However, there are also HUGE advantages to using an "application". They are: You get/have/enjoy use of a .sln project file. And with a project file, you can include multiple projects in your one solution. Thus, you might have some set of class libraries that you created with all of your helper routines, or large amounts of your business logic. With a project file, these "multiple" projects will build for you during build, and build with ONE build operation. In addition, assembly references and "resolution" are far better with a project file (the application choice). And this setup also means that you NEVER have to include the source code files when publishing. In effect, your developer computer does the compiling of the project, strips out the source code, and only .dll's and the .aspx pages are then published and placed on the server. So, for a developer who is used to a "real" development environment in which you build + compile your code BEFORE publishing, an "application" is of course the way to go. Another great advantage is that your additional assemblies and referenced .dll's are NOT placed in the bin folder during development. So, you can do a "clean" of the project and "all" of the bin folder is deleted. So, you have a clean build each time, and you can even open up the bin folder and "delete" all files (.dll's) in that bin folder.
Upon a re-build, all of your correct assemblies, referenced library .dll's, and even NuGet packages are copied over into the bin folder for a final AND CORRECT build of the site. So, your .dll's are "produced" at build time and THEN placed in the bin folder. You thus always have the correct .dll's, and ONLY the final correct .dll's, placed in the bin folder. With a web site, a boatload of "stuff" and "junk" piles up - including, often, .dll's you don't need nor want anymore. Without a project file (the web site choice), references are not resolved by the project build process, but will be resolved by the web server and its compiling process. In other words, IIS the web server will be compiling your code, and not you anymore! This can be VERY problematic when, say, using the newer Roslyn compiler extensions (which may well not be installed on your target server - or worse yet, you are using some web hosting plan, and you can't install Roslyn compiler extensions). So, the only real downside of the "application" choice is that you can't publish JUST ONE page. However, the list of advantages such as pre-building, removal of source code, allowing multiple projects in one solution, and more still VERY much outweighs this lack of being able to update + publish one page and its code. And as noted, no source code is placed on the server when using an "application". So, that single-page publish? Well, sure, kind of nice: you simply copy the .aspx page and the .aspx.cs page to the server - it will automatically detect you have done that, and re-compile the code for the one page. However, this means that nearly each web page creates its own .dll - you have 100's of them as a result. So, what to do when you are 3-5 days into new features, but then you find one bug or one page that needs fixing on the web site? Well, hopefully you are using SCC (source code control), and thus you can revert back to the current published version, create a branch, make your changes, and re-publish.
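As a rough illustration of that single-page deployment model (all folder and file names here are hypothetical stand-ins made up for the sketch; on a real server the site would live under the IIS web root):

```shell
# Hypothetical folders standing in for a dev machine and the IIS web root.
mkdir -p /tmp/devsite /tmp/webserver

# With the "web site" choice, source stays on the server, so updating one
# page means copying just the .aspx markup plus its code-behind file:
echo '<%@ Page Language="C#" CodeFile="Default.aspx.cs" %>' > /tmp/devsite/Default.aspx
echo 'public partial class _Default : System.Web.UI.Page { }' > /tmp/devsite/Default.aspx.cs

cp /tmp/devsite/Default.aspx /tmp/devsite/Default.aspx.cs /tmp/webserver/
ls /tmp/webserver   # the server now has both files and recompiles on next hit
```

With the "application" choice, that single copy is replaced by a full build and publish of the site's .dll.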
So, while many sites in the past did use the "web site" option, and it's easy? Well, I find the lack of assembly resolution, and having to place NuGet packages (.dll's) in the bin folder, a VERY messy affair, and you lose too many features without a project file (such as being able to include other projects). So, if you are not using GitHub? Then at publish time, you have to ALSO make a whole copy of your project, and that would allow you to change the one page and then do a re-publish. So yes, the one downside is that to update one little page and one little bit of markup? Yes, you have to do a 100% full re-publish. So, in the past, you probably were using the asp.net "web site" choice, and thus you use file->open web site in place of file->open project from VS to work on that project. I simply find it very difficult to work without having a project file, and value the FAR better support for referencing libraries and NuGet packages MORE than the ability to publish one page. However, if you are not using some kind of source control, then if you are 2-5 days into a bunch of changes, and THEN you need to fix one bit of code or change one page? Then you are in trouble, and of course that "one feature" that you don't have in an "application" will bite you. You simply don't have a partial or single-page publish option. But then again, you never had that ability with, say, any desktop project either, did you? So, for 30+ years, what did developers do when working on some big desktop project and they needed to fix a single form or some code? Answer: they revert back to the current build, branch the build, make the changes, and THEN push out the changes they made to the master build. Only you can weigh the advantages of an "application", but having a proper project and correct build process for any application wins, for me, over the loss of the single-page publish ability. So, one solution here is to adopt source code control.
Another is to always make a copy of the whole project when you start some work that is going to take more than 1 day of time. Another idea? Well, I simply "hide" or "turn off", say, the new tab or button, so the new features can be easily turned off (hidden) should the need arise to re-publish the site to fix, say, a bug, or change some markup. I should also point out that you do NOT open such "web site" projects by using a project file - you MUST use file->open web site, and browse to the folder. As I stated, only you can decide whether having a project file, or having the ability to publish and push out one page + the code file, is the better choice for your needs. And some caution is required here, since tools exist to "help" convert a web site to an "application", but the reverse is not automated, and thus you have to manually change each page if you are going to try to convert or "revert" your project back to a "web site". Thank you for your informative response. It would appear the needs of my employer are best met using the Web Site project. After a bit of testing and demoing, I was able to reproduce the old procedure of publishing site changes. There are more obvious downsides to this approach, such as C# 6.0 features being unavailable. We are looking into Helix Core for version control but require more server resources to be able to run it. Thanks again for your helpful response, cheers. Well, what I stated only applies to WebForms projects, not MVC projects or .NET Core projects. So this no-project choice only applies to .NET Framework projects, and specifically to WebForms projects. You don't have the no-project choice if you're not using WebForms. If not WebForms, then none of this applies nor matters. WebForms is considered legacy now anyway. None of this post applies to .NET Core projects.
The PageFlow Application Block, MVC4 and Visual Studio 2010 I am in the process of evaluating technologies, prototyping and potentially defining application architecture for a suite of web based applications that have been written using ASP.Net WebForms, UIP, Unity, MVP, and a custom framework that wraps Entlib 3.1. Some of the problems we have with the current toolkit are: It is really hard to utilise the latest client-side technologies to build Progressive Enhancement into the presentation layer. User Controls are inherently hard to re-use, and the added complexity of producing Server Controls inhibits their use on a wide scale. The master page concept does not provide organisation-wide reuse. It needs to be customised heavily by the project teams anyway. It is extremely hard to produce accessible, compliant and cross-browser compatible HTML. There is little or no opportunity for having reusable screens (Views) across applications within the portfolio. One of the key requirements that we have is the ability to produce configurable/composable navigation flow. In the current architecture, the UIP application block allows for that with relative ease. We are evaluating the use of ASP.Net MVC4 for our future web applications. My question is this: Has anyone implemented UIP-style configurable/composable navigation capability with ASP.Net MVC? I came across the PageFlow Application Block: http://webclientguidance.codeplex.com/wikipage?title=Page%20Flow%20Application%20Block that is slated to solve this exact problem. I downloaded the PageFlow Application Block from the wcsf contrib project, http://wcsfcontrib.codeplex.com/, but the source code does not even compile on VS2010. Has anyone used the PageFlow application block? Is this application block unsupported, and therefore obsolete? If you had a requirement to provide configurable navigation capability for wizard-style web applications, how would you do it? Sorry for the long-winded question.
I wanted to provide as much context as possible. This is a good starting point: http://www.codeproject.com/Articles/42072/Flexible-Web-UI-Workflow-application-through-ASP-N Try this Google search: workflow foundation ui mvc This is a video showing the PageFlow in action: http://channel9.msdn.com/Blogs/mwink/Introduction-to-the-Windows-Workflow-Foundation-Pageflow-sample Thanks for the links. I will have a go! I have been able to build the PageFlow Guidance package with Visual Studio 2010. The source code is available on the codeplex site: http://wcsfcontrib.codeplex.com There were a few quirks, however. Here are the steps I followed: Upgrade the "PageFlow Application Block (VSTS Tests).sln" solution to VS2010. Ensure that you build the solution against the Entlib and Unity framework versions from the latest Web Client Guidance Package binaries: http://webclientguidance.codeplex.com/ Once you build this solution, you can also upgrade and build the vsix guidance package - "Pageflow Guidance Package (VSTS Tests).sln". Ensure that you have the GAT2010 and GAX2010 extensions installed on your copy of Visual Studio. Ensure that you reference the Microsoft.Practices.RecipeFramework* assemblies from GAX2010. There is a strange dependency between the pageflow package and the Web Client Guidance package. This is: Microsoft.Practices.RecipeFramework.Extensions.dll. You will have to get this assembly from the Web Client Guidance package binaries. Once you are able to build the PageFlow Application Block and the PageFlow Guidance Package, you are ready to go. There are a couple of bugs within the PageFlowHttpModule functionality when used in conjunction with Asp.Net MVC. The module relies on the .aspx http extension to determine when to use the module. This is easily fixed. This application block gives you two potential ways of configuring your pageflows within the app: Using Workflow Foundation 3.0 Using the XML configuration concept, exactly the same as UIP.
Both of these have pros and cons. We are looking to stay away from WF3.0, since it adds a dependency on the old version of Windows Workflow and on AppFabric in general. Until someone upgrades this to WF4.0, there's not much point in using it. In the meantime, the PageFlow Application Block is working like a charm and is doing everything we want to do with navigation.
Meet the Text Editor used by Linus Torvalds Available on GitHub and kernel.org Most people like to know what geniuses use, to try to get closer to the masters. The text editor that Linus Torvalds uses is one of the Emacs family; more precisely, a modified version with specific (and private) changes to his liking: a modified version of MicroEmacs that he called uEmacs. The uEmacs license is free-noncomm, that is, it is free, but you cannot use it for commercial purposes. It is called uEmacs/PK. I don't know why the PK is in the name, but I read that it incorporates work by Petri H. Kutvonen, University of Helsinki, Finland. Maybe the acronym PK comes from the initials of this guy's name. According to Linus Torvalds himself, he decided to modify it because a MicroEmacs update changed some things he liked from version 3.9 ("The best MicroEmacs ever" — Linus Torvalds), so he created his own version that has many of the features of that version (3.9) with a few more things added by him. uEmacs is available on GitHub and also on kernel.org. I can't tell if your distribution has it in its repository, but Gentoo has pretty much everything in its repository when it comes to developer tools! So, to install using Portage, just run the emerge command. Alternatively, clone the repository, either from GitHub or via kernel.org. Now just compile and install; it's so simple that it only comes with a Makefile. Note: if there is an "error" when compiling, see the additional step, otherwise ignore it! Additional step if there is an error Anyone who understands ncurses knows that there will be an error if we don't pass the correct parameters to compile. Despite using the package compiled by Portage, I tried to compile and got a linker error. As soon as I read the name of the file involved and a word from curses.h, tgoto, I already knew what the error was and I fixed it.
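The commands the article alludes to are roughly these (the Gentoo package name and the repository URLs are assumptions based on the article's references to Portage, GitHub and kernel.org; verify them before use):

```shell
# Gentoo / Portage (package name assumed):
sudo emerge app-editors/uemacs-pk

# Or build from source; repository URLs assumed:
git clone https://github.com/torvalds/uemacs.git
# or: git clone https://git.kernel.org/pub/scm/editors/uemacs/uemacs.git
cd uemacs
make && sudo make install   # installs the em binary
```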
If you have this same error, do this: open the Makefile with your editor, find the line that starts with the word LIBS, and add -ltinfo to it. I even made a pull request; if he accepts it, it will be about 10 years from now 😃. And then compile again: make && sudo make install; the binary is em. The command to open the editor without any files is simply em. To get help, run em --help. You can already feel that the editor is very basic, right?! 😃 In Emacs-style editors you don't need an insertion command; just start typing and the text already appears in the file! em [file-name] - opens the indicated file; Alt + z - save and exit; Ctrl + x d (quickly type x and then d) - quit without saving, if you press y to the question: Modified buffers exist. Leave anyway (y/n)?; And there are several other commands you can consult on the MicroEmacs Wiki that also work for uEmacs. ^X means Ctrl + x and M- means Alt. To the next!
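The Makefile edit described above looks roughly like this (the exact original contents of the LIBS line vary by version; the fix is simply appending -ltinfo so that tgoto and the other terminfo symbols resolve at link time):

```make
# Before (contents vary by version):
LIBS = -lcurses
# After: also link against libtinfo
LIBS = -lcurses -ltinfo
```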
Fascination About C++ assignment help The next section we are going to take a look at is the implementation section. This will describe what our object Greeter does. Since we inherited all functionality available in Object, we do not need to write it ourselves (now you must start to feel all warm and fuzzy about object oriented programming). We only need to tell the compiler the things that make our object Greeter so special: printf("Square brackets following a variable name indicate it is a pointer to a string of memory blocks the size of the type of the array element.\n"); To overcome the limitation of a single statement, we group the statements together to form a single one by enclosing them between { and }. This will be explained in more detail in a later chapter; for now it is enough to know that everything between the { and } symbols is our program, or main function. Learn how to determine the efficiency of your application, and all about the different algorithms for sorting and searching--both common problems when programming. At our school we will teach our girl the methods of an actress, which equals programming a class (we write the methods). When she finishes our school she is an actress. Her exam is our compiling phase. Pretty good tutorial, Himanshu. Sadly it's not working for me. Perhaps you can help me out a bit. BA buys almost everything from hotels, fuel, in-flight entertainment and advertising to office furnishing. BA introduced an eProcurement system in order to innovate this process. Each and every application you will ever write uses at least functions and variables. Functions tell a program what to do, and variables hold the data with which the functions work. In object-oriented languages, variables and functions are collected together in objects.
Consider what happens in the overloaded operator= when the implicit object AND the passed-in parameter (str) are both the variable alex. In this case, m_data is the same as str.m_data. The first thing that happens is that the function checks to see if the implicit object already contains a string. Excellent one. The example is very simple and easy to understand, but there is a problem, the same one "archana" pointed out. I've got exactly the same problem. How can I fix it? (Error: Connect Failed) 1. C functions: All of the parameters you pass to a function will be a copy within the function. This means every assignment that you make inside the function will not affect the variables outside the function; you are working on a copy, really: Overloading the assignment operator (operator=) is fairly straightforward, with one specific caveat that we'll get to. The assignment operator must be overloaded as a member function. Not needing to copy and paste code from one file to another, but simply copying a .h and a .m file from one project directory to another, is much simpler. The char type is capable of holding any member of the execution character set. It stores the same kind of data as an int (i.e. integers), but usually has a size of one byte. The size of a byte is specified by the macro CHAR_BIT, which gives the number of bits in a char (byte).
What is the role of Convolutional Neural Network? The principal applications of a convolutional neural network (CNN), which comprises one or more convolutional layers, are image processing, classification, segmentation, and other tasks on autocorrelated data. A convolutional neural network (CNN) is a form of artificial neural network that is specifically made to process pixel input and is used in image recognition and processing. CNNs are effective artificial intelligence (AI) systems for image processing that use deep learning to carry out both generative and descriptive tasks. They are frequently used in machine vision, which incorporates image and video recognition, recommender systems, and natural language processing (NLP). These networks take their cues from biological processes, since humans use their eyes to recognize objects from the moment they are born. Computers lack this ability; instead, they perceive images as numbers. In order to aid in image recognition and image categorization, CNNs give computers "human" eyes by granting them computer vision and enabling them to absorb all the pixels and numbers they observe. The creation of a feature map through the application of activation functions aids the computer in comprehending what it is viewing. The feature map is transmitted from layer to layer so that the computer can gather additional data until it can view the entire scene. Convolution is the mathematical operation that gives convolutional neural networks their name. CNNs use this specialized linear operation in place of general matrix multiplication in at least one of their layers; this is what distinguishes CNNs from other deep learning neural networks. CNNs are used to read handwriting, recognize the written words, compare them to a dataset of handwriting, and more. They can categorize documents for museums or interpret handwritten documents, which is vital for banking and finance.
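The convolution operation described above can be sketched in a few lines of NumPy (a from-scratch illustration, not any framework's API): a small kernel slides over the image and produces the feature map the article mentions.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image and
    take a weighted sum at each position (the core op of a CNN layer)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge filter applied to an image containing a vertical edge.
image = np.zeros((5, 5))
image[:, 2:] = 1.0                          # right half bright, left half dark
kernel = np.array([[1.0, 0.0, -1.0]] * 3)   # 3x3 vertical-edge detector

fmap = conv2d(image, kernel)  # the "feature map" described above
print(fmap.shape)             # (3, 3): strong (nonzero) response at the edge
```

In a real CNN the kernel values are not hand-picked like this; they are learned from data, which is exactly the "identifies key features without human intervention" point made below.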
CNNs are used by computers to detect and identify people based on their faces. They recognize faces in the image, develop the skill of focusing on the face in spite of lighting or position, recognize distinctive traits, and match the information they gather with a name. CNNs have been used to identify objects in a variety of photos by categorizing them based on the shapes and patterns that they exhibit. CNNs have powered models that can recognize a variety of objects, from commonplace ones like food, famous people, or animals to odd ones like dollar bills and firearms. Techniques like semantic or instance segmentation are used for object detection. For usage in drones or self-driving automobiles, CNNs have been used to locate and identify things in photos as well as to create various views of those objects. CNNs are used for automatic translation between language pairs, such as English and French, in the context of deep learning. The use of word-for-word translation or multilingual human assistance has been replaced by the very accurate use of CNNs to translate across language pairs like Chinese and English. A CNN's fundamental advantage over its forerunners is that it learns to identify key features without human intervention. Using numerous images of cats and dogs as an example, it can figure out the specific characteristics of each class on its own. Additionally, CNNs are computationally efficient: they use parameter sharing and efficient convolution and pooling operations. This makes CNN models universally appealing and allows them to run on almost any device. CNNs perform well with data that has a spatial connection. As a result, CNNs are the preferred approach for any prediction problem involving input image data. Utilizing CNNs is advantageous since they can create an internal representation of a two-dimensional image.
This enables the model to pick up on location and scale in the data, which is crucial when working with photos. Numerous practical uses of CNNs have been reported, such as biometric identification and cancer detection. CNN networks can also be used for picture captioning or visual question answering, where they take an input image and provide natural language responses about it. CNNs have even been successful at summarizing texts based on their content by locating relevant sections. You may use many of the CNN-based projects on GitHub to create CNN models for your own projects if you wish to learn more about CNNs. Or feel free to get in touch with us. Our labeling approach combines AI and human intellect, balancing technology and human feedback. It's time for us to show you how we deal with Generative AI and LLMs at isahit! We strongly believe that humans will continue to play a crucial role in the Generative AI production process. This is what we call the Human-in-the-Loop in our Data Labeling/Processing industry. Humans possess unique qualities, including precision, contextual understanding, judgment, creativity, and background knowledge, which machines cannot fully replace but rather complement and enhance. The key lies in strategically integrating Generative AI into our daily operations, leveraging its potential to assist us in producing relevant content, developing outstanding products, and making informed decisions. We have a wide range of solutions and tools that will help you train your algorithms.
xparse's 's' argument returns \Gamma and \Delta, instead of \BooleanFalse and \BooleanTrue The xparse documentation says that an s argument yields \BooleanTrue when the star is present and \BooleanFalse otherwise. But that's not what I get when I test s. \documentclass{article} \usepackage{xparse} \NewDocumentCommand{\myfunc}{s}{#1} \begin{document} \myfunc % should return \BooleanFalse; actually returns \Gamma \myfunc* % should return \BooleanTrue; actually returns \Delta \end{document} Edit: Also, \IfValueTF doesn't return 0 when * is absent. \documentclass{article} \usepackage{xparse} \NewDocumentCommand{\myfuncB}{s}{ \IfValueTF{#1}{1}{0} } \begin{document} \myfuncB % should return 0; actually returns 1 \myfuncB* % should return 1; actually returns 1 \end{document} \BooleanTrue and \BooleanFalse cannot be typeset: they are boolean variables and don't make sense outside of an \IfBooleanTF{#1} test. Try \IfBooleanTF{#1}{True}{False} @PhelypeOleinik But is it a happy coincidence that Gamma is \char"00 and Delta is \char"01? @PhelypeOleinik Fair enough, but see the edit to the OP for more. @campa No, expl3 booleans are \char"0 and \char"1, and that's why the OP gets that output. Regardless, it doesn't make much sense to use them for typesetting. Careful: \IfBooleanTF, not \IfValueTF. The latter tests for the special marker -NoValue-. @campa Thanks, got it. \BooleanFalse and \BooleanTrue are boolean variables not meant for typesetting. You can only use them in an \IfBooleanTF test: \documentclass{article} \usepackage{xparse} \NewDocumentCommand{\myfuncB}{s}{% \IfBooleanTF{#1}{1}{0}% } \begin{document} \myfuncB % returns 0 \myfuncB* % returns 1 \end{document} \IfValueTF can't be used either, because it checks if the argument is -NoValue-, and neither \BooleanTrue nor \BooleanFalse is -NoValue-, so the test always returns true. \IfValueTF is supposed to be used with optional arguments like o and d. Under the hood, \BooleanFalse is \char"0 and \BooleanTrue is \char"1, so they take the zeroth and first character of the current font, whatever that happens to be.
In the OT1 encoding, \char"0 and \char"1 are the glyphs Γ and ∆: \documentclass{article} \usepackage{fonttable} \begin{document} \fonttable{cmr10} \end{document}
RAM usage; is Debian "better"? location: linuxquestions.com - date: March 9, 2011 I've tried Ubuntu 10.10 (GNOME) and 11.04 with Unity. I've also tried Fedora 14 (GNOME) and Fedora 15 with GNOME 3. My findings: RAM usage is excessive in all cases. I have 1GB and upon startup about 400MB are used. A few tasks down the line I'm using 600 to 800MB; responsiveness starts to deteriorate and swap starts being used. On a laptop with 512MB, Debian (GNOME) uses less than 200MB with Firefox running and with an uptime of 27 hours (many, many tasks performed/software used), compared to 30 minutes with one of the others. Is Debian just better "tuned"? Why do people say Ubuntu is slower than Debian? location: linuxquestions.com - date: November 6, 2012 It sounds like there's more to it than just Unity, because this claim is made about all Ubuntu derivatives. What makes Ubuntu so different? Ubuntu vs. Debian Questions location: linuxquestions.com - date: August 11, 2009 What's up everyone, I'm kind of new to Linux; I've been using LinuxMint for about 6 months now and testing and trying out as many other distros as possible. Just wondering if anyone can help me figure out what distro to end up with in the long run. Since I am fairly well versed in Ubuntu/Debian based distros, this is where I'm most comfortable. However, I have read that using distros like Gentoo gives you the most performance, as they are built around your system itself upon installation. Is this true, and why is it that other distros like Ubuntu don't do this? Do you have to compile everything from scratch when installing Gentoo or something? Also, what are the pros and cons of both RPM and DEB based distros? Ubuntu 10.04 = Debian Squeeze? location: ubuntuforums.com - date: February 14, 2010 I've read that Ubuntu 10.04 will be based on the Debian testing branch. Debian Squeeze is now in testing state. Does that mean that Ubuntu 10.04 will have Debian Squeeze stable packages once Squeeze is out? packages from debian sid into ubuntu?
location: ubuntuforums.com - date: September 10, 2009 There are finally packages of the new Electric Sheep for Debian Sid: What do we have to do to get this into Ubuntu? Ubuntu Minimal or Debian location: ubuntuforums.com - date: July 27, 2010 What are the pros of installing a minimal Ubuntu (and building it up with only packages you want/need) over a Debian (testing) install and doing the same? Lately I've noticed Ubuntu seems to come with a lot of programs I have no interest in or need for, so it seems that it is time to build Ubuntu from the command line up to X This link is what I'd use for ubuntu, but I'm just trying to decide if its worth it with ubuntu or better off with Debian (even though the debian forums do seem way dead and these forums are always alive with people for support) How to move Debian stable to testing? location: ubuntuforums.com - date: May 28, 2008 Right now I am running Debian etch on a desktop, but I am interested in moving that into the testing branch. How do I do that? I want to make it so that it will be a rolling release in that it will move directly onto the next testing version once Lenny is released. Thanks ahead of time for youre responses! How can Ubuntu pretend to be stable as based on Debian Testing or Sid? location: ubuntuforums.com - date: March 4, 2010 I've been looking at ubuntu and debian for quite a while now and I figured out ubuntu 9.10 was based on debian Sid (which is the most unstable version of debian) while the next ubuntu 10.4 will be based on debian Testing (which is between debian stable and sid in terms of stability). Just a simple question: how the ubuntu developers do to make a stable release based on a buggy debian while debian developers take much more time to release a very stable release? Debian testing v.s. stable+backports location: linuxquestions.com - date: December 6, 2010 I have a question for those of you who use Debian stable + the backports repository. 
Compared to testing, how up-to-date is stable + backports? Is the repository well-populated? I have received exactly zero updates from Ubuntu's backports. I'm assuming that stable+backports is a little behind testing, but AFAIK testing "breaks" more often. Is the added stability of stable+ backports worth the longer wait for new software? Is Debian Testing more secure than Ubuntu ? location: linuxquestions.com - date: February 13, 2010 Ubuntu is based on debian unstable, but there might be some time lag before security patches get to debian testing --- so is Debian Testing more secure than Ubuntu ? Page: 1 2 3 4 5 6 7 8 9 10
Ship code faster with Graphite. Stay unblocked on code review with "stacking" - the workflow engineers at top companies use to accelerate their development. Now available to anyone with a GitHub account.

Constant context switching, too many open tabs and distracting notifications - sound familiar? beams gently guides you through your busy workday - directly from the menu bar. Joining a call or going into undisturbed focus time is now only a keystroke away. Stay tuned!

Locofy.ai helps builders launch 10x faster by converting designs into production-ready code for web and mobile. We offer 2 options in Free Beta: Locofy Lightning - 1-click design to code, powered by Large Design Models, available for Figma to web - and Locofy Classic - step-by-step heuristics conversion, available for Figma and Adobe XD to web and mobile apps. Use either and sync code to GitHub or pull it into VS Code.

One directory for all your no-code needs, plus marketing and sales tools to help you launch, market and sell. Search and discover tools based on your project requirements or your budget, with more than 30 different categories of tools.

Keep consistent data across 1700+ tools without coding. Say goodbye to scattered data and hello to a new standard of data synchronization. Standardize, enrich, sync and streamline data across your toolset. Connect your marketing tools and solve the cookieless issue, build cross-channel sales outreach automations and send 10x more messages and emails, or connect your online stores to global marketplaces and instantly expand your business. Ready to get your shi... data together?

🎯 The finest platform for proxy services, offering 100M+ residential proxies, including HTTP/HTTPS and SOCKS5 proxies. Ideal for multiple account management, large-scale data collection and web scraping.

Design chatbots & conversational apps powered by Large Language Models (LLMs).
Share them freely with our community of chatbot developers, automation specialists, conversation and prompt designers. Mix & match AI with rules and a human element to bring your perfect conversation design to life 🖥⚡️💬 It's open source and can run either in the cloud or on your own infrastructure. Without knowing how to code, and for free!

Back4app is a powerful and comprehensive platform designed for building and deploying scalable mobile and web applications with ease. It offers a streamlined, intuitive, and easy-to-use interface for creating applications with minimal coding, while still allowing for advanced customization and configuration. Back4app supports the most popular programming languages and frameworks, making it an essential tool for anyone looking to create, deploy & scale applications quickly and easily.

Instantly answer 50% of employee questions on Jira SM with AI - no humans needed! Introducing the first and only bot built for JSM that extracts accurate information from Confluence and crafts human-like responses.

Spoke is the Command Center for Product & Engineering that helps teams build and launch products faster, with more focused communication and powerful workflows that connect Slack, Jira, Notion and the rest of your tool stack.
<?php
function runEurovocClass($contentIn)
{
    $conllup = new \CONLLUP();
    $conllup->readFromString($contentIn);

    // Classify the text with EuroVoc (runner id = 1).
    $data = EUROVOC_Classify($conllup->getText(), 6, 0.0, 1);
    if ($data !== false) {
        $mtids = EUROVOC_getMT($data);
        $domains = EUROVOC_getDomains($mtids);
        sort($domains);
        $conllup->addFileMetadataField("eurovoc", implode("\t", $domains));
        return $conllup->writeToString();
    }

    echo "Error EUROVOC CLASS";
    return $contentIn;
}
Underdetermined blind source separation based on subspace representation
SangGyun Kim, Chang D. Yoo

This paper considers the problem of blindly separating sub- and super-Gaussian sources from underdetermined mixtures. The underlying sources are assumed to be composed of two orthogonal components: one lying in the rowspace and the other in the nullspace of the mixing matrix. The mapping from the rowspace component to the mixtures by the mixing matrix is invertible using the pseudo-inverse of the mixing matrix. The mapping from the nullspace component to zero by the mixing matrix is noninvertible, and there are infinitely many solutions for the nullspace component. The latent nullspace component, which is of lower complexity than the underlying sources, is estimated based on a mean square error (MSE) criterion. This leads to a source estimator that is optimal in the MSE sense. In order to characterize and model sub- and super-Gaussian source distributions, the parametric generalized Gaussian distribution is used. The distribution parameters are estimated with the expectation-maximization (EM) algorithm. When the mixing matrix is unavailable, it must be estimated, and a novel algorithm based on a single-source detection algorithm, which detects time-frequency regions of single-source occupancy, is proposed. In our simulations, the proposed algorithm, compared to other conventional algorithms, estimated the mixing matrix with higher accuracy and separated various sources with a higher signal-to-interference ratio. There is a demo below which separates four sources from three mixtures.

1. SangGyun Kim and Chang D. Yoo, "Underdetermined blind source separation based on subspace representation," IEEE Transactions on Signal Processing, vol. 57, no. 7, pp. 2604-2614, July 2009. (Impact factor: 2.335)
2. SangGyun Kim and Chang D. Yoo, "Underdetermined Blind Source Separation Based on Generalized Gaussian Distribution," in Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing, Maynooth, Ireland, pp. 103-108, September 2006.
3. SangGyun Kim and Chang D. Yoo, "Blind Separation of Speech and Sub-Gaussian Signals in Underdetermined Case," in Proceedings of the International Conference on Spoken Language Processing, Jeju, Korea, pp. 2861-2864, October 2004.
4. SangGyun Kim and Chang D. Yoo, "Underdetermined Independent Component Analysis by Data Generation," in Proceedings of Independent Component Analysis and Blind Signal Separation, Granada, Spain, pp. 445-452, September 2004.

Underdetermined Convolutive BSS based on a Novel Mixing Matrix Estimation and MMSE based Source Estimation
Janghoon Cho, Jinho Choi, and Chang D. Yoo

This paper considers underdetermined blind source separation of super-Gaussian signals that are convolutively mixed. The separation is performed in three stages. In the first stage, the mixing matrix in each frequency bin is estimated by the proposed single source detection and clustering (SSDC) algorithm. In the second stage, by assuming a complex-valued super-Gaussian distribution, the sources are estimated by minimizing a mean-square-error (MSE) criterion. Special consideration is given to reducing computational load without compromising accuracy. In the last stage, the estimated sources in each frequency bin are aligned for recovery. In our simulations, the proposed algorithm outperformed conventional algorithms in terms of the mixing-error ratio and the signal-to-distortion ratio. There is a demo below which separates three sources from two mixtures that are convolutively mixed.

1. Janghoon Cho, Jinho Choi and Chang D. Yoo, "Underdetermined Convolutive BSS based on a Novel Mixing Matrix Estimation and MMSE based Source Estimation," in Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing, Beijing, China, September 2011.
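The rowspace/nullspace decomposition described in the first paper can be illustrated with a small numerical sketch. This is not the authors' code: the mixing matrix and source signals below are made-up random examples, used only to show the pseudo-inverse identities the abstract relies on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Underdetermined mixing: 3 mixtures of 4 sources (m < n), x = A s.
A = rng.standard_normal((3, 4))        # made-up mixing matrix
s = rng.standard_normal((4, 100))      # made-up source signals
x = A @ s                              # observed mixtures

# Rowspace component: recovered exactly via the Moore-Penrose pseudo-inverse.
A_pinv = np.linalg.pinv(A)
s_row = A_pinv @ x

# Nullspace projector: I - A^+ A maps any vector into null(A).
P_null = np.eye(4) - A_pinv @ A
s_null = P_null @ s                    # the part the mixtures cannot see

# The rowspace component alone reproduces the mixtures...
assert np.allclose(A @ s_row, x)
# ...while the nullspace component is invisible to the mixing matrix,
# which is why it has to be *estimated* (e.g. with an MSE criterion).
assert np.allclose(A @ s_null, 0.0)
# The two orthogonal components add back up to the original sources.
assert np.allclose(s_row + s_null, s)
```

This makes concrete why the mapping is "invertible" only on the rowspace part: `A @ s_row` returns the mixtures exactly, while the entire nullspace component is annihilated by `A`.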
using System;
using Cabother.Exceptions.Requests;
using Microsoft.Extensions.Logging;

namespace Cabother.Exceptions.Extensions
{
    public static class ExceptionExtensions
    {
        /// <summary>
        /// Adds an error code to the exception.
        /// </summary>
        /// <param name="exception">The exception that occurred</param>
        /// <param name="errorCode">The error code to attach</param>
        public static void AddErrorCode(this Exception exception, string errorCode)
        {
            if (exception.Data["ErrorCode"] == null)
                exception.Data.Add("ErrorCode", errorCode);
        }

        /// <summary>
        /// Adds an error code to the exception, derived from the event that occurred.
        /// </summary>
        /// <param name="exception">The exception that occurred</param>
        /// <param name="eventId">The event used as the basis for the error code</param>
        public static void AddErrorCode(this Exception exception, EventId eventId)
        {
            AddErrorCode(exception, eventId.ToString());
        }

        /// <summary>
        /// Returns the error code stored on the exception.
        /// </summary>
        /// <param name="exception">The exception that occurred</param>
        public static string GetErrorCode(this Exception exception)
        {
            return exception.Data["ErrorCode"]?.ToString();
        }

        /// <summary>
        /// Builds an InternalServerErrorException and writes an error log entry according to the parameters.
        /// </summary>
        /// <param name="exception">The exception that was thrown.</param>
        /// <param name="logger">Logger used to write the log message.</param>
        /// <param name="code">Error code shown in the message.</param>
        /// <param name="message">Complementary message shown in the log and in the exception.</param>
        /// <param name="args">A list of objects to format into the message.</param>
        /// <returns>An exception of type <see cref="InternalServerErrorException"/> with the generated message.</returns>
        public static InternalServerErrorException ToInternalServerException(
            this Exception exception,
            ILogger logger,
            string code,
            string message,
            params object[] args)
        {
            var newEx = new InternalServerErrorException(message, code, exception);

            // ReSharper disable once TemplateIsNotCompileTimeConstantProblem
            logger.LogError(exception, newEx.ToString());

            return newEx;
        }
    }
}
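The core idea above - stashing a machine-readable error code on the exception object itself, writing it only once, and reading it back later - translates to other languages as well. Here is a rough Python analogue; the names (`add_error_code`, `get_error_code`, `InternalServerError`) are illustrative inventions, not part of the C# library shown above.

```python
class InternalServerError(Exception):
    """Wrapper exception carrying a code and the original cause (illustrative)."""
    def __init__(self, message, code, cause):
        super().__init__(message)
        self.code = code
        self.__cause__ = cause  # chain the original exception


def add_error_code(exc, error_code):
    # Only set the code once, mirroring the C# Data["ErrorCode"] null guard.
    if not hasattr(exc, "error_code"):
        exc.error_code = error_code


def get_error_code(exc):
    # Returns None when no code was attached, like the C# null-conditional.
    return getattr(exc, "error_code", None)


try:
    raise ValueError("database timeout")
except ValueError as e:
    add_error_code(e, "ERR-1042")
    add_error_code(e, "ERR-9999")   # ignored: a code is already present
    assert get_error_code(e) == "ERR-1042"
```

The "first writer wins" guard matters in practice: the code attached closest to the failure site is usually the most specific one, and outer layers should not overwrite it.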
Recruiting is one of the most important tasks a team leader can have. A bad hire can destroy the dynamic of a good team, and a great hire can multiply everyone's velocity. I've tried a lot of ideas over years of recruiting software engineers, first as a technical interviewer, then as a hiring manager and finally as the lead of the entire engineering department. In this article I won't give you a process that works out of the box, but instead a few steps to set one up for your team.

When to Add a Process

Some people do not believe in spending time setting up a clear recruitment process, preferring to go with their instinct. I don't disagree for some cases. For instance, this makes total sense when you are recruiting a couple of people for a small structure. However, if you are recruiting at a larger scale, you will quickly see the limits of this approach. The person with the "instinct" becomes a massive bottleneck, slowing everything down. The rest of the team is unable to participate efficiently and gets frustrated. Finally, the candidates can feel the issue and the conversion rate goes down. I'd say that, if you hire more than 5 people a year, it's worth investing in a simple process.

Objectives of Having a Process

Defining a process helps reach certain objectives. Here are the ones I've set for myself.

Impacting the Candidate
- Clear process and expectations. A candidate should know what they are getting into.
- Pleasant experience, even if this doesn't end with a hire. Ideally, someone who isn't hired refers a friend or won't hesitate to apply again after a year or two.
- Fast turnaround. The recruitment process is stressful; there is no need to make people wait for answers on top of that.
- Feedback during the process, both ways, to avoid disappointments. If it's going poorly for the candidate, the recruiter should know about it.

Impacting the Company
- More hires at the expected quality, obviously!
- Better predictability. It is useful to know how much time and energy will be required to hire someone.
- Pleasant interviewer experience. We need to keep in mind that a lot of people involved in the process are usually not recruiters by trade, and therefore we have to make it easier on them.
- Improved employer brand, to ease future hires.

Testing the Candidate

Figuring Out What's Important

Before setting up your applicant tracking system or defining your recruitment process, the most important thing is to know what kind of person you want to hire. If you don't, you'll end up planning the wrong interviews, checking the wrong skills and, in the end, wasting everybody's time or making a wrong hire. To get there, make a clear list of skills you want to check and the kind of person you think would fit in your company/team. Then use the interviewing process to check against this. If you have them, it's great to be able to leverage things like company culture documents or team objectives. If you don't, consider setting these up before starting to hire. Over the past few years I've tried a lot of different processes, and the one thing that works is having a real funnel with objectives for each step. The exact steps can change, and there is no one optimal setup.

Defining the Steps

There are a lot of things that can be done, but to illustrate, here are a few example steps that I've used or seen companies use:
- Screening call
- Take-home assignment
- On-site pairing
- Lunch with the team
- Meet your future manager
- Technical interview
- Fit interview
- Reference check

You then need to order them, define the time spent on each, and so on.

Setting Objectives for Each Step

Once you have decided on the various steps, you need to decide on the objective of each step. This is crucial, because in order to be effective you have to make sure you actually tested what you were planning on testing.
What you really don't want is this kind of conversation:

Hiring Manager: So, how did the technical interview go with the candidate?
Interviewer: It went great! I've given the recommendation to hire.
HM: Great, we need strong developers in the team to tackle project XYZ.
Int: Hm… I don't know about that. We couldn't really get to the coding part of the interview.
HM: What do you mean?
Int: The candidate didn't really know the language we use here, so instead we chatted a bit about the company and our current projects. They seem really interested in joining and have a great attitude!

Here the interviewer didn't have clear objectives for the technical interview. This step should be focused on determining the candidate's technical skills, not assessing motivation.

When defining objectives, remember to:
- Keep them realistic and simple. It's very hard to get a sense of someone during a one-hour interview, so don't expect the impossible from your interviewers.
- Have a set of objectives around showcasing the position as well. You are assessing the candidate, but keep in mind that the candidate is also assessing the company and the position.
- Share them explicitly; they shouldn't be an oral tradition.
- Check that they are followed by shadowing some random interviews, setting up feedback sessions, etc.

Personally, I like to write objectives as questions to be answered. To illustrate, here are a few examples of objectives to assess the candidate:
- Are they motivated to join the company?
- Do they show interest in solving users' problems?
- Will they work well with their manager?
- Do they fit our values?
- Do I see this person in my team?
- Can this person fit in the organization and evolve?
… and a few examples of objectives to showcase the company and/or position:
- Provide an interesting challenge to the candidate
- Give more context to the candidate
- Showcase our work environment and mission
- Address any remaining doubts the candidate could have

Expectations for the Interviewers

There are a lot of things interviewers need to get right, and it can take years of practice to become a confident interviewer. Here is the advice I usually share with people new to it:
- Focus on the objective, and rate candidates based on it only.
- Always sell the position and the company, regardless of the candidate. It might not be a fit today, but it might be one tomorrow.
- Let the candidate answer and don't be afraid of silence. It's easy to try to fill in uncomfortable blanks in the conversation.
- You need to be sure of your recommendation, so dig deep to turn orange flags into red or green.
- Contribute to improving the process, and share any issues that could be addressed.
- Don't mistake enthusiasm for ability. Don't mistake quietness for a lack of motivation.
- Rate regardless of time spent. If you had a 20-minute phone call that went very well, you should give a "hire" recommendation. The hiring manager will know the context and use your feedback accordingly.
- Do not share your opinion with other interviewers, and do not try to read other people's feedback. Stay unbiased.
- Don't be influenced by the context. Being understaffed doesn't mean we should hire just anybody.
- Have a few recurring questions; this way you'll be able to compare candidates.

Then you need to define what kind of output you want from the interviewers, most likely a "hire/no hire" answer. I like what some applicant tracking systems do, rating on a scale from 1 to 4, as it removes the ability to say "I don't know":
- ✓ ✓ Strong yes. This person is really great at what I've been tasked to test. I'm very much looking forward to working with them.
- ✓ Yes. This person is good at what I've been tasked to test. I've seen a couple of minor issues, but nothing too problematic.
- ✗ No. This person is not good enough at what I've tested. I'm not completely confident it would go well if we were to hire them.
- ✗ ✗ Strong no. This person is bad at what I've tested, or I couldn't even test it because of major fit issues. I can't see them here.

Of course this is just the recommendation of the interviewers; the final call rests with either the hiring manager or some kind of hiring committee. It's perfectly possible for someone with a couple of "no"s to get hired, if the other interviews went really well. Finally, I also like to ask for a paragraph or two explaining the decision.

I think that interviewing should be an opt-in process. There is little value in having someone talk to candidates if they don't want to do it. For those who want to start helping hire, here are a few things that can help get started:
- Have documentation with use-case examples, question templates, interviewing techniques, etc.
- Spend some time briefing the person before an interview, and more time debriefing after.
- Set up shadowing sessions, where a new interviewer follows the lead of a more experienced one.
- Involve recruiters in the process to help explain interviewing techniques.
- Set up reverse shadowing sessions, where a new interviewer leads an interview but is watched by a well-meaning, more experienced interviewer.
- Gather feedback from newly onboarded interviewers to improve the process.

Like everything, it's important to measure what you are doing. A few metrics that are interesting to follow:
- Time to hire
- Conversion rate (segmented by step and position)
- Candidate satisfaction rate (segmented by hired/not hired)
- Interviewer satisfaction rate

Overall, recruiting is hard, and as for most things there is no silver bullet. However, I still hope this article will help you improve your process!
[Tutor] Standalone Version
alan.gauld at btinternet.com
Fri Feb 22 00:58:16 CET 2008

> From: Artur Sousa <tucalipe at gmail.com>
> To: Kent Johnson <kent37 at tds.net>
> Sorry to bother again.
> Is there a way to distribute a Python program freely on a standalone
> version, in which the other person doesn't have to actually open the
> Command Line and execute the .py file?

If the OS is configured properly then a normal Python file can be run
without opening a command prompt. Just double-click on it.

However, if you want to distribute a Python program without Python being
installed there are several options, the best known of which is py2exe -
again for Windoze.

> And please, excuse me, but as english is not my native language, I
> couldn't quite understand how to concatenate str and int with %

In this context % is not the modulo operator but the string format
operator. The trick is to create a format string which defines the
structure of your output string by inserting markers into the format
string. The marker for an integer is %d (for decimal), so you could write:

fmtString = "%d is a number"

This creates a template to create a string containing a number:

print fmtString % 42

Here the number 42 is substituted into the format string. You must
provide as many values as there are markers in the format string:

print "%d plus %d equals: %d" % (22,7,22+7)

Note the string has 3 markers, all decimals, and we provide 3 numeric
values. There are many other marker types, as well as ways of specifying
the space occupied, justification, padding etc. If you don't understand
any of that please reply with specifics to the list.

Author of the Learn to Program web site (still broke! :-(

> ttaken = (quantity*3600)/phour
> str1 = "Para produzir "
> str2 = u" unidades desse recurso, serao necess\u00E1rios "
> if ttaken == 1:
>     str3 = " segundo, ou "
> else:
>     str3 = " segundos, ou "
> if ttaken/60 <= 1:
>     str4 = " minuto, ou "
> else:
>     str4 = " minutos, ou "
> if ttaken/3600 <= 1:
>     str5 = " hora."
> else:
>     str5 = " horas."
> print str1
> print quantity
> print str2
> print ""
> print ttaken
> print str3
> print ""
> print ttaken/60.0
> print str4
> print ""
> print ttaken/3600.0
> print str5
> quantity is an integer user input value and phour is another int
> based on a user inputted int.
> PS.: I'd also like to thank very much for all the support I've been

Tutor maillist - Tutor at python.org
More information about the Tutor mailing list
Long Sideboards and Buffets – Room dividers can be a design element, a practical way to separate spaces, or both. There are various types of room dividers that may be used, depending on the need. For example, someone wanting to partly divide a room may use a permanent half-width room divider, built with shelves on one side for functionality. With so many ways to use room dividers it can be hard to decide which form to use, so it is sensible to learn about the various types in order to know what will best fit the space where the divider will be used. Room dividers can be broken down into three basic types: permanent room dividers, improvised room dividers, and movable dividers. A permanent room divider can be something like a half-width or half-height wall. This means that the divider comes out halfway across the floor or partway up a wall. Such a wall may have shelves on one side or simply be designed to match the rest of the room. Then there are improvised room dividers; these might be cabinets, a large piece of furniture or even plants used to separate spaces. Last, but not least, there are movable room dividers. These include screens and folding walls, dividers that tuck away in order to add space or privacy when needed. Each type of room divider serves the goal of decorating and separating spaces; it really comes down to whether you want the option of moving the divider around.
Perhaps you need something larger than a four-panel separating screen. What you are probably looking for is a large room divider that can separate a very large area into smaller ones. If this is the case, then you can use sliding room dividers. Sliding room dividers give you more versatility by allowing more options: they can turn a very large room into several smaller rooms. Room dividers are usually portable, and using them is a great way to boost the versatility of a room by dividing it into smaller, more useful spaces. The key advantage of simple dividers is that they do not require any special training to set up. All you need to do is spread them out and roll them into place. You can use several different types of room dividers in many different locations in your home. If you have a kitchen with an eat-in dining area, you might want to consider placing a room divider between the kitchen and the eating area. This allows you to form two separate areas instead of one open space. You can also start decorating with a room divider in a studio apartment to make real distinctions between your different living areas. A couple of screens can help you create a "room" which can be very comfortable indeed. Room dividers are decorative as well as functional. Don't think of the divider as only a "small wall"; when you are picking your room divider, make sure to consider its decorative appeal as well. Nowadays, there are room dividers to suit almost any décor.
If you have an Asian theme, then you might consider looking at some bamboo or shoji screen room dividers. A traditional design will benefit from some of the lovely vintage-style room dividers available, either in wood or with hand-painted scenes. A contemporary décor will look great with a leather room divider, or, if you would like something more open for a smaller space, choose a metal room divider with geometric shapes. Some room dividers have mirrors built in. Whether they have full mirrors on the panels, or smaller mirrors placed strategically on each section, these room dividers are ideal for providing a private dressing space in a bedroom or studio apartment. The room divider itself can double as a full-length mirror and saves you from having to place a mirror somewhere else. Another type of room divider that can add a personal touch to any room is a photo room divider. This divider consists of panels with slots into which you can place your own personal photos. You can fill it with family pictures, create a theme of, say, a favorite holiday, or fill it with photographs of a favorite place your family would like to travel to. Either way, this is a room divider you will never get tired of looking at. Room dividers are the best accessory for dividing areas as well as creating private spaces. With the help of these dividers you can even create storage spots in your rooms; by putting one in a corner of the room you can make the room look tidier by hiding clutter behind it. Screen dividers are also great for a kid's room: since such rooms overflow with all sorts of toys, you can create a play area and store the toys in the space created by the dividers.
You will be able to find screens of every kind, such as wooden, glass, decorative and lightweight ones, so it will be no problem to find a suitable room divider for any purpose. Room dividers are a great way to add an original touch to your décor, so if you're looking for a unique piece to fill a spot in your house, be sure to have a look at some of the great room dividers available. There's one to match every style and taste.
Large language models (LLMs) are a big deal in artificial intelligence. They use huge amounts of data to understand and create text that looks like it's written by a person. These models are part of the broader field of natural language processing (NLP) and are trained on vast amounts of textual data to learn the patterns, relationships, and nuances of language. The term "large" refers to the scale of these models, which are characterized by a massive number of parameters. This guide explores their wide-ranging applications for developers and others, their key characteristics, and both the benefits and limitations associated with their use. In the end, you will also learn about the importance of effective prompts for better results. So, without further ado, let's dive in.

LLMs are trained on massive datasets of text and code, often exceeding billions or even trillions of words. This vast data exposure enables them to capture complex linguistic patterns and relationships. LLMs are typically pre-trained on large datasets in an unsupervised manner, where the model learns the intricacies of language. After pre-training, the models can be fine-tuned on specific tasks or domains to enhance performance. LLMs demonstrate proficiency in contextual understanding, enabling them to take into account the context of a word or phrase within a sentence to deduce its meaning. This heightened awareness of context empowers them to produce responses that are both coherent and contextually appropriate.

LLMs demonstrate versatility and proficiency in an extensive array of functions, such as:
- Generating text: Crafting human-like text in various styles, such as poems, code, scripts, musical compositions, emails, letters, and more.
- Translation: Precisely translating text across languages, overcoming language barriers.
- Answering questions: Supplying informative and pertinent responses to questions posed naturally.
- Summarization: Condensing lengthy text into meaningful summaries.
- Dialogue generation: Participating in authentic and natural conversations, emulating human interaction.

Characterized by continuous improvement, LLMs undergo ongoing development, resulting in constant enhancements in performance. The iterative nature of their evolution is driven by exposure to a growing volume of data and the utilization of increased computing power, collectively contributing to a relentless pursuit of improvement over time.

LLMs help developers by enhancing the coding process, offering assistance in code generation, summarization, bug detection, documentation, refactoring, educational support, natural language interactions, and code translation. These capabilities contribute to increased productivity and efficiency in software development. Let's discuss them one by one:

- Code generation: LLMs can generate code snippets based on natural language descriptions or requirements. Developers can provide high-level instructions, and LLMs can assist in translating these into functional code segments, saving time and effort.
- Code summarization: LLMs can be used to summarize and explain existing code. This is particularly helpful for understanding complex codebases, as LLMs can provide concise and human-readable explanations for different sections of code.
- Bug detection and correction: LLMs can aid in detecting and even suggesting corrections for code bugs. By analyzing code snippets, LLMs can identify common programming errors and recommend fixes, contributing to improved code quality.
- Documentation assistance: LLMs can assist in writing code documentation. Developers can input information or queries, and LLMs can generate detailed explanations or documentation snippets, helping to maintain thorough and up-to-date documentation.
- Code refactoring suggestions: LLMs can provide suggestions for code refactoring, helping developers improve the structure, readability, and efficiency of their code.
This can lead to better-maintained and more scalable software.

- Learning and assistance for beginners: LLMs can serve as educational tools, assisting novice programmers in understanding coding concepts, syntax, and best practices. They can answer queries, provide examples, and offer guidance on various programming tasks.
- Natural language interface for coding: LLMs can act as a natural language interface for coding, allowing developers to interact with code using plain language. This is particularly beneficial for those who may not be proficient in a specific programming language but still need to perform coding-related tasks.
- Code translation: LLMs can aid in translating code between programming languages. Developers can express their requirements in natural language, and LLMs can generate equivalent code in a different programming language, promoting interoperability.

Let's now delve into the other wide-ranging applications of LLMs.

LLMs are part of the broader field of NLP, where they perform tasks such as text summarization, condensing extensive passages into concise summaries. Additionally, they demonstrate proficiency in sentiment analysis, comprehending and evaluating sentiments expressed in textual content. Furthermore, LLMs enhance the accuracy and efficiency of machine translation, enabling seamless communication across languages. Their question-answering capabilities facilitate precise responses to user queries, revolutionizing information retrieval.

In content creation, LLMs are essential tools with versatile capabilities. They contribute to creative writing by generating a variety of text formats, such as poems, code, scripts, musical compositions, emails, and letters. Additionally, LLMs demonstrate proficiency in dialogue generation, creating realistic and engaging conversations for applications like chatbots and virtual assistants.

LLMs also play a pivotal role in shaping the future of education and training.
They support personalized learning experiences by tailoring educational content for students and employees. Additionally, LLMs aid in the development of training materials, creating engaging and informative resources. As a feedback mechanism, these models provide constructive feedback on written work, enhancing the learning process.

In customer service, LLMs improve the quality and efficiency of issue resolution by comprehending customer queries, thereby improving the overall customer experience. Additionally, these models provide personalized recommendations, tailoring suggestions to individual preferences.

In research, LLMs prove invaluable with their pattern recognition capabilities, enabling the analysis of extensive text datasets and the identification of intricate patterns and trends. Their contribution extends to diverse fields such as medicine, science, and social science, underscoring their potential to significantly advance knowledge and understanding.

LLMs also exhibit limitations that warrant consideration. First, they are susceptible to biases inherent in their training data, emphasizing the need for awareness and concerted efforts to mitigate bias in their applications. Second, the decision-making process of LLMs may lack transparency, potentially impacting trust in certain applications; ongoing research is actively working to improve their interpretability. Lastly, the deployment and operation of LLMs come with high costs, posing accessibility challenges for some users due to financial constraints. Recognizing and addressing these limitations is crucial for fostering responsible and inclusive use of LLMs in various contexts.

A "prompt" refers to the input provided to the model to generate a desired output. Creating good prompts for LLMs is something of an art that requires clear and precise instructions. To make these models work well, you have to give them specific, unambiguous prompts that explain what you want.
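As a quick illustration of specific versus vague prompts, here is a minimal Python sketch. The `build_prompt` helper and its fields are hypothetical, invented for this example; they are not part of any particular LLM API:

```python
def build_prompt(task, context="", output_format=""):
    """Assemble a structured prompt from a task description,
    optional context, and an optional output-format instruction.
    (Hypothetical helper for illustration only.)"""
    parts = [f"Task: {task}"]
    if context:
        parts.append(f"Context: {context}")
    if output_format:
        parts.append(f"Respond as: {output_format}")
    return "\n".join(parts)

# A vague prompt leaves the model guessing:
vague = build_prompt("Summarize this.")

# A specific prompt states the subject, audience, and format:
specific = build_prompt(
    task="Summarize the attached release notes in plain English",
    context="The audience is non-technical end users",
    output_format="three bullet points, no jargon",
)

print(specific)
```

The structured version hands the model everything it needs to respond appropriately, which is the essence of the advice in this section.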
Using simple, clear language with explicit instructions helps the model understand what it needs to do. Providing extra context in the prompt also helps the model give better and more fitting responses. It's important to experiment with different prompts and refine them based on the responses the model produces. Finding the right balance between specificity and flexibility helps the model understand and respond well to the variety of things people ask, which makes prompt design an important part of using these powerful language models.

In conclusion, while LLMs present incredible potential for revolutionizing various aspects of our lives, it's crucial to be aware of their limitations. The continuous development of these models promises increased sophistication, paving the way for tackling even more complex tasks. As the field of artificial intelligence evolves, the future holds exciting possibilities for the continued advancement of large language models.
OPCFW_CODE
M: Garmz.com - Fashion Incubator Brings Young Designers to Market (My Startup) - andreasklinger http://on.mash.to/dfoiZc

R: heyrhett "Designers retain full rights to their work" In fashion school, they told us that it's nearly impossible to receive intellectual property protection on fashion design, since the courts have decided that people have been making clothing for a long time, so it's hard to prove that you've done something sufficiently complex and novel.

R: jamesbritt Wow. That's amazing. I knew that IP protection on fashion was weak/difficult, but was unaware of the reasoning. What's bizarre is that people have been making music for a long time, too, yet courts have no trouble locking it down with copyright. I wonder if this is a notational thing. There's an established and (somewhat) robust notation for music that allows it to be "fixed in any tangible medium of expression", as required by US copyright law. Is there even such a thing for fashion?

R: andreasklinger There is this great TED talk about IP in fashion. <http://www.youtube.com/watch?v=zL2FOrx41N0>

R: andreasklinger I am really proud that our startup has been featured on Mashable.com. We are really trying to be a game-changer for upcoming fashion talent. If you have any questions - please AMA. First answer upfront: I have no idea why they have chosen that picture. It's completely unrelated to our media photos ;)

R: shawndrost "First answer upfront: I have no idea why they have chosen that picture. It's completely unrelated to our media photos ;)" Your media photos don't have any beautiful women in them. (Any people at all, actually.) Since you're a fashion startup, it makes perfect sense for your marketing materials to sell that aspect of your story -- it will get you more coverage.

R: andreasklinger Agreed. We sent material like you mentioned. But we need to focus more on it.

R: amac Nice concept. Where are your production and distribution based? London?
I remember reading about made.com, also based in London, who are doing something similar in home furnishings. I think they outsource production to China, the drawback being a fairly lengthy lead time.

R: andreasklinger We produce prototypes in Vienna and are now setting up a prototyping studio in Bulgaria. We do the serial production in Bulgaria.

R: ThomPete I once learned that there are two types of entrepreneurs: gold diggers and people who sell equipment to gold diggers. I want to congratulate you on being the latter with garmz.com. I think this is crowd-sourcing done right. Not only is the execution beautiful, it has the potential of being a really great service, and I think it has potential far beyond selling clothes.

1. Trend watching
2. Annual reports on fashion
3. Alternative fashion TV channel
4. Talent hunting
5. Educational videos
6. Textile and other equipment shopping

just to name a few. I wish you the best of luck.

R: andreasklinger Thanks.

R: jamesteow This is a great idea. I do think there is a large barrier to entry in the industry, since it seems to be about who you know rather than purely the designer's vision or the aesthetic and build quality.

R: andreasklinger The main problems most designers have are:

* missing connection to their potential customer base
* missing exposure
* and a downward spiral of low volumes

The problem in the fashion industry is that it is enormously hard to get from one level in your career to the next. You don't have enough customers, therefore you won't be able to sell and produce enough pieces. Therefore industry manufacturers are not interested in working with you. Therefore you work with far more expensive tailor shops next door. You cannot get materials, and even if you can get them you don't get the needed prices. This sums up to bad production prices, which lead to horrible end-customer prices.
Which leads to low customer conversion, and therefore to the fact that no boutique is really interested in working with you. Which means you won't be in front of enough customers, and therefore won't sell enough and won't raise your volume to the scale needed. Most successful fashion designers had economic partners who supported them. We try to be that for the crowd by aggregating production, taking the cashflow risk, and using our production network.

TL;DR: Fashion production below several hundred pieces is highly expensive. Garmz links designers with customers, aggregates the needed volumes, and produces and sells the pieces in its webshop.

R: jamesteow Thanks for the info. Good to know, and I'll pass it on to some of my aspiring fashion school/designer friends.

R: andreasklinger Thanks

R: TooSmugToFail Hey man! I met you guys at Seedcamp in Zagreb... Glad to hear you're doing well!

R: andreasklinger Cheers! :)

R: permanentmarker Not sure. Wouldn't buy by any designer...
HACKER_NEWS
Ruby on Rails is the programming language and framework combo platter that effortlessly brings ideas to life. Whether you're looking to build the next great application or a new website, this is the tool millions use to streamline and simplify their projects. This blog answers the questions: what is Ruby on Rails, what is it used for, and how can Coding Dojo help you start a new career path in tech?

What is Ruby on Rails?

Ruby on Rails (RoR) is an open-source full-stack framework designed specifically for building web applications. Ruby on Rails has two parts:

Ruby – the general-purpose programming language that's super versatile.
Rails – the framework for creating websites, apps, and systems.

It's almost like an entirely user-friendly default structure, making it convenient to build anything.

What is Ruby on Rails Used For?

Many people don't know that Shopify, that's right, Shopify, uses Ruby on Rails for its infrastructure. The reason is simply that Ruby on Rails has tons of gems (plugins or extensions) made specifically for eCommerce platforms. If you're looking to add features like a help desk, payment gateways, and email campaign platforms, these can all be built in without coding knowledge. Perfect for beginners!

Social Networking Apps

Between being so easy to use and having tons of gems (plugins and extensions), Ruby on Rails is a top choice for building a social networking app. With little to no coding required, along with tons of reliable features that are tried and true, using Ruby on Rails is a no-brainer.

Content Management Systems (CMS)

Ruby on Rails is the answer for anyone looking to create a content-focused website. With so much information available, anyone can quickly build a website with ready-to-go features that are great for creating and distributing content, with the help of libraries full of different gems (plugins and extensions).
Ruby on Rails Architecture: Model-View-Controller (MVC)

The model component handles all database communications and business logic (the information exchanged between the database and the website interface). This component is linked to a database and usually contains the application's data, such as what orders a customer has pending.

The view component presents all the user interface graphics and contains the presentation logic. This part focuses only on displaying pages on the website; none of the code here deals with retrieving or storing information in a database. The view component's work is officially complete once the user can see the data; it's just that simple.

The controller component looks after the user interface and the application. The controller acts like glue between the application's data (model), the presentation layer (view), and the web browser. It is responsible for gathering all the information from the web browser's request and then updating the data in the model component.

The Ruby on Rails Design Philosophy

Don't Repeat Yourself (DRY)

The first design philosophy of Ruby on Rails is "Don't Repeat Yourself" (DRY), which emphasizes not writing repetitive code. That way, the codebase stays maintainable, scalable, and easy to debug.

Convention Over Configuration (CoC)

Ruby on Rails favors convention over configuration, focusing on whatever makes the programmer's life easier. These conventions make it easier for anyone with little coding experience to create a website independently, while RoR handles the rest.

Why Use Ruby on Rails?

Ruby on Rails is an open-source framework that is free for anyone to use, and from a developer's standpoint, it's super easy to work with. Plus, with so many gems available for add-on features, it's perfect for saving time while tailoring your website or application.
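The MVC flow described above can be sketched in plain Ruby. This is an illustration of the pattern only, not actual Rails code, and the `Order`, `OrderView`, and `OrdersController` names are invented for the example:

```ruby
# Model: owns the data and the business logic.
class Order
  attr_reader :customer, :status

  def initialize(customer, status)
    @customer = customer
    @status = status
  end

  # Business logic lives with the model, e.g. finding pending orders.
  def self.pending_for(customer, orders)
    orders.select { |o| o.customer == customer && o.status == :pending }
  end
end

# View: only formats data for display; no storage or retrieval here.
class OrderView
  def render(orders)
    orders.map { |o| "Pending order for #{o.customer}" }.join("\n")
  end
end

# Controller: glue between the request, the model, and the view.
class OrdersController
  def show_pending(customer, orders)
    pending = Order.pending_for(customer, orders)  # ask the model
    OrderView.new.render(pending)                  # hand off to the view
  end
end

orders = [Order.new("Ada", :pending), Order.new("Ada", :shipped)]
puts OrdersController.new.show_pending("Ada", orders)
```

Rails enforces this same separation with directories and base classes (`app/models`, `app/views`, `app/controllers`), so each layer stays focused on its one job.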
Easy to Learn

It's easy to feel overwhelmed and intimidated when you're first learning how to code, especially while you're still trying to wrap your head around new concepts. With Ruby on Rails, Ruby, the programming language, is super readable and similar to the English language.

The Ruby on Rails framework has default settings for all the necessary security features. While you use Ruby on Rails, you'll be following a secure development process without even knowing it.

What makes Ruby on Rails stand out compared to other programming languages is a structure similar to the English language. Ruby on Rails makes the process more streamlined and easier for anyone looking to build features into their website or application.

The flexibility of Ruby on Rails simplifies the frontend and backend, making the creation of web applications more accessible. Typically, a web application might use Rails on the backend while easily using another programming language on the frontend. These capabilities give developers the freedom to harness Ruby on Rails based on what suits their needs, seamlessly blending different coding languages.

Ruby on Rails is super scalable; developers can run the same code on different platforms. The scalability of Ruby on Rails gives applications and websites the power to handle more web traffic while operating smoothly, which is ideal for social media networks or streaming services.

With Ruby on Rails being open-source software, there's a massive community of developers out there willing to share their code, saving you time and the headache of starting from square one. Anyone just starting their coding journey can relax knowing there's a network of experienced developers ready to help out.

Web Apps Built with Ruby on Rails

Airbnb is the world's number one choice for connecting travelers with places to stay, and it was built using Ruby on Rails.
This web application started in 2008 and now offers services in 65,000 cities across over 191 countries.

GitHub is the top choice for source code management. The platform lets developers host and review code, share past projects, and swap code for web applications with colleagues worldwide. GitHub uses Ruby on Rails to let developers share their work and collaborate, currently hosting over 66 million ongoing projects.

When it comes to eCommerce websites, Shopify is a leading choice. This software-as-a-service (SaaS) business uses Ruby on Rails as its base. It lets anyone effortlessly design their online store and even offers marketing and SEO support.

Zendesk's customer support software runs on Ruby on Rails, making it convenient for clients' customers to reach a support representative. Zendesk's clients include Shopify, Airbnb, Uber, and Tesco (a UK grocery chain).

Hulu, established in 2008, is one of the top video streaming services available, boasting over 12 million US subscribers. With a wide selection of shows, movies, and other content, this streaming service is a top pick for many; it might be surprising to learn that Hulu runs on Ruby on Rails.

Kickstarter is the go-to platform for any company seeking funding for a new project or product, with over 150,000 successful past campaigns. Given its user-friendly interface, it might be surprising that such a popular website uses Ruby on Rails as its foundation.

Is Ruby on Rails Dead?

The short answer: no. The best part of learning Ruby on Rails is that it removes a lot of the guesswork and troubleshooting that can come with other coding languages. Ruby on Rails code has sensible default settings, making life easier. Ruby on Rails can be the answer you've been looking for to break into the world of tech.

What Is the Future of Ruby on Rails?
Ruby on Rails might not be the new kid on the block, but it's far from dying out. As one of the top frameworks available, Ruby on Rails has proven to be a solid option for your next web application. Between its high scalability, a vast library of gems, and cost-effectiveness, it will likely continue to be a top choice for developers everywhere.

Should You Learn Rails?

If you're new to coding and the tech scene, Ruby on Rails can be a great place to start. Learning to code can seem daunting at first; the benefit of Ruby on Rails is its human-friendly design, which makes it a great starting point for beginners. In no time, you'll go from wondering "what is Ruby on Rails?" to being an expert.

Don't Wait, Get Started Today!

Ever wonder what Ruby on Rails is? Maybe you're looking to break into tech? In just 14 weeks, you can become an experienced Ruby on Rails developer with the help of Coding Dojo's online bootcamp. Get started and apply today.

Coding Dojo cannot guarantee employment, salary, or career advancement. Not all programs are available to residents of all states. REQ1919193 3/2023
OPCFW_CODE
/********************************************************
*                                                       *
*    Copyright (C) Microsoft. All rights reserved.      *
*                                                       *
********************************************************/

#pragma once

#include <memory>

#include <CanvasTex/Format.h>

namespace CanvasTex
{

//---------------------------------------------------------------------------------------------------------------------
class TextureMetadataBase
{
public:
    class TextureMetadataImplementation;
};

//---------------------------------------------------------------------------------------------------------------------
class ConstTextureMetadata : public TextureMetadataBase
{
public:
    ConstTextureMetadata();
    ~ConstTextureMetadata();

    ConstTextureMetadata(const ConstTextureMetadata& other);
    ConstTextureMetadata& operator=(const ConstTextureMetadata& other) = delete;

    ConstTextureMetadata(const std::shared_ptr<const TextureMetadataImplementation>& impl);

    size_t GetWidth() const;
    size_t GetHeight() const;
    size_t GetDepth() const;
    size_t GetArraySize() const;
    size_t GetMipLevels() const;
    TextureFormat GetFormat() const;
    bool IsCubemap() const;

    const TextureMetadataImplementation& CGetImplementation() const;

private:
    std::shared_ptr<const TextureMetadataImplementation> m_impl;
};

//---------------------------------------------------------------------------------------------------------------------
class TextureMetadata : public TextureMetadataBase
{
public:
    TextureMetadata();
    ~TextureMetadata();

    TextureMetadata(const TextureMetadata& other);
    TextureMetadata& operator=(const TextureMetadata& other);
    TextureMetadata(TextureMetadata&& other);
    TextureMetadata& operator=(TextureMetadata&& other);

    TextureMetadata(const std::shared_ptr<TextureMetadataImplementation>& impl);
    TextureMetadata(const ConstTextureMetadata& constMeta);

    operator ConstTextureMetadata() const;

    void SetWidth(size_t width);
    size_t GetWidth() const;
    void SetHeight(size_t height);
    size_t GetHeight() const;
    void SetDepth(size_t depth);
    size_t GetDepth() const;
    void SetArraySize(size_t size);
    size_t GetArraySize() const;
    void SetMipLevels(size_t mipLevels);
    size_t GetMipLevels() const;
    void SetFormat(TextureFormat format);
    TextureFormat GetFormat() const;
    bool IsCubemap() const;

    TextureMetadataImplementation& GetImplementation();
    const TextureMetadataImplementation& CGetImplementation() const;

private:
    std::shared_ptr<TextureMetadataImplementation> m_impl;
};

}  // namespace CanvasTex
STACK_EDU
I need some minor CSS changes done to my WordPress website

This project was awarded to

Excellent worker. Did the job much quicker than I had expected. I will definitely be hiring again and again. Thank you very much for a perfect application. [10 October, 2015]

Did a really good job and finished it very quickly. I'm incredibly impressed. I didn't think the application could have been finished that fast. I'll definitely be hiring you again.

Bids on this Project

Zielona Gora, Poland

I am an experienced website designer and developer specialized in WordPress, Magento, Joomla and Drupal, looking to be hired. I have worked in the areas of CSS, Drupal and HTML for many employers and companies around the world, and I have a strong foundation in these areas. If my qualifications are suitable for you, please consider me for your next job or project. I am ready to be hired by you today and start work.

Languages: C/C++, Python, assembler, VB, HTML, CSS, JS. Platforms: Arduino, Raspberry Pi 1/2. Programs: MS Office, Multisim, LabVIEW, Photoshop, Maya. OS: Windows 7, 8, 10, Server; Linux, Linux Server; Mac; Android; iOS. Other skills: electronics, computer hardware, Android programming (NDK).

We are an India-based website design & development company, established 5 years ago, that delivers and concentrates on high-quality website design, web development, and software consulting, specializing in custom web design, web development, eCommerce development, web solutions, search engine optimization and social media marketing solutions. If you're looking for an award-winning website development company, then you are in the right place. We serve all business sectors at very competitive rates all over the world; our strength is quality and deadlines. We have a team of skilled and professional website designers and web developers.
We have an excellent team of WordPress developers and graphic designers. We have long experience in building custom WordPress themes and modifying them, whether you need the functionality of a full website or a WordPress blog. We have a team of WordPress experts and Magento experts. I will keep you updated along the way.

Service Description: Custom themes, PSD to WordPress, WP e-Commerce, WooCommerce, Cart66, S2Member, WishList, WordPress theme customization, plugin customization, plugin development, Thesis theme, ThemeForest, Hebrew conversion, LTR to RTL, responsive websites, mobile websites, Genesis theme, Headway theme, StudioPress theme, OptimizePress theme, ShopperPress theme, BuddyPress, ClassiPress, business and property listings, multilingual websites, WordPress multisite, AgentPress, GeoTheme, etc.

Very passionate and skilled web developer who has knowledge of many programming languages to help provide you with the best quality work.

I am a junior HTML and CSS developer. If you need to convert your dreams into a real website, if you have enough time to wait for pixel-perfect coding, and you want a flexible/fluid site that looks great on all types of screens, then hire me and you will not be disappointed. Professional qualities and skills:
- HTML5
- CSS3
- Pixel-perfect, cross-browser and flexible development
- W3C-valid and optimized CSS
- SEO semantic coding
- Logical, thoughtful hand-coding
- Table-less coding (I'm using tables only if they are really necessary)

I've had many years of programming experience in AJAX, CSS, C/C++, HTML, Java, MySQL, PHP, XHTML, XML. I can communicate live with you by MSN/GoogleTalk/Yahoo/ICQ/Skype or any other popular IM. Feel free to check my past reviews. Most of my clients leave projects fully satisfied!
Not only can I fix and debug programming code and make it work for you, but I can also design websites from scratch, enhance website functionality and features, and implement new features and plug-ins for your existing sites.

Web development is my passion. I have 3 years of experience in the field of web development. Expertise in HTML5, CSS3, PHP, MySQL, JavaScript, jQuery, WordPress, theme customization and CMS.

Web designer, software developer, database designer, C# and miscellaneous JS libraries, depending on the recommendation.
OPCFW_CODE
War Sovereign Soaring The Heavens – Chapter 3109 – Creating a Little World

"The Little World will be cold and empty, just like a Spatial Ring, at the beginning. To make it more lively, I'll have to personally bring soil in and transplant whatever plants I want."

'Insignificant… Now, I am much too insignificant. Even after I became a Ten Directions Celestial Duke with the boost from the Heaven Sacrificial Divine Fruit, I'm still much too weak.'

"It might be possible. Don't forget he has 99 Divine Veins. Moreover, the main branch of the Divine Tree of Life has already acknowledged him as its master," the Chaos Divine Fire said at this moment.

'Is he really as arrogant and unruly as the legends say? If he is, will he challenge the Jade Emperor so he can become the new Heavenly Emperor of the Jade Emperor Heaven?'

After becoming a Ten Directions Celestial Lord, the next stage was to become a Celestial Emperor. Similarly, Celestial Emperors were divided into ten sub-stages, and there was a huge difference between an ordinary Celestial Emperor and a titled Celestial Emperor. Even among titled Celestial Emperors, there were stronger ones and weaker ones. Powerful titled Celestial Emperors were beings like Sun Wu Kong, Yang Jian, and the Heavenly Emperors who ruled the Devata Realms.

Swoosh! Swoosh! Swoosh!

After taking a deep breath, Duan Ling Tian controlled his Soul Energy and pierced the ball of compressed Celestial Origin Energy with it. He did not find the method difficult because he had carefully studied the methods for a Celestial Duke to create a Little World.
Eventually, Duan Ling Tian picked up the Memory Celestial Talismans, one after another.

"This…" Duan Ling Tian was perplexed by this abrupt development.

For example, the Yan Mountain Mansion in the Equal Heaven Territory where he was currently staying, and the Profound Nether Mansion in the Southern Heaven Territory where he had previously stayed, were only small areas of the Jade Emperor Heaven and the Spirit Overarching Heaven respectively.

It was not unusual for the Chaos Divine Fire to stay silent, but Duan Ling Tian was surprised when the Chaos Divine Earth remained silent as well. "Chaos Divine Earth, aren't you going to tell me about the surprise?"

"I've read about the technique to create a Little World while I was in the Perfection Sect's library… I'll try creating one now."

Following that, Duan Ling Tian felt as though his 99 Heavenly Veins were boiling as Heaven and Earth Spirit Energy from outside surged into them. The Heaven and Earth Spirit Energy transformed into Celestial Origin Energy that was instantly sent to his soul through his Heavenly Veins. To be more precise, it was sent to the Little World he was creating.

Usually, a Little World created by a Celestial Duke was compact. To be precise, the Little Worlds they created were located in their bodies. Similar to a Spatial Ring, they could store items within.
The difference was that Little Worlds were capable of holding living things. Little Worlds created by Celestial Lords and Celestial Kings could last forever as long as they remained a secret and no one, apart from their creators, entered them. However, once other people stepped foot into a Little World, it would begin to exhaust its energy until it eventually vanished.

If Duan Ling Tian used his consumable Royal Grade Celestial Weapon to gain the strength of a One Basic Celestial Lord and used it to create a Little World, the Little World would be comparable to Little Worlds created by Celestial Lords. However, it would collapse quickly. This was because he would have to nourish it with a Celestial Lord's Celestial Origin Energy and Soul Energy for a period of time before it stabilized, and he could not maintain the strength of a Celestial Lord since the consumable Royal Grade Celestial Weapon's power would dwindle with every use.

On the other hand, the information in the Memory Celestial Talismans had painted a clearer picture of the vastness of a Devata Realm for Duan Ling Tian. It was bigger than he had imagined!

Various thoughts appeared in Duan Ling Tian's mind. He could not help but wonder if Sun Wu Kong would challenge the Jade Emperor for the position of Heavenly Emperor of the Jade Emperor Heaven. He felt slightly excited thinking about it. If Sun Wu Kong succeeded, there was no doubt the news would spread throughout the Jade Emperor Heaven. Moreover, such sensational news would not be confined to the Jade Emperor Heaven. Even if he had already returned to the Spirit Overarching Heaven, he was certain the news would travel there too!
To ease his mind, Duan Ling Tian shifted his attention to opening a Little World, an ability exclusive to Celestial Dukes and those who were stronger. Naturally, there were differences between Little Worlds created by stronger Celestials and weaker Celestials. "The chances of it happening are low… There's no point in mentioning it now if it doesn't happen. If it succeeds, it'll be a surprise for you then." The Chaos Divine Earth's childish voice rang in Duan Ling Tian's mind. Its reply only caused more questions to arise in Duan Ling Tian's mind. "I've finally reached the crucial step… According to the records I read, I should pierce this ball of compressed Celestial Origin Energy with my Soul Energy. The Soul Energy will draw power from the Celestial Origin Energy at that point to create a Little World. During the process, my Soul Energy has to be in equilibrium with the compressed Celestial Origin Energy." The Memory Celestial Talismans had recorded all sorts of anecdotes, serving as a casual way for him to pass time. "Near my heart?" Duan Ling Tian was curious. "Why?" After taking a deep breath, Duan Ling Tian controlled his Soul Energy and pierced the ball of compressed Celestial Origin Energy with it. He did not find the process difficult since he had thoroughly studied the methods for a Celestial Duke to create a Little World. With the information he had obtained from the Memory Celestial Talisman, it was not difficult for Duan Ling Tian to figure this out.
Keyword match exactly for words followed by non-character word Suppose I want to highlight all the words "night" in the following sentence: "Every night, I like to watch the moonlight. Especially tonight, it's not every night that we have such a beautiful night." If I use options = {accuracy: "exactly"}, I get the following sentence, which is not what I want. "Every night, I like to watch the moonlight. Especially tonight, it's not every night that we have such a beautiful night." But if I use options = {accuracy: "partially"}, I also get an undesired result: "Every night, I like to watch the moonlight. Especially tonight, it's not every night that we have such a beautiful night." The result which I want is: "Every night, I like to watch the moonlight. Especially tonight, it's not every night that we have such a beautiful night." Did I miss something in the docs? Hi @fesnt, Thanks for reporting this issue. You didn't miss anything; this behavior is as expected. The documentation says: When searching for "lor" only those exact words with a word boundary will be marked. Word boundary in this case means either a blank or the end/start of the phrase. That said, it would mark "night," if it were followed by a blank instead of a comma. Nevertheless, I already thought that at some point someone would want exactly what you've just described. So I am willing to offer a solution. Instead of making a blank necessary, we could offer an additional option that allows either a blank, the end/start of the string, or a comma or dot. What do you think? There might be more characters that should be allowed in addition to comma and dot, e.g. a dash or hyphen. The question is: where is the limit? Hello @julmot, thanks for the reply, it was really fast. I think there are lots of characters that could be the limit; maybe we could generalize to any non-word character followed by a blank one as the limit, but I'm not sure if this would cause some problems depending on the word.
Take a look at this, I was playing around trying to find some solution: http://regexr.com/3dffj My English knowledge is limited, and I don't know if this generalization would fail in some cases. Another problem is that I'm basing this on English; if we think of another language, this pattern would probably break in some cases. I don't really know, just thinking out loud here. What do you think? @fesnt I think using \W isn't a bad idea, but this will not work with diacritic characters, as they aren't in the range A-Za-z that is matched by \w. So we need a character class for the normal ASCII word characters + unicode. As JavaScript doesn't offer a character class for this, I think the only solution is to match it the other way around: special characters. I'll play around with it tomorrow and let you know once I have something new. @fesnt Here is my result. Using the character class \w will not match diacritic characters like ö, ä, ü, etc., so we can't use it. Manually defining all word characters including unicode will be almost impossible, as it's such a wide variety. The other way around – matching all special characters – will be hard too, as there is no character class and e.g. emoticons are also unicode "special characters" (non-word characters). So, as I don't want to include all diacritic characters nor all special characters in a regex, and there is no character class to match those, I think the only way would be to let the user specify the characters themselves. That said, with an option the user could define custom characters that will be handled as a word boundary. For example: var options = { "wordBoundaryCharacters": ",.-–" }; What do you think? [Unicode in JS RegExp](http://stackoverflow.com/questions/280712/javascript-unicode-regexes) I think it solves all of my problems. It serves both as a start/end boundary, right? Because otherwise we would have a problem with quotes, or any non-word character that comes at the start.
Alright, then let's take this idea. I'll start implementing it soon. However, I'm still thinking about a smart and tiny option name. The above option name is way too long. The only short ones that I could come up with are limiters or wordLimiters. Coming up with names is not my thing. Hope it gives you some ideas. +1 on the inclusion of quotation marks (", ”, “). I'm not sure if apostrophes would be appropriate, but I'll raise that, too. (night's?) I'll give my use case to show you an example. When a user clicks a word in a given text, mark.js highlights all occurrences of that word. "I" and "I'm" are two distinct words, because "I'm" is the contraction of "I am". So, in my use case I would not highlight "night's", because it is a different word from "night". However, this decision about distinct words is entirely arbitrary; I can see other cases where you would highlight the way you suggested. But how to give this option? @fesnt @rhewitt22 From my point of view, both of your use cases could be solved by specifying custom limiters. When specifying the limiter ', then "night's" and "I'm" will be highlighted; when not specifying ', nothing will be highlighted. If this doesn't solve your use cases then please let me know. I, too, need an option similar to {accuracy: "exactly"} but that ignores all adjacent punctuation and will mark night whether it's night, night's, "night", (night), night!, night?, or night. It sounds like the limiter option should work. Looking forward to its implementation. Thanks for your input @rainerschuhsler. And you're right, with specifying all punctuation characters this should work. The only problem I have with this is that every user needs to find out all the characters they need themselves and add them to the initialization of mark.js. It would be cleaner to have a simple yes or no option and do the rest for them. But as @fesnt suggested, this is not what every user wants. So I don't see any other way.
Just want to let you know that this is the next point of my to-do list. @rainerschuhsler @fesnt @rhewitt22 Just implemented this in v6.4.0. Let me know if you encounter problems. It works perfectly! Thank you so much @julmot!! In case anybody else is trying to limit most punctuation like me, I used the following option: 'accuracy': { 'value': 'exactly', 'limiters': ['!', '@', '#', '&', '*', '(', ')', '-', '–', '—', '+', '=', '[', ']', '{', '}', '|', ':', ';', '\'', '\"', '‘', '’', '“', '”', ',', '.', '<', '>', '/', '?'] } @rainerschuhsler Thanks for this stack of limiters. Do you have any basis for this? I'd like to mention something like that on the website, in case others need the same. But I'd like to be sure that it includes really all punctuation marks. Btw: I think you don't need to escape '\"' as you're putting " inside '. Thanks @julmot, this is really helpful.
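The limiter-based matching discussed in this thread can be sketched as a plain regex, independent of mark.js. This is a hypothetical helper written only to illustrate the idea (`findExact` and its signature are made up here, not part of the library):

```javascript
// Hypothetical sketch of limiter-aware exact matching (not mark.js source).
// A word counts as an exact match when it is bounded by the start/end of
// the string, whitespace, or one of the user-supplied limiter characters.
function findExact(text, word, limiters) {
  // Escape characters that are special in a regex / character class.
  const esc = (s) => s.replace(/[-[\]{}()*+?.,\\^$|#\s]/g, '\\$&');
  const boundary = `[\\s${limiters.map(esc).join('')}]`;
  const re = new RegExp(`(^|${boundary})(${esc(word)})(?=$|${boundary})`, 'g');
  const offsets = [];
  let m;
  while ((m = re.exec(text)) !== null) {
    offsets.push(m.index + m[1].length); // start offset of the matched word
  }
  return offsets;
}

// "night," and "night." match; the "night" inside "tonight" does not.
console.log(findExact('Every night, tonight night.', 'night', [',', '.']));
```

The lookahead (rather than a consuming group) for the trailing boundary keeps adjacent matches like "night night," from swallowing each other's separator.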
Verified Expert in Engineering Tiberiu is a senior developer specializing in building complex architectures on the back end with elegantly simple code and easy-to-use UXs on the front end. His main stack consists of React, Vue, Node.js, and Flask. He worked as a freelancer for eight years, with clients all around the globe. He's delivered projects for Precision Drilling, Reserve, Bayern AG, Airbus Defence & Space, Enel, and Topflight Apps. Tiberiu has also founded two startups and is now getting back into freelancing. Visual Studio Code (VS Code), Docker, Linux, Ruby The most amazing... ...project I've had was Reserve, scaling from four countries to 25 and covering more than 75% of Latin America. I was the release manager and lead developer. Senior React Native Developer - Managed the release process for the app, ensuring timely delivery to customers. Implemented automatic deployments for mobile pipelines for our 100,000 members. - Adapted the application for the Mexican audience through strategic STP integration, contributing to increased user appeal and engagement. - Created reports based on deployment metrics and integrated them into the process to speed up releases. Coordinated with stakeholders to ensure seamless code pushes. - Spearheaded the refactoring initiative, implementing a new design system across the entire application for a cohesive and modern user experience. - Implemented blocking notifications in just two weeks to inform users about breaking changes inside the app. - Reimplemented the connection to the back end using OpenAPI instead of generic fetches, improving collaboration between back-end and front-end developers. - Improved the security of the application by implementing SOC 2 compliance. - Implemented custom secure and persistent state management for the wallet application.
- Accelerated the application's build time by an impressive 10x by transitioning from webpack to Vite as the build system while also eliminating create-react-app scripts. - Optimized application performance by implementing all components locally, removing Ant Design (antd), and significantly reducing bundle size, leading to faster loading times. - Devised and implemented an innovative state management logic within the URL, streamlining data passing through the company and enhancing overall efficiency. - Pioneered the development of a robust mechanism for generating reports, empowering data-driven decision-making processes. - Founded a fintech startup with a team of four that offered a SaaS product with a business model comprising B2B and B2C components. The B2B part allowed all the banks and fintechs to coexist in a shared space, and the B2C part brought customers to this space. - Oversaw and created the architecture of the app and designed how the microservices would interact with each other. Eventually transitioned to the front-end side because there was a need at the time. - Worked in a hectic environment and built a very exciting product. Showcased it to some strong players in the fintech world, including the directors of Banca Transilvania and Banca Feroviara Romana. - Built a company whose principal value proposition was putting all the banks and financial institutions on common ground by implementing APIs from the banks that complied with European open banking regulations. This created a free fintech market. - Developed a reliable front end with an emphasis on improving the user experience by optimizing the flows that the user has in the application. - Solved bugs in the codebase that were directly impacting the clients of the application. - Implemented strict checks for TypeScript in the codebase.
- Developed multiple applications in my time there, as this company specialized in creating MVPs: an online store app, an app for self-monitoring a COVID-19 infection, and an interface for tracking the prices of cryptocurrencies. - Tracked and fixed bugs using Clockify and ClickUp as reporting tools. Later, we transitioned to using GitHub for organizing our projects. - Oversaw a project team with three other employees tasked with creating HIPAA-compliant applications from the ground up. Software Engineer | Partner - Improved airplane simulators and ensured they met FAA standards, including EET upgrades. - Developed innovative machine-learning techniques for flight simulators. - Built a billing service for ENEL alongside my team; this project was especially valuable during the pandemic, when everything moved online. - Maintained and upgraded Boeing, CAE, and Airbus airplane simulators. P4 Deep Learning Detection of Network Intrusions I am now studying the impact of Markov chains in network anomaly detection. • You can connect to a server • You can video chat with others in a single room • You can have multiple rooms and let people connect to them Signature Forgery Detection Tool https://github.com/tibi77/simple-signature-analysis Presentation Application for the National Botanical Garden The project was the MVP for the National Botanical Garden in Bucharest, which continues to use the code that my team and I wrote for it.
ECMWF IFS Python Bindings https://github.com/esowc/openifs-scm-python Tools and Examples for Visualizing HPC Performance Data for the IFS https://github.com/esowc/HPC-performance-profiling-tool Express.js, React Native, Redux, Flutter, Next.js, Tailwind CSS, OAuth 2, Electron, Flask, gRPC React, Node.js, REST APIs, Turf.js, MobX, D3.js, Redux-Saga Figma, Firebase Analytics, Webpack 4, Webpack 3, Sentry, Visual Studio App Center Parallel Programming, REST, Web Architecture, Clean Code Linux, Firebase, WordPress, Mobile, iOS, Android, Mapbox, Docker, Kubernetes, Amazon Web Services (AWS), Visual Studio Code (VS Code), Azure, Webflow AmCharts, Full-stack Development, Full-stack, Storybook, Minimum Viable Product (MVP), Architecture, Progressive Web Applications (PWA), Front-end, Mobile First, CI/CD Pipelines, IT Security, German, AI Programming, Distributed Software, Distributed Systems, P4, Global Product Management, Visx, PWA, Vite, Application State Management Master's Degree in Computer Science University Politehnica of Bucharest - Bucharest, Romania Bachelor's Degree in Computer Science Engineering University Politehnica of Bucharest - Bucharest, Romania Linux Professional Institute Certification Linux Professional Institute Deutsches Sprachdiplom
GodMode: Unable to install and run beta-1 / beta-2 on Intel-based Mac I have a MacBook Pro with an Intel processor. I was able to successfully install and run the beta-0 release of GodMode, but later beta versions (beta-1 and beta-2) fail to start up on my machine. The smol-menubar releases (v0.0.17) seem to work without issue. It appears to be a problem specific to the main GodMode app in releases after beta-0. Steps to reproduce: Install GodMode beta-0 - runs successfully Install GodMode beta-1/2 - app fails to launch I've tried: Deleting previous GodMode installs before installing the new version Installing GodMode beta-1/2 after installing beta-0 Deleting previous menubar installs before installing the new version I used these release files: https://github.com/smol-ai/GodMode/releases/download/v1.0.0-beta.2/GodMode.1.0.0-beta.2.dmg https://github.com/smol-ai/GodMode/releases/download/v1.0.0-beta.1/GodMode.1.0.0-beta.1.dmg Can confirm. When looking at the .app in the release dmg (GodMode.1.0.0-beta.2.dmg), it looks like the binary under the MacOS folder is for arm64: ❯ file GodMode GodMode: Mach-O 64-bit executable arm64 It could be this error in package.json under the package build target info for macOS: "arch": [ "arm64", "'universal", "x64" ] ("universal" has a stray single quote at the beginning) Same for me. 2019 Intel-based Mac Update: just tried beta-3, still have the same issue even after the #162 fix. Same - still the same issue after trying with beta-3 Same for me, macOS Big Sur version 11.5.1 Thanks - if anyone knows Electron well enough to figure out the build process, let me know! Hey @anugrahsinghal (and all), can you try installing the universal build: https://github.com/smol-ai/GodMode/releases/download/v1.0.0-beta.3/GodMode-1.0.0-beta.3-universal.dmg I was just able to get this running on my 2017 MacBook Pro w/ Intel. LMK if this version works for you! @seanoliver Working on my 2019 Intel Mac!
Hey @seanoliver Can confirm that the universal build works on my Intel Mac too. @seanoliver Thanks for the release. Would you be able to do the same for the newest version, v1.0.0-beta.4? There's an annoying bug (#168) that I would like to get rid of. Cheers, man. Can't run the universal build; macOS complains that the app can't be verified.
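For reference, the suspected root cause above is the stray quote in the electron-builder arch list. A corrected fragment might look like the following; treat this as a sketch, since the surrounding `build.mac` layout (target list, dmg vs zip) varies per project and is not shown in the thread:

```json
"mac": {
  "target": [
    { "target": "dmg", "arch": ["x64", "arm64", "universal"] }
  ]
}
```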
Using sysfs has two advantages over using dmidecode: 1. It's faster; no need to start a process (actually two processes, sh -c 'dmidecode -s ...' and the dmidecode process itself) for each value we want to collect. 2. It makes it possible to collect DMI info on hosts without dmidecode, such as CoreOS hosts. When present (it's not present e.g. on RHEL 5 hosts), sysfs is at least as reliable as using dmidecode directly. However, the UUID is returned incorrectly on some VMWare VMs (those with hardware version 13) - the byte order is swapped around. Some such affected VMs show the same wrong UUID in sysfs and some show the correct UUID in sysfs and the wrong UUID only in dmidecode. The workaround for hosts showing the wrong uuid (byte order swapped) is to use an Awk script that parses raw dmidecode output (script given in the first link above). In this commit, the use of the awk script is not limited to affected VMWare VMs because (a) there is no clearcut and easy way to identify affected hosts and (b) the awk script works on all hosts anyway - it is strictly more reliable than "dmidecode -s system-uuid". Also, (c) the awk script even works on hosts that have an older version of dmidecode that doesn't support the "system-uuid" keyword The fallback to get the uuid from sysfs is still needed for hosts that have sysfs but not dmidecode (such as CoreOS). 1. Hosts without dmidecode (which formerly would not have dmi inventory at all) will now have dmi inventory taken from sysfs. (E.g. coreos hosts) 2. Hosts currently showing the wrong UUID or an error message in place of a UUID will now show the correct UUID via dmidecode and Awk. (E.g. VMWare 13 VMs, or RHEL 5 hosts with dmidecode-2.7-1.28.2.el5) 3. Other hosts will start pulling their dmi inventory from sysfs instead of from "dmidecode -s ...", except UUID which they will pull from "dmidecode -u -t1 | awk ...". The values will all remain the same. 
Performance-wise, pulling from sysfs will be strictly faster than pulling from "dmidecode -s ...", which should offset the negligible impact of using the awk script for the collection of one value (uuid). (I also cleaned up the comment list of "other values you may want to collect" since system-uuid has been collected by default since
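The byte-order problem described above stems from SMBIOS 2.6 changing the encoding of the UUID's first three fields to little-endian, so tools assuming the old encoding display those fields byte-swapped. The transformation itself can be illustrated with a small sketch (a hypothetical helper for illustration only; the actual fix in this commit is an awk script over raw `dmidecode -u -t1` output):

```javascript
// Illustration of the SMBIOS UUID byte-order swap. The first three UUID
// fields (time_low, time_mid, time_hi) are stored little-endian since
// SMBIOS 2.6; reversing their bytes converts between the two displayed
// forms. The last two fields are unaffected.
function swapUuidByteOrder(uuid) {
  const reverseBytes = (hex) => hex.match(/../g).reverse().join('');
  const [a, b, c, d, e] = uuid.toLowerCase().split('-');
  return [reverseBytes(a), reverseBytes(b), reverseBytes(c), d, e].join('-');
}

// A VM reporting 33221100-5544-7766-... actually has UUID 00112233-4455-6677-...
console.log(swapUuidByteOrder('33221100-5544-7766-8899-aabbccddeeff'));
```

Note that the swap is an involution: applying it twice returns the original UUID, which is why it is safe to normalize in either direction.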
on 8 September 2012 The reviews already published about this book are pretty spot on. Learn C++ and this book will help you understand how to start programming a game in a good, structured manner while explaining many of the functions and procedures along the way. You will learn some 3D as well, but not to a large degree. I enjoyed reading this book; it filled in a few blanks that I had about Windows and the functions in the DirectX library that greatly help in programming a game. Good reading, all. on 6 December 2012 The content of this book is spot on and exactly what I was looking for (a DX9 engine project with explanations). However, I myself am studying computer science with an emphasis on video game development and thus am quite familiar with C++ in general. This book is absolutely horrendous for starters: not only does he dump complete lumps of 200 lines right in front of your face ("here, add this to your solution"), he also forces you to use a lot of conventions that HE finds useful, completely denying the fact that you yourself should discover which conventions YOU find useful. Not to mention that he comes across as a cocky bastard. So if you're looking for that little bit of guidance you need to get started with DX9, take it. If you're looking to get started with C++, I'd recommend reading Beginning C++ Through Game Programming, 3rd edition (same series) first; I also have that book and it's so good at helping you with the basics (and is not written by a loony). on 17 June 2010 This is not a book for those unaccustomed to DirectX programming. To put it better, it does not get down into the nitty-gritty details of how the engine gets up and running. In certain chapters the source code of the engine is not presented in its entirety; you have to copy the code from the CD to get it working. What it does perfectly well, though, is get you programming games rather fast.
You don't have to understand everything about DirectX to get the most out of the engine. Just take the engine, add certain necessary functions, and start coding games. Buy another book, though, if you wish to understand how to build an engine from the ground up. on 2 October 2011 This book is great for showing you what goes into a 2D game engine; for an engine that comes with a book, this is quite good. There are better ones, don't get me wrong, but they are infinitely more complex to understand and to use, and they are 3D, while this is 2D. One thing that should be said now, and I see one other reviewer has stated this: this book "DOESN'T USE OPENGL". It's DirectX 9c based. You get up and going fairly quickly in this book, so a good knowledge of DirectX would be a great advantage, though you can still learn quite a bit with only a little knowledge of DirectX. The language this book uses is C++, so a good understanding of that is essential; people who know C well shouldn't find it too hard to follow. An excellent place to start in your quest to learn about game engines, highly recommended. The author also has a forum for any questions related to this or any other of his books. on 1 February 2009 Yes, as not stated in the title or the description here, this book is for programming on the DirectX platform, and OpenGL users should look elsewhere. Unfortunately I fall into the latter camp and thus find this book redundant, as the code is not written in any way that could easily be converted.
Agile development processes help businesses release software much more quickly than would be possible using classic design and development cycles such as those based on the waterfall model. Most web applications require an agile methodology because they need to be updated very often and very quickly to meet customer needs. However, many businesses struggle with including security in their software development life cycle. Because of that, application security testing may become a major bottleneck and completely undermine the efforts invested in agile practices. Let's have a look at some secure SDLC best practices that will help your development teams become truly agile. Close the Team Gap Most businesses that develop their own web applications have separate software development teams and security teams. These teams cooperate through submitted issues and, possibly, with the help of project managers. However, there is rarely direct contact or cooperation. If security tests are performed outside of the agile cycle, in final deployment phases (e.g. staging), this also leads to tensions. Security teams may find blocker issues just before a planned release, which may cause unexpected delays and put a lot of pressure on developers to fix those issues quickly. Sometimes, this can even lead to hostility between the teams and jeopardize the entire project. The most important best practice for agile development methodologies is therefore to close the gap between development teams and security teams. While the business needs a dedicated security team to develop security policies and handle security issues outside the SDLC, agile teams should also (ideally) include security experts. For example, in many businesses that follow Scrum methodologies, an agile team includes not only developers and business stakeholders but also technical writers and user experience analysts. Therefore, there is no reason why such teams should not include a security expert as well.
Make Security Part of Code Quality Software developers are expected to know how to ensure source code quality. It is usual practice that every piece of code that is written or modified goes through code review by another developer. However, such code review usually does not include finding and eliminating security vulnerabilities. This is because neither the code creators nor the reviewers are properly trained to recognize such vulnerabilities and they don’t have automated tools to recognize them. Security awareness must begin here, with the initial process of code creation. An SQL injection vulnerability in code should be perceived as a fundamental error, no different to using a bubble sort algorithm when quick sort is available. Therefore, best practice is to have every developer in the company tested for their knowledge of security vulnerabilities that may be introduced into code. If they are found to be lacking in this respect, they should receive specialist training to teach them how to recognize and avoid potential vulnerabilities as well as fix them in other people’s code. Recruitment processes that involve a test of the developer’s abilities to write quality software should also test knowledge of security vulnerabilities. No Application Left Behind Large organizations often handle thousands of different development projects at the same time. These large projects are carried out by different organizational units, frequently in different countries. In such environments, it is common that DevOps teams work independently and there is little room for a unified SDLC model. This leads to some teams adopting secure practices for the entire software development life cycle while others release their applications with no security testing at all, leaving some applications severely vulnerable. 
Even if an organization wants to globally unify the entire process of application development, testing, and deployment, it is often hindered by the differences in approaches taken by particular teams. For example, applications may be written using completely different languages, frameworks, and IDEs, and managed using different CI/CD and issue tracking tools. One team in one country may be developing applications in Java using Jira and Jenkins while another might be programming in PHP and using GitHub for both issue tracking and CI/CD. In such conditions, introducing common, automated security testing into the testing phase of the SDLC may be very difficult. Not only does this require several different approaches, but if the organization wants to use SAST and SCA, it may simply be impossible to use the same tools for every team. This, in turn, requires separate management workflows, introducing even more potential problems. The only viable solution in such cases is to use dynamic application security testing (DAST), which is language-independent and easy to integrate into CI/CD pipelines. This is why we would recommend that you start your secure SDLC efforts with DAST, not SAST or SCA. And when choosing your DAST solution, be sure to pick one that is designed to be included in the SDLC – like Acunetix.
Office 365 Compliance Framework and Microsoft Kaizala The Office 365 compliance framework document describes the compliance classification levels, with required controls, for various Microsoft online services. Microsoft Kaizala follows the same compliance framework to manage and operate the service as well as to handle customer data. Presently, Microsoft Kaizala is certified for compliance category A by the internal Microsoft Office 365 compliance team, which is responsible for managing this framework. This essentially means that Microsoft Kaizala has strong privacy and security commitments, with promises of: - No mining of customer data for advertising - No voluntary disclosure to law enforcement agencies While there is very minimal human intervention to keep the service running, all the engineers who work on the product are required to undergo security and privacy awareness training. Microsoft also ensures that all personnel certify acceptance of responsibilities for privacy requirements. Kaizala compliance features for customers Microsoft Kaizala services and data are hosted in local Microsoft Azure data centers for Indian customers. All the messages, attachments, and Actions shared in Kaizala groups for Indian mobile numbers are stored only in the data centers located in India. Kaizala also provides capabilities that help customers meet their own compliance requirements. The following are the top compliance-related features currently available in the product: 1. View and manage all Kaizala users with data access Kaizala maintains an organization-specific Kaizala User List (KUL), which is like a phone-based directory of all of its Kaizala users, for its administrators for central management. Any user who becomes a member of an organization group in Kaizala automatically becomes a member of the KUL. This means that it is a list of all Kaizala users who have potential access to the organization's data, i.e. all the members of its organization groups.
Admins can associate additional custom attributes specific to their organization, such as Aadhaar number, location, and designation, for easier identification. It is also possible to delete a user from the KUL, which automatically revokes the group memberships for the user. 2. Remove a user from all organization groups The Kaizala management portal offers advanced user and group management capabilities, which make it easier for administrators to onboard and exit employees and partners. By searching for a user's phone number, the portal lists all the groups that the user is a member of. The administrator may choose to remove a user from some or all of the groups in one go. 3. Wipe out data from the client device When a user leaves or is removed from an organization group, Kaizala automatically clears all messages, Kaizala Actions, and attachments from the client device. This is a unique feature of Kaizala which makes it possible for organizations to prevent users from stealing organization data and is especially useful in hostile employee or partner termination scenarios. Kaizala also provides secure and open REST APIs to handle such scenarios programmatically in extended business flows from external systems. We will continue to build additional security and compliance capabilities into the product based on feedback from our customers.
Angular 14 migration: loading issues with zone.js
I tried to migrate to Angular 14. Initially I had problems with webpack: it did not find the exposed modules. Updating webpack to the latest version (5.74.0 at the time of writing) resolved that issue. The errors were:
cannot find module '@angular/platform-browser'
cannot find module 'exposed module'
After upgrading the webpack version I get the following errors that I cannot resolve:
Without MFE: When I try to run the application without the MFE, just using the index.html:
Uncaught Error: Shared module is not available for eager consumption: 10627
    at __webpack_require__.m.<computed> (consumes:428:1)
    at __webpack_require__ (bootstrap:19:1)
    at 62517 (polyfills.js:10:65)
    at __webpack_require__ (bootstrap:19:1)
    at startup:5:1
Module 10627 is zone.js.
Uncaught SyntaxError: Cannot use 'import.meta' outside a module (at styles.js:8267:29)
Error: NG0908: In this configuration Angular requires Zone.js
    at new NgZone (core.mjs:26033:19)
    at getNgZone (core.mjs:27032:75)
    at PlatformRef.bootstrapModuleFactory (core.mjs:26899:24)
    at core.mjs:26955:41
With MFE:
Uncaught Error: Uncaught (in promise): Error: NG0203: inject() must be called from an injection context such as a constructor, a factory function, a field initializer, or a function used with `EnvironmentInjector#runInContext`. Find more at https://angular.io/errors/NG0203
Error: NG0203: inject() must be called from an injection context such as a constructor, a factory function, a field initializer, or a function used with `EnvironmentInjector#runInContext`.
Find more at https://angular.io/errors/NG0203
    at injectInjectorOnly (core.mjs:4775:15)
    at Module.ɵɵinject (core.mjs:4786:12)
    at Object.StoreFeatureModule_Factory [as useFactory] (ngrx-store.mjs:1407:20)
    at Object.factory (core.mjs:6974:38)
    at R3Injector.hydrate (core.mjs:6887:35)
    at R3Injector.get (core.mjs:6775:33)
    at injectInjectorOnly (core.mjs:4782:33)
    at ɵɵinject (core.mjs:4786:12)
    at useValue (core.mjs:6567:65)
    at R3Injector.resolveInjectorInitializers (core.mjs:6824:17)
    at resolvePromise (zone.js:1213:31)
    at resolvePromise (zone.js:1167:17)
    at zone.js:1279:17
    at ZoneDelegate.invokeTask (zone.js:406:31)
    at Object.onInvokeTask (core.mjs:26218:33)
    at ZoneDelegate.invokeTask (zone.js:405:60)
    at Zone.runTask (zone.js:178:47)
    at drainMicroTaskQueue (zone.js:582:35)
I use the ngrx store here.
Configurations:
Host:
module.exports = withModuleFederationPlugin({
  shared: {
    ...shareAll({
      singleton: true,
      strictVersion: true,
      requiredVersion: 'auto',
    }),
  },
});
MFE:
module.exports = withModuleFederationPlugin({
  name: 'progress',
  filename: 'progressRemoteEntry.js',
  exposes: {
    './Module': 'apps/asd/src/app/remote-entry/entry.module.ts',
  },
  shared: {
    ...shareAll(
      {
        singleton: true,
        strictVersion: true,
        requiredVersion: 'auto',
      },
      ['rxjs']
    ),
    rxjs: {
      singleton: true,
      strictVersion: true,
      requiredVersion: '*',
    },
  },
});
I use imports like import { filter } from 'rxjs'; instead of rxjs/operators, which is why I excluded rxjs. If you need any more info, please let me know. I tried my best at debugging into webpack. Tell me if you have any other ideas. I forgot to add that without the custom webpack config the application compiles and runs. I get the same errors with the legacy version 12 configuration using 'text/javascript' without ES modules. Finally, I used the share function from NX. It has limitations (like the filename, etc.), but it works.
You can use it to capture those momentary flashes of ideas that you would otherwise wish to recall later, and then synchronize them across your various devices. Kingsoft Office Suite Free 2012 is an alternative to Microsoft Office that offers three flagship tools for free: a text editor, a spreadsheet editor, and a presentation creator. It provides efficient and reliable information organization options, offering you the possibility to create your own work style. For custom tags, we're looking at mid-January for 100% availability in OneNote for Windows 10. When a network is available, you will be able to share your notes with other users. I had this login problem too after installing OneNote 2016. You don't need a 2016 key to install 2016, as far as I know? Yes, for me the standard download page still worked in Germany. Start on your laptop, then update notes on your phone. This enables you to collaborate with other users in the notebook in offline mode. Access via the links in other current answers doesn't work for me. Live embedded Office files will be a game changer, I strongly believe! Is there anywhere a 32-bit installer for OneNote 2016 Desktop? Microsoft has hidden the download link somewhat recently, which could mean that this version will not be there forever. Use professional-looking effects for text, shapes, and pictures, including softer shadows, reflections, and OpenType features such as ligatures and style alternates. Microsoft Office Home and Student is a worthy upgrade for businesses and individual users who need professional-level productivity apps, but it will take some time to get acclimated to the reworked interface. I swear I tried both x86 and x32 and neither worked.
OneNote seamlessly integrates, as expected, with the utilities included in the Office suite, as well as with other applications. Get OneNote for any of your devices or use it on the web. If I go to www. The only place I could find at Microsoft was and. Open your email, calendar, contacts, and tasks fast. Capture virtually any type of information and share it easily. If Microsoft has pulled all references to OneNote Desktop from its sites, it doesn't bode too well for the future. Now, with push-based email, appointments, and contacts from Outlook. Who knows which is which? But as for now you can still find it here: ignore the big message which says that you already have OneNote installed. Using the navigation bar speeds up communication. Whenever we created a notebook, we needed to choose whether we wanted to save it on our computer, our local network, or the Web. Best, Lucrezia. Do you mean the settings in the screen capture do not work for you? Get things done with your friends, family, classmates, and colleagues. As for tags, tag search is out to 50% of users right now, with plans to ramp up to 100% in the next few days. A fair feature benchmark is available here: bit. Thanks, Bernd, but it appears you have to have a work or school account to use this? The environment is not fancy and focuses exclusively on providing flexibility. Tools like Access, Accounting Express, Publisher, and Outlook are part of the 2010 suite. Visual Studio Code is a reduced version of the official Microsoft development environment, focused exclusively on the code editor. Stay on the same page and in sync wherever you are. However, the option to download it from the OneNote website, which is where I've previously downloaded it from, appears to have been removed, so it's possible Microsoft now expects you to either pay for a copy of Office or use the cut-down store version. And because all your notes are in one place, find what you need with just a few clicks anywhere: at your desk, in meetings, or on the road.
Use OneNote at home, school, and work to capture thoughts, ideas, and to-dos. You can also download these from Softpedia, as well as an Office ISO from there, although I've not tried that. It is also highly compatible with the Microsoft Office suite and can view and handle documents created within it. Multi-user capability: featuring multi-user capability, this software permits offline paragraph-level editing, with synchronization and merging on the next connection. Sure, you can tag things. Are you continuing to use OneNote 2016 then? There is indeed a difference: how can I install OneNote 2016? The interface uses tabs to make it easy to arrange different projects and open multiple pages. You must register with OneDrive, where you will find the backup. It's past its regular support date, but that only means you don't get any new features anymore, just patches. Notebook-based organization: Microsoft OneNote also lets you save information in pages organized into sections within notebooks. In the past, Microsoft OneNote was part of Microsoft Office. Microsoft OneNote 2016 features: in this case, the screenshot text must be searchable. Start creating in OneNote with an Office 365 subscription. Bernd: Thanks, I'll give it another go and look out for that link. I noticed that when you shared the download link, you gave me the direct link rather than a web page containing the link. Like a binder, it lets you organize your information into sections. Share your notebooks with others for viewing or editing.
MTV Networks Company Overview and Responsibilities
The Multiplatform Engineering team, responsible for building and supporting Viacom's public-facing, award-winning mobile apps and Web sites around the world for leading brands in popular culture such as MTV, Nickelodeon, Comedy Central, and BET, is looking for smart, creative people at the beginning of their careers as software engineers. If you love to solve problems creatively, can work well in a team environment, are interested in building the best software, and want to always be learning, then we want to hear from you. You also understand data structures and algorithms, can explain the code you write, are open to feedback from others, and know how to give constructive feedback to team members. This is a staff position reporting into one of the Multiplatform Engineering Guilds, depending on career direction: Mobile App Engineering, Back-end Engineering, or Web Front-end Engineering.
- Create solutions with a team of engineers developing fast, stable, and reliable apps, Web sites, and/or services
- Write well-tested, readable code
- Work with technical and non-technical staff to translate business requirements into technical requirements
- Participate in design and code reviews with other developers, giving and taking feedback
- Act as part of the third-level support team, working with first- and second-level support to resolve problems and perform root cause analysis
- Be flexible and willing to learn both independently and with other team members
- Learn a large existing code base and add new features to it
B.S. or M.S.
in Computer Science or a related technical field (or equivalent) preferred
- You participate and have had success in hackathons
- Passion for solving business problems with automation
- You contribute to open source
- Some experience with video/image processing
- Some experience developing games and game frameworks
- Demonstrated interest in machine learning and natural language processing
- Demonstrated interest in exploring data and creating visualizations
- Experience required with at least one database system such as MySQL, Postgres, SQLite, MongoDB, DynamoDB, or Redis
- Experience desired developing with social APIs (Facebook, Twitter, Instagram) and OAuth
- Experience desired with full-stack web development (HTML5, Mobile First, Responsive, REST)
- Working knowledge of network protocols like TCP/IP, HTTP, and HTTPS
- Knowledge of Continuous Integration and Test-Driven Development
- Understanding of source control systems like Git and SVN
MTV Networks Company Website: http://www.mtv.com
This is the official copyright compliance policy ("Copyright Compliance Policy") for MTV.com ("Site," "we," "us," or "our"), an Internet website offered in cooperation or connection with the MTV television channel or programming service ("MTV Channel"), and this Copyright Compliance Policy applies regardless of what type of Device you use to access the Site. The MTV Channel and the Site (together, "MTV") are provided by Viacom Media Networks ("VMN"), a division of Viacom International Inc. (collectively, the "Parent Companies"). This Copyright Compliance Policy sets forth the procedures undertaken by MTV to respond to notices of alleged copyright infringement from copyright owners and to terminate the accounts of repeat infringers; it does not cover any other procedures, for any other purpose, or the procedures of the Parent Companies or any subsidiaries and affiliates of the Parent Companies (collectively, "Affiliates"), or any other company, unless specifically stated.
About GitHub CLI
A few months ago the GitHub CLI had its first 1.0 release; you can read more about it on their announcement blog. It promises to make working with GitHub easier. I challenge you to find someone who didn't mess up their first git push. Anyway, I had been meaning to try it out when it was released but only got around to it recently. I've written down my notes.
As expected on my Mac ...
brew install gh
One-time setup was easy enough.
$ gh auth login
? What account do you want to log into? GitHub.com
- Logging into github.com
? How would you like to authenticate? Login with a web browser
! First copy your one-time code: 13D8-ABCD
- Press Enter to open github.com in your browser...
✓ Authentication complete. Press Enter to continue...
? Choose default git protocol SSH
- gh config set -h github.com git_protocol ssh
✓ Configured git protocol
✓ Logged in as stevemar
Type in the auth code and we're all set!
Using the CLI
Clone a repo
Simply run the command below. Note that I did need to delete the existing repo I had cloned locally first, then reclone it.
gh repo clone IBM/helm101
Create a new repo
Another one-liner with lots of options.
$ gh repo create stevemar/repo-test
? Visibility Public
? This will create 'stevemar/repo-test' in your current directory. Continue? Yes
✓ Created repository stevemar/repo-test on GitHub
? Create a local project directory for stevemar/repo-test? Yes
Initialized empty Git repository in /Users/stevemar/workspace/temp/repo-test/repo-test/.git/
✓ Initialized repository in './repo-test/'
$ cd repo-test
$ ll
drwxr-xr-x  3 stevemar  staff    96B 11 Jan 13:49 .
drwxr-xr-x  4 stevemar  staff   128B 11 Jan 13:49 ..
drwxr-xr-x  9 stevemar  staff   288B 11 Jan 13:49 .git
Yay, new repo!
Creating an issue
This was fun: much like a git commit message, it popped open my editor.
$ gh issue create
Creating issue in stevemar/repo-test
? Title Create content for the repo
? Body <Received>
? What's next?
Submit
https://github.com/stevemar/repo-test/issues/1
The last test I did ...
$ gh issue list
Showing 1 of 1 open issue in stevemar/repo-test
#1  Create content for the repo  about 1 minute ago
You can tell that a lot of love and effort went into this CLI. It's pretty damn good.
Mining Critical Events in Longitudinal Data: Challenges and Opportunities
- Speaker: Prof. Chandan Reddy - Wayne State University, Dept. of Computer Science
- Date: Tuesday, Feb. 16, 2016
- Time: 1:00pm - 2:00pm
- Location: Room 214 (NVC)
Due to recent advancements in data acquisition and storage technologies, various disciplines have attained the ability not only to accumulate a wide variety of data but also to monitor observations over longer time periods. In many real-world applications, the primary objective of monitoring these observations is to better understand and estimate the time point at which a particular event of interest will occur in the future. One of the major difficulties in handling such longitudinal data is that the data is usually censored, i.e., it is often incomplete since some instances either become unobservable or experience no event during the monitoring period. Due to this censored nature, standard statistical and machine learning based predictive algorithms cannot readily be applied to analyze the data. In addition to the presence of censoring, such longitudinal event data poses unique challenges to the field of predictive analytics and thus creates opportunities to develop new algorithms. For example, in many practical scenarios, the censored-data challenges are compounded by several other closely related complexities, such as the presence of correlations within the data, high dimensionality of the data, temporal dependencies across multiple time points, lack of available information from a single source, and difficulty in acquiring sufficient event data in a reasonable amount of time. In this talk, I will describe new computational algorithms that can address these challenges and effectively capture the underlying predictive patterns in longitudinal data by directly estimating the probability of event occurrence.
The performance of these new models for mining critical events will be demonstrated on important problems such as forecasting patient risk in healthcare, project success prediction in crowdfunding, and cancer survival estimation in bioinformatics. Finally, some of the ongoing research work in our lab related to the student retention problem and crime data analysis will also be discussed. Chandan Reddy is an Associate Professor in the Department of Computer Science at Wayne State University. He received his Ph.D. from Cornell University and his M.S. from Michigan State University. He is the Director of the Data Mining and Knowledge Discovery (DMKD) Laboratory and a scientific member of the Karmanos Cancer Institute. His primary research interests are data mining and machine learning, with applications to healthcare analytics, social network analysis, and bioinformatics. His research is funded by the National Science Foundation, the National Institutes of Health, the Department of Transportation, and the Susan G. Komen for the Cure Foundation. He has published over 60 peer-reviewed articles in leading conferences and journals, including SIGKDD, WSDM, ICDM, SDM, CIKM, TKDE, DMKD, TVCG, and PAMI. He received the Best Application Paper Award at the ACM SIGKDD conference in 2010 and was a finalist in the INFORMS Franz Edelman Award Competition in 2011. He is a senior member of the IEEE and a life member of the ACM.
#!/usr/bin/env python3
import matplotlib.pyplot as plt

from TankCircuit import TankCircuit


class FrequencyPlotter:
    """Takes a TankCircuit object, the number of points to plot, and start and
    end frequencies, then plots both matchCap vs. frequency and tuneCap vs.
    frequency. plot() sets the circuit's w to each sampled frequency in turn,
    starting from startF."""

    def __init__(self, circuit, n, startF, endF):
        self.n = n
        self.startF = startF
        self.endF = endF
        self.circuit = circuit

    def plot(self):
        step = (self.endF - self.startF) / self.n
        frequencies = []
        matchCaps = []
        tuneCaps = []
        # Sample n+1 points from startF to endF inclusive.
        for i in range(self.n + 1):
            w = self.startF + i * step
            self.circuit.setW(w)
            frequencies.append(w)
            matchCaps.append(self.circuit.matchCap())
            tuneCaps.append(self.circuit.tuneCap())
        plt.plot(frequencies, matchCaps, label='matchCap')
        plt.plot(frequencies, tuneCaps, label='tuneCap')
        plt.xlabel('Frequency')
        plt.ylabel('Capacitance')
        plt.legend()
        plt.show()
There are a lot of moving parts to any application system. One such moving part is the creation of, and dependence upon, linked servers inside of SQL Server. These linked servers give users the ability to write queries as if the data were local by referencing a four-part name. I've written before about the use of linked servers and the performance issues that may arise. Today I want to talk about something more fundamental about linked servers: connectivity. Creating a linked server is fairly straightforward; you can read the reference here. You have a handful of ways to handle authentication between the instances. These methods include using the security context of the current login, of the current user, or passing along remote credentials. The one you choose will depend on your needs and requirements. The specific method chosen isn't important for today's post. Today is more about the failure to communicate between servers. Connections between servers can fail for a variety of reasons: permissions get changed, AD accounts get modified (or removed), passwords get reset. And sometimes the use of a linked server gets lost over time. It was not uncommon for me to migrate databases to a new server and find out weeks later that a linked server was needed. At some point in my career, I had been bitten enough times by linked servers failing to connect that I built a way to automate the checking of linked server connections. I wrote about it here, and I even updated the script recently. And I would have put that script into GitHub by now, except for something that happened last February: while at SQL Konferenz in Darmstadt, Germany, I was struck with an idea. While having some post-event German beverages, I was talking with William Durkin (blog | @sql_williamd) regarding the dbatools.io project. This project is wonderful for migrating data between servers, or even an entire instance. I noticed that there was no cmdlet for testing a linked server connection.
I asked "hey, do you think that might be something useful?" William said yes, and off I went to email Chrissy LeMaire (blog | @cl). A few emails later I found myself connecting to the dbatools.io GitHub repo and merging my cmdlet into the project. So that's where my code now sits, for everyone to use. You could download my specific cmdlet easily, but what you should do is download all the dbatools.io goodness. dbatools.io is in the Microsoft PowerShell Gallery, so installing dbatools is as easy as running a single Install-Module command. And then you can run any of the commands easily. Ever want to safely remove a database? There's a cmdlet for that: Remove-SqlDatabaseSafely. You can find a cmdlet for just about everything. And, if you don't see one, you can contribute to the project and add the missing cmdlet. For a while now I have been meaning to take all the scripts I've used over the years and get them loaded to my GitHub repo for everyone to use and modify as they see fit. I like the idea of contributing to this project instead. I'm not going to spend time trying to market and pimp my scripts at my own repo; it's easier for me to share what I can over at dbatools.io. I'd rather contribute to the larger project there than have a bunch of scripts here. The dbatools.io project is awesome. I like it and I think you should, too. I've contributed and I think you should, too. Being a part of the dbatools.io team reminds me of what it was like when I was first starting out as a DBA and exchanged ideas with a handful of folks I would meet at conferences. If you are just getting started in SQL Server administration, are looking for some tools, and want an easy way to learn some PowerShell, then dbatools.io is the place for you.
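For completeness, the PowerShell Gallery install mentioned above boils down to this (the cmdlet name shown for testing linked servers is the one as merged at the time; dbatools has renamed cmdlets over the years, so check Get-Command -Module dbatools for current names):

```powershell
# Install dbatools from the PowerShell Gallery (may prompt to trust the repository)
Install-Module dbatools -Scope CurrentUser

# Then any cmdlet is available, e.g. testing linked server connectivity:
Test-SqlLinkedServerConnection -SqlServer localhost
```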
According to the rationales given in the OP and the comments and suggestions made by the community, I propose the following near-term and mid-term plan to push NuBits adoption.
1. Prepare the Nu exchange integration guide
Payment processors and exchange gateways do not only offer exchange interfaces but also provide liquidity (acting like LPCs). We should add a section telling them things that are not known outside the Nu-sphere:
a) Nu allows payment processors and exchange gateways to apply for compensation fees from Nu shareholders at market-determined rates. The exchangers will need to post an LPC proposal for shareholders to vote on. (A proposal template should be given: who, how much liquidity on NBT/USD, for how long, how often to report, whether they promise to re-balance and at what frequency, how much fee is asked, etc.)
b) Whether a shareholder-approved LPC or not, in addition to buying NuBits from the market, exchangers can purchase NuBits from shareholders or Nu reserve fund custodians at $1/NBT.
2. Contact a list of selected payment processors and exchange gateways
We send official emails/messages to promising payment processors and exchange gateways to introduce NuBits, invite them to integrate NuBits, and point them to the integration guide. There are hundreds of payment processors and exchange gateways; it's better to pick a small number to start with. Here is a list of top e-currency issuers listed under Method 1 in the OP.
edit: add http://www.neteller.com
Somehow I don't expect many of them will pay a lot of attention to NuBits, because there must be many people proposing all kinds of ideas to these big players. So I have chosen from the smaller processors and gateways listed by bestchange for buying BTC with PM-USD.
I looked at each of them and selected the ones according to these criteria:
- Has an English-language web interface
- Trades PM USD (annotated after each link: PM: PerfectMoney, OK: OKPay, PY: Payeer, WM: WebMoney, EG: EgoPay) and BTC
- My webpage security alarm doesn't go off
The ones with a "*" are also chosen in the PM partners list (see below). The ones in this list generally have better-made, more professional interfaces. If you have other lists that can help pick out the more active, liquid, developed, and potentially friendly exchanges, feel free to suggest them.
edit: https://www.247exchange.com/ is asked about in this post.
3. Help the processors and gateways to integrate
I expect contacted processors and gateways to do most of the integration work. However, to accomplish a solid step using Methods 1 and 2 in the OP, the Nu team and community might need to give some help to the processors and exchanges (because they may have various levels of development and cryptocurrency expertise), to vote on LPC proposals, and to help with testing. Nu should treat the interested processors and gateways as potential partners. Once several processors and gateways have NuBits integrated, we will contact price comparison and rating sites such as bestchange and OKchanger to suggest that NBT be included. This will not only get more users to see NBT, but will also help other exchanges see the need to offer NBT.
4. Contact more potential partners
There are about 200 entries on the PerfectMoney certified partners list. I looked at the first 60 or so and picked out about half of them using the same criteria as above.
Have PM, PY, OK: http://exchanger.org.ua/english/ (no automatic processing) *
Has PM and OK: http://www.zharifsofiaexchanger.com/site/home.php eg PPC
http://www.nicciexchange.net/ wm eg
https://www.velaex.com pm wm
http://www.standardgoldng.com/ wm eg
https://xzzx.biz/ wm *
http://www.abijanexchange.com/ wm eg
5.
Make Method 3 code available
Once quite a few processors and gateways have successfully integrated NBT/fiat, we will offer to acquire the core code described in the OP in exchange for a bounty or NuShares. The plan will get more detail once the first several steps are done. Please comment, especially the Nu team. I see strategic importance in getting this done. I will put this forward as a motion if needed.
----- snip -----
re: PSXMemTool problems with multi-slot saves
I've experimented a bit with this problem myself tonight, and I have to agree with you that PSXMemTool is completely unsuitable for working with multi-slot saves. That is only because ".GME" is a full memcard file type; you get the same result with any other full-card type, like ".mcr" or similar. The problem appears to be that Simon didn't implement routines to reassign slot positions and recreate directory headers for the non-first slots of a save.
Quote: Originally Posted by alkarl
Let's clarify that a little: in the case of a three-slot save, for example, the header from the ".mcs" file must be modified to indicate which slot is to be the second of this save (which depends on what is free at import time) before that header can be placed in the directory section. Then a middle-slot header must be created for that 2nd slot, and it must in turn indicate which slot is the third, and that one must be given a final-slot header in the directory section. And of course, for a four-slot save there would be two middle-slot headers instead of just one, and so on for larger saves.
Actually, I think that Simon Mallion has dropped development of this tool. Version 1.19b has been around for quite a while now, and I know that he is aware of the lacking support for multi-slot saves, because if you try to export a multi-slot save you get an error message stating that this simply isn't supported yet.
Quote: if someone can confirm that and maybe report the bug. PSXRC is not at fault, can't fix the problem. the most unbelievable thing is how that bug hasn't been reported yet. :mad:
But the behaviour on import of such saves is indeed a bug, as the program clearly ignores the save size field in the header structure. It should at least have detected the size and refused the import, if he couldn't implement it properly at that time.
To change the subject to something brighter, I was wondering if you'd consider making a command-line version of your conversion tool, simply accepting a normal command-line argument for the name of the file to be converted. My point in asking for this is that it would then be very easy to write a small BAT file which could be used as a target for mouse drag-and-drop of files. So I could simply pick up a file from a window with the mouse, and by dropping it on the BAT file (or its shortcut), it would invoke your conversion tool with the correct working directory and filename, without me having to type anything by hand. I use this method a lot to make command-line tools more convenient, and I'm sure it could work well for your tool as well. It's sort of like a "poor man's GUI", and without requiring any deep Windows-specific coding ;)
Best regards: dlanor
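For what it's worth, the wrapper described above would only take a few lines (a sketch only; "convtool.exe" is a hypothetical placeholder for whatever the conversion tool's executable ends up being called):

```bat
@echo off
rem Drag-and-drop wrapper: %1 is the full path of the dropped file.
rem "convtool.exe" is a hypothetical name for the conversion tool.
cd /d "%~dp1"
convtool.exe "%~nx1"
pause
```

Here %~dp1 expands to the dropped file's drive and directory, so the tool runs with the correct working directory, and %~nx1 is the bare filename with extension.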
Converting JPG image to PDF without resizing image with Ghostscript or iText
I'm trying to merge images in JPG format with standard PDF documents while keeping the images at the same size. Earlier I was using ImageMagick's convert, but it results in a huge quality drop since it converts everything into images, so I'm switching to Ghostscript (or possibly iText). I found this code, which inserts a scaled image into an A4 page:
gs \
  -sDEVICE=pdfwrite \
  -o foo.pdf \
  /usr/local/share/ghostscript/8.71/lib/viewjpeg.ps \
  -c \(my.jpg\) viewJPEG
PdfWriter from iText, used this way or that way, could be an alternative, but it also adds the image into a page. After inspecting ImageMagick's behavior, I found the command it was using, which I think is closest to my solution, but it doesn't seem to work when I try to modify or use it. How should I modify it?
gs -q -dQUIET -dSAFER -dBATCH -dNOPAUSE -dNOPROMPT -dMaxBitmap=500000000 -dAlignToPixels=0 -dGridFitTT=2 -sDEVICE=pngalpha -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -r72x72 -sOutputFile=out_gs.pdf fox_big.jpg
That's strange: when you talk about iText, you refer to articles on two obscure web sites that aren't related to iText in any way. I'd expect you to link to articles on the official iText web site. Is there a reason why you aren't using the iText web site when working with iText? I have the impression that there is an entire ecosystem of bad tutorials, usually obsolete code copy/pasted together, with the only intention of generating pageviews...
I found the first link in another SO question related to iText. When searching "itext jpg pdf" on Google, I found the second link as the first result, while your page was ninth. I promise to do better research before asking a question in the future.
Just in passing, the reason the stated command doesn't work is that it doesn't use viewjpeg.ps and so can't read a JPEG file directly.
You need to include the "/usr/local/....../viewjpeg.ps -c (fox_big.jpg) viewJPEG" part. Of course, the remainder of that command produces a PNG, but perhaps that's what you expected. And if you are going to use Ghostscript, better to use something a little newer; 8.71 is (IIRC) 6 years old.
The answer to your question can be found on the official iText web site: How to add multiple images into a single PDF? In the MultipleImages example, we take a selection of images and convert them to a PDF: multiple_images.pdf. The page size is set to match the size of the image. For the first image:
Image img = Image.getInstance(IMAGES[0]);
Document document = new Document(img);
As you can see, we pass the img to the Document constructor. This works because Image extends the Rectangle class. For the subsequent images, we change the page size:
document.setPageSize(img);
Note that we also need to set the absolute position:
img.setAbsolutePosition(0, 0);
Please go to the official web site when you want to find info and examples on iText. I've spent many months writing all that content and putting it on the web site. It's frustrating when I see that people don't take advantage of all that work. (It feels like I wasted my time.)
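Putting the quoted pieces together, a minimal loop looks roughly like this (a sketch against the iText 5 API; the IMAGES paths and output filename are placeholders, and you would adapt them to your own files):

```java
import java.io.FileOutputStream;

import com.itextpdf.text.Document;
import com.itextpdf.text.Image;
import com.itextpdf.text.pdf.PdfWriter;

public class MultipleImages {
    // Placeholder paths; substitute your own JPGs.
    static final String[] IMAGES = { "fox_big.jpg", "other.jpg" };

    public static void main(String[] args) throws Exception {
        Image img = Image.getInstance(IMAGES[0]);
        // Page size = first image size, so the image is never resized.
        Document document = new Document(img);
        PdfWriter.getInstance(document, new FileOutputStream("multiple_images.pdf"));
        document.open();
        for (String path : IMAGES) {
            img = Image.getInstance(path);
            document.setPageSize(img);   // resize the page, not the image
            document.newPage();          // page size changes take effect here
            img.setAbsolutePosition(0, 0);
            document.add(img);
        }
        document.close();
    }
}
```

The design point matches the answer: each page is sized to its image rather than the image being scaled to a fixed A4 page, which is what avoids resizing.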
STACK_EXCHANGE
Voices: Do companies take college student app developers seriously? In this day and age, where technology changes at lightning speed, young developers and entrepreneurs seem to be everywhere. According to a study by the U.S. Department of Labor, employment of computer and information research scientists is likely to increase by 15% by 2022. Students at the University of Illinois at Urbana-Champaign (UIUC) have discovered their own means of practical experience outside the classroom in the form of apps they created based on their experiences in college. Krishna Mittal, a freshman at UIUC, is currently developing an app called Shaked, which enables users to exchange contact information in a simpler, more convenient way. Mittal was inspired to create the app after repeatedly finding networking with others cumbersome. Neil Nijhawan, a junior at UIUC, co-founded an app called Shortnotice, which uses face-to-face reactions as a driver for making plans. Aisha Davis, technical account manager at Microsoft, says her computer science journey started in high school when she attended DigiGirlz, a Microsoft YouthSpark program designed to assist high school girls who are interested in the field. Davis says she was able to learn HTML and incorporate the WeatherBug app on a website she created. Not only did she learn technological skills, but she also met her mentor, Lindsay Lindstrom, who has continually helped her. When she attended college at Johnson C. Smith University in North Carolina, Davis took part in Imagine Cup, a global competition for young technologists held by Microsoft. She developed a mobile weight loss application and made it to the first round. Davis says she mentors classmates at her university just as Lindstrom mentored her.
Reflecting on her many experiences, Davis says her involvement in application development was merely one of several factors that helped her obtain a job at Microsoft -- and that her passion for computer science really helped pave the path to her success. “I wasn’t a 4.0 student, I just had a dream, I had a passion and I had a mentor,” Davis says. “People say dreams don’t come true. Microsoft was my dream company and I was your average Joe and I’m here now and I’m living my dream every day.” When recruiting individuals to work for Microsoft, Mike Scott, senior program manager of Microsoft Academy of College Hires, says the evaluation process consists of looking at the person as a whole, including their college degree, passion for technology, involvement in internships, engagement outside the classroom and the obstacles they overcame. Although app development can add value to an applicant’s resume and differentiate them from other candidates, Scott says recruiters really look for the underlying competencies associated with creating the app, which include innovation, problem solving, customer focus, efficiency of code and passion for technology. “We look for individuals who are committed to being lifetime learners and agile enough to look for new ways of doing things and keeping up with the changes in the industry,” says Scott. Additionally, Scott says networking, finding internships, joining organizations, utilizing your career center and working on side projects are several ways students can get a leg up and get noticed by large corporations like Microsoft. While most students may go the typical route of schooling, the occasional student takes a more unorthodox path. Some of the most famous developers and entrepreneurs in the computer industry -- Mark Zuckerberg, Steve Jobs and Bill Gates among them -- dropped out of school to work on their ideas.
Ari Weinstein, creator of Workflow and DeskConnect, dropped out of the Massachusetts Institute of Technology (MIT) his freshman year to take part in the Thiel Fellowship, in which 20 fellows are granted $100,000 to focus on their work and research. During his time at MIT, Weinstein says, he was very conflicted because, although he learned a lot from his classes, he was spending a lot of time working on his apps. He says he loves the idea of creating an app that helps people get things done more efficiently, and the fellowship granted him the opportunity to further develop his software and start his own company. In January 2014, Weinstein attended a hackathon at the University of Michigan and won first place for his idea for the Workflow app, which serves as a personal automation tool that gives users the option to create their own workflows and apps. The app was launched in December. “Certainly in my experience…the skillsets are more important than the degree. I was lucky enough to be offered jobs having never gone to college or only done a year in college,” Weinstein says. “I believe that companies do take college student developers seriously.” Walbert Castillo is a student at the University of Illinois at Urbana-Champaign and a spring 2015 USA TODAY Collegiate Correspondent. This story originally appeared on the USA TODAY College blog, a news source produced for college students by student journalists. The blog closed in September of 2017.
OPCFW_CODE
// WARNING: This file is auto-generated and any changes to it will be overwritten
import lang.stride.*;
import greenfoot.*; // (Actor, World, Greenfoot, GreenfootImage)

/**
 * The beach world for the crab scenario: a crab, flies, skulls and stars.
 */
public class CrabWorld extends World
{
    /**
     * Create the crab world (the beach). Our world has a size of 560x560 cells,
     * where every cell is just 1 pixel.
     */
    public CrabWorld()
    {
        super(560, 560, 1);
        prepare();
        addObject(new Fly(), 1, 2);
        // Add one fly at a random position (the generated loop ran exactly once).
        int x = Greenfoot.getRandomNumber(getWidth() - 1);
        int y = Greenfoot.getRandomNumber(getHeight() - 1);
        addObject(new Fly(), x, y);
    }

    /**
     * Prepare the world for the start of the program.
     * That is: create the initial objects and add them to the world.
     * (The scenario builder recorded many intermediate setLocation() calls;
     * only the final position of each object is kept here.)
     */
    private void prepare()
    {
        Crab crab = new Crab();
        addObject(crab, 266, 536);
        End end = new End();
        addObject(end, 282, 47);
        crab.setLocation(283, 532);

        Fly fly1 = new Fly();
        addObject(fly1, 59, 215);
        fly1.setLocation(52, 303);

        // A second fly was added and then removed again while the
        // scenario was being built; the add/remove pair is kept verbatim.
        Fly fly2 = new Fly();
        addObject(fly2, 498, 399);
        removeObject(fly2);

        Skull skull1 = new Skull();
        addObject(skull1, 78, 469);
        skull1.setLocation(40, 504);
        Skull skull2 = new Skull();
        addObject(skull2, 113, 84);
        skull2.setLocation(39, 53);
        Skull skull3 = new Skull();
        addObject(skull3, 352, 47);
        skull3.setLocation(524, 40);
        Skull skull4 = new Skull();
        addObject(skull4, 516, 513);
        skull4.setLocation(516, 516);

        Star star1 = new Star();
        addObject(star1, Greenfoot.getRandomNumber(560), Greenfoot.getRandomNumber(560));
        Star star2 = new Star();
        addObject(star2, Greenfoot.getRandomNumber(560), Greenfoot.getRandomNumber(560));
        Star star3 = new Star();
        addObject(star3, Greenfoot.getRandomNumber(560), Greenfoot.getRandomNumber(560));
        addObject(star3, 501, 194); // generated code re-added star3 at a fixed position
    }
}
STACK_EDU
I have a strange problem, and I haven't seen it anywhere else on the web. I have a basic Flash document with a large number of movie clips activated by many individual mouse events. The idea is, the user has certain text information displayed whenever they mouse over a certain element. Now, the problem presents itself after the user mouses over a good number of the elements: some of the text appears to have been "painted over" or erased. Here is an example: There are about 160 more of these movie clips, but this doesn't happen to every one of them. I have compared them and there is nothing different between problem movie clips and normal movie clips. This is a piece of the ActionScript I have; I won't paste it all in here, it's really long:

chadron.buttonMode = true;
haysprings.buttonMode = true;
hemingford.buttonMode = true;

Thank you for any help. I need it. This is the function I have set up for the mouse events: The movie clip is set up to fade in when the target movie clip is moused over, and fade out when moused out of. Here is what my timeline looks like inside each individual target movie clip element: The "header" layer is what contains the text (which is also a movie clip) being faded in and out. Here is a link to it on the web too, in case you would like to see it in action. It won't start having problems until you mouse over quite a few of them, so be patient and it will eventually show you what I'm talking about.

I added that line of code and it gives me this error:

animation, Layer 'Actions', Frame 1, Line 660 | 1118: Implicit coercion of a value with static type Object to a possibly unrelated type flash.display:DisplayObject.

What that does is place the hovered object above anything else on the playing field. So the problem you were/are having is likely that some remnants of hovered pieces that do not manage to transition out fully are left behind and start blocking content.
Some are not sitting below others, which would account for some not being affected.

I see; now that I understand what it does, this has presented another slight issue for me. If you visit the link I posted earlier (http://administrators.toddbecker.org/new/ne.html) you will notice that after mousing over the different points on the map, the white space below becomes a "mouseover area", causing some of the text elements to appear and disappear when you move your mouse around below the map. I'm sure there is a proper way to deal with this. Before, I had simply made a transparent box into a movie clip and covered the bottom portion of the page with it, solving this problem. But this may be what caused the initial problem. Any insight would be appreciated. I apologize if I should have posted a new thread for this second issue.

Yeah, I'd be lying if I said I didn't see that coming. Had you not already made so many of them, I would suggest you take a different approach and use code to manage the transitions instead of the timeline animations. But here is an option that might work: create an invisible movieclip (alpha = 0) that covers the area below, and each time you addChild(MovieClip(e.currentTarget)), also addChild() that invisible movieclip so that it blocks access to anything beneath it. If it turns out that you need to be able to interact with what is displayed below it, just switch the order of the addChild() calls so that the invisible movieclip sits one below it (and addChild() it again on rollout to avoid the same problem you have now).
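The invisible-blocker advice above leans on one display-list rule: calling addChild() on a child that is already on the list re-attaches it at the top instead of duplicating it. A small plain-JavaScript model of that rule (the class and names are mine, not the Flash API):

```javascript
// Minimal model of a display list where addChild() of an existing child
// moves it to the top instead of duplicating it, which is how Flash's
// DisplayObjectContainer.addChild behaves.
class DisplayList {
  constructor() { this.children = []; }
  addChild(child) {
    const i = this.children.indexOf(child);
    if (i !== -1) this.children.splice(i, 1); // already present: detach first
    this.children.push(child);                // last entry is drawn on top
    return child;
  }
  top() { return this.children[this.children.length - 1]; }
}

const stage = new DisplayList();
stage.addChild("map");
stage.addChild("hoverText");
stage.addChild("blocker");   // invisible clip covering the area below
stage.addChild("hoverText"); // re-add: hoverText now renders above blocker
console.log(stage.children.join(",")); // map,blocker,hoverText
```

This is why re-adding the hovered clip (or re-adding the blocker) controls which one wins without ever creating duplicates.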
OPCFW_CODE
// Copyright (c) 2013, Webit Team. All Rights Reserved.
package webit.script.util;

import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import webit.script.Context;
import webit.script.core.ast.Expression;
import webit.script.core.ast.Optimizable;
import webit.script.core.ast.ResetableValueExpression;
import webit.script.core.ast.Statment;
import webit.script.core.ast.loop.LoopCtrl;
import webit.script.core.ast.loop.LoopInfo;
import webit.script.core.ast.loop.Loopable;
import webit.script.exceptions.ParseException;
import webit.script.io.Out;

/**
 *
 * @author Zqq
 */
public class StatmentUtil {

    @SuppressWarnings("deprecation")
    public static Object execute(final Expression expression, final Context context, final Out out) {
        try {
            context.pushOut(out);
            Object result = expression.execute(context);
            context.popOut();
            return result;
        } catch (Throwable e) {
            throw ExceptionUtil.castToScriptRuntimeException(e, expression);
        }
    }

    @SuppressWarnings("deprecation")
    public static void execute(final Statment statment, final Context context, final Out out) {
        try {
            context.pushOut(out);
            statment.execute(context);
            context.popOut();
        } catch (Throwable e) {
            throw ExceptionUtil.castToScriptRuntimeException(e, statment);
        }
    }

    public static Object execute(final Expression expression, final Context context) {
        try {
            return expression.execute(context);
        } catch (Throwable e) {
            throw ExceptionUtil.castToScriptRuntimeException(e, expression);
        }
    }

    public static Object[] execute(final Expression[] expressions, final Context context) {
        int i = 0;
        final int len;
        final Object[] results = new Object[len = expressions.length];
        try {
            for (i = 0; i < len; i++) {
                results[i] = expressions[i].execute(context);
            }
            return results;
        } catch (Throwable e) {
            throw ExceptionUtil.castToScriptRuntimeException(e, expressions[i]);
        }
    }

    public static Object executeSetValue(final ResetableValueExpression expression, final Context context, final Object value) {
        try {
            return expression.setValue(context, value);
        } catch (Throwable e) {
            throw ExceptionUtil.castToScriptRuntimeException(e, expression);
        }
    }

    public static void execute(final Statment statment, final Context context) {
        try {
            statment.execute(context);
        } catch (Throwable e) {
            throw ExceptionUtil.castToScriptRuntimeException(e, statment);
        }
    }

    public static void executeInverted(final Statment[] statments, final Context context) {
        int i = statments.length;
        try {
            while (i != 0) {
                --i;
                statments[i].execute(context);
            }
        } catch (Throwable e) {
            throw ExceptionUtil.castToScriptRuntimeException(e, statments[i]);
        }
    }

    public static void executeInvertedAndCheckLoops(final Statment[] statments, final Context context) {
        int i = statments.length; // assert > 0
        final LoopCtrl ctrl = context.loopCtrl;
        try {
            do {
                --i;
                statments[i].execute(context);
            } while (i != 0 && ctrl.getLoopType() == LoopInfo.NO_LOOP);
        } catch (Throwable e) {
            throw ExceptionUtil.castToScriptRuntimeException(e, statments[i]);
        }
    }

    public static Expression optimize(Expression expression) {
        try {
            return expression != null && expression instanceof Optimizable
                    ? (Expression) ((Optimizable) expression).optimize()
                    : expression;
        } catch (Throwable e) {
            throw new ParseException("Exception occur when do optimization", e, expression);
        }
    }

    public static Statment optimize(Statment statment) {
        try {
            return statment != null && statment instanceof Optimizable
                    ? ((Optimizable) statment).optimize()
                    : statment;
        } catch (Throwable e) {
            throw new ParseException("Exception occur when do optimization", e, statment);
        }
    }

    public static List<LoopInfo> collectPossibleLoopsInfo(Statment statment) {
        return (statment != null && statment instanceof Loopable)
                ? ((Loopable) statment).collectPossibleLoopsInfo()
                : null;
    }

    public static List<LoopInfo> collectPossibleLoopsInfo(Statment[] statments) {
        int i;
        if (statments != null && (i = statments.length) > 0) {
            LinkedList<LoopInfo> loopInfos = new LinkedList<LoopInfo>();
            List<LoopInfo> list;
            do {
                --i;
                if ((list = collectPossibleLoopsInfo(statments[i])) != null) {
                    loopInfos.addAll(list);
                }
            } while (i != 0);
            return loopInfos.size() > 0 ? loopInfos : null;
        }
        return null;
    }

    public static LoopInfo[] collectPossibleLoopsInfoForWhileStatments(Statment bodyStatment, Statment elseStatment, int label) {
        List<LoopInfo> list;
        LoopInfo loopInfo;
        if ((list = StatmentUtil.collectPossibleLoopsInfo(bodyStatment)) != null) {
            for (Iterator<LoopInfo> it = list.iterator(); it.hasNext();) {
                if ((loopInfo = it.next()).matchLabel(label)
                        && (loopInfo.type == LoopInfo.BREAK || loopInfo.type == LoopInfo.CONTINUE)) {
                    it.remove();
                }
            }
            list = list.isEmpty() ? null : list;
        }
        if (elseStatment != null) {
            List<LoopInfo> list2 = StatmentUtil.collectPossibleLoopsInfo(elseStatment);
            if (list == null) {
                list = list2;
            } else if (list2 != null) {
                list.addAll(list2);
            }
        }
        return list != null && list.size() > 0 ? list.toArray(new LoopInfo[list.size()]) : null;
    }
}
STACK_EDU
Fixing arrow keys in IEx in a tmux session

I use tmux and vim for everything, and recently started working with Elixir. Whenever I run an Elixir process, including iex -S mix, I cannot use the arrow keys: it instead prints ^[[A for the up arrow, ^[[B for the down arrow, etc. How can I fix tmux or iex so they properly communicate the arrow keys?

EDIT 1: The output of echo $TERM is tmux-256color. My terminals are set up following this tutorial: https://medium.com/@dubistkomisch/how-to-actually-get-italics-and-true-colour-to-work-in-iterm-tmux-vim-9ebe55ebc2be I have three machines set up with the same terminals and same config files (shared via a GitHub repo). On two machines (one iMac, one MacBook Pro) the up arrow works, and on one machine (MacBook Air) it doesn't. How can I go about finding what is not working on the one machine?

EDIT 2: The Elixir and Erlang versions are the same: Erlang/OTP 22 [erts-10.5.1] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1] [hipe] [dtrace] Elixir 1.9.1 (compiled with Erlang/OTP 22)

What is the output of echo $TERM inside of tmux? @JonasDellinger edited the original question with more info. Hmm, sadly this doesn't seem to be the culprit :/ Which login shells are used on your machines? Are the settings in iTerm -> Preferences -> Profiles tab -> General section different? The limitation is probably in iex. The usual workaround would be rlwrap. @ThomasDickey you are my hero! That worked great; if you want to answer the question I'd be happy to mark it as solved. I had a similar problem and tried everything: erl flags, different installs of Erlang and Elixir, rlwrap (it's just not as good), etc. But the problem fixed itself with the latest upgrades on my Arch Linux install; I will probably never know the culprit.

Elixir's interactive shell iex does not know about arrow keys (for command history) by default.
According to its documentation, you can enable that when starting it.

Start with a parameter:
iex --erl "-kernel shell_history enabled"

Or start with an environment variable:
export ERL_AFLAGS="-kernel shell_history enabled"

As an alternative, rlwrap can be used for this (and other programs).
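To make the kernel flag stick across sessions, it can be exported from the shell profile; a minimal sketch (the profile file and the alias name are my own choices, not from this thread):

```shell
# ~/.zshrc or ~/.bashrc: make erl/iex keep command history across sessions,
# using the shell_history kernel parameter mentioned above.
export ERL_AFLAGS="-kernel shell_history enabled"

# Fallback when a terminal still mangles arrow keys: wrap iex in rlwrap,
# which supplies readline-style line editing and history externally.
alias iexw='rlwrap iex'
```

After re-sourcing the profile, plain `iex` picks up the flag and `iexw` gives rlwrap-backed editing.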
STACK_EXCHANGE
January 15th, 2004, 09:53 AM
Quoting a price

When I first started my web design business, I was building your average web site (HTML, DHTML, CSS, etc.). Now that I'm building more and more database-driven web applications, I'm running into the classic "scope creep" problem. My customers are asking for small things that are beyond the scope of the initial quote. By themselves these things are no big deal to add, but together they amount to quite a bit of extra work. I'm sure this is not a new concept for most of you out there, but I think my problem is in my price-quoting procedure. I first meet with a prospective client to go over the basic requirements for their web site. After that initial meeting, I put together a quote and we meet again to discuss it. After the quote is accepted, I draw up a contract based on that quote. The problem is we don't work out the detailed requirements, functionality, and business logic until after the contract is signed, and that's when I realize the quote is too low. Obviously, if there were a formal Request For Proposal I would have much of that information up front. Unfortunately, most of my clients don't provide one. I guess my question is how do I get all the information needed to provide an accurate quote before the contract is signed. I can't imagine a client spending the time necessary to meet with me to go over the detailed requirements and business logic just to get a bid. Does anyone have any suggestions for a better procedure? I would appreciate any advice. Thanks.

January 16th, 2004, 07:09 AM

Is it in the contract that any changes to the original scope of work might affect the price? If this is the case, only provide a rough estimate, request 50% (or whatever you need) down, and let them know that this will be applied to the total amount when done. In the contract, make sure it spells out that $1,000 is an estimate, not the total cost; the $1,000 might be lower or higher as the work goes on.
Plus, before the contract is signed, try to finalize the scope of work. I do one or two of these a week. Once you get more under your belt, you will be better at writing all of this out and will see more additions to the scope as you go along.

January 16th, 2004, 08:43 AM

I do require 50% up front and state that changes and additions may affect the price, so I'm covered legally. But I find it hard to say something is out of scope and charge the client extra when it's usually the business logic, and not additional requirements, that's making things more complicated than I originally estimated. But you're saying I should provide a bid and, after it's accepted by the client, work out the detailed requirements and business logic before drawing up the contract, then base the final price on that. This does make sense. Thanks! Do you find that your clients go along with this process easily? It seems to me that some will think of it as a kind of bait-and-switch tactic, where you give them an estimated price and, when they decide to go with you, there is another step where the price will more than likely increase. Thanks for your help, Corey.

January 16th, 2004, 10:31 AM

Actually, I won't even give them a quote until I start a scope of work, for the reasons you have pointed out. This way it makes them feel special; they feel that they are the only client at that time. Once I get the basics down and go over it with them, I ask them if there is anything else they want the site to do. For example, we mainly build custom shopping carts, so I have a generic one that I send to them with their name. They (75% of the time) e-mail or call me to tell me that is exactly what they want. I verify. They say yes. And I say, "OK, I was only asking because I did not know if you wanted to offer an affiliate program or maybe a coupon/discount program." They usually tell me yes, they would like the coupon program. I then tell them OK, I can add that in. And then I repeat.
Maybe they add something else. I do not want to make them feel stupid, but I want them to think about what they want in their cart. They then usually spend a day or two on the web looking at other carts, e-mail me, and tell me to add in this or that. And then, once it is over, they get a quote. I might have wasted my time, but normally the customers all appreciate the extra step I take to help them understand a bit about their website.

January 19th, 2004, 12:52 PM

I am definitely going to have to agree with Corey here. The customers will notice the extra time you are taking up front, and that will play into their minds big time. I also spend more time up front to try and work out most of the little things before I quote, and definitely before a contract is signed. But there will always be little things that come up.

April 8th, 2004, 01:22 PM
Exploration + project billing

I have taken the approach of giving potential clients 1 hour of my time to discuss their projects. At that point, before they sign a project agreement, I offer them assistance as a consultant in helping them define their project. For the opportunity, I discount my hourly rate by 25%-50% for up to 3 hours' help. During that time I provide high-value advice on the best way to scope and design their project. At the end, *I have written their RFP* for them; they always use my firm for the work, but the agreement reflects the RFP's scope. They are also much more web-savvy by then. This way there are several advantages:
* Client demonstrates good faith and seriousness.
* Developer demonstrates genuine desire to help them succeed.
* Developer gets chance to show off proj. mgmt. skills.
* Developer gets basic expenses met.
* Developer is not taken advantage of.
I have found that if a client *isn't* willing to pay the discounted rate for help in scoping their project, then they are going to be TROUBLE down the road.
April 30th, 2004, 01:24 AM

I am new to doing this as a business, but this is the approach that I take. You might even compose a questionnaire for clients to help them figure out what they need. From that you could prepare your estimate.
OPCFW_CODE
How should the process for mods reopening a question differ?

Recently "When to use photos or illustrations in design?" (image of closed question here for historical reference) was closed by 5 high-rep (for GD.SE) users, including myself, for being too opinion-based and broad for our site. 4 hours later it was reopened by a single mod without any edits, comments, or discussion. Mods, of course, have this ability as part of their powers. When should this power of single-handedly reopening be used, and should there be any procedure for using it? For reference, a comparison for a different way to handle a similar situation is this meta question asked by a mod about the reasoning behind why a particular question was closed.

I'm the one who reopened it and I make no apologies for it. To make things clear, I didn't discuss it with fellow mods or anyone else. None were on at the time, or maybe I would have, but I wasn't going to wait either. The discussions have already been had. Over and over. Related meta discussions on opinion-based questions:
I'm struggling to figure out why people vote the way they do regarding Opinion / Too Broad
Should we consider "how to visually represent..." questions on-topic?
Why is this question about Paper Selections 'too broad'?
A Critique, a Dupe and a Tech Support walk into a bar

So now the answer to your question of when a mod should use their power: when they see fit, as PieBie said. But where PieBie says a discussion should be had and comments left, I'll differ and say a discussion was already had. Furthermore, in the exact example that you mention, not one of the 5 people who voted to close it left any comment for the person who asked about a concern with it. Likewise, on the question 3 weeks ago (What paper should a design print shop have on hand?, which led to Why is this question about Paper Selections 'too broad'?) not one person who voted to close came into meta to discuss it.
But what feedback have we received as mods? During the election, a number of members encouraged the mods to be more assertive with our moderation abilities. Well, I saw no reason for the question to be closed. I saw no effort on the part of those closing it to explain why they voted that way. In fact, we've been digging into close votes and some people don't even seem to read the questions, just mass Open or mass Close (which we're preparing to address on an individual level). We've already had the conversation about what is on-topic and what isn't. Part of my role as a moderator is to oversee those reasons. Where you see it as overstepping, I see it as protecting the years that we've put into this site. Look through the meta: over and over, members have wanted more Why questions, more good subjective questions. There are questions I don't agree with; a lot of the questions about automation I don't really think belong here. But the community does, so I don't vote to close them, I don't hammer them, I don't even downvote them usually. I let the users that want them have them. If you can't bring yourself to do that regarding questions you don't entirely agree with, then I don't think this community is the right place for you. In summary, you're asking why I reopened without discussion. Well, there's been discussion; I linked to a few of them above. I made one of them featured when I reopened it. The discussion's been had. If members choose to ignore those discussions, then that's on them. I'm not going to continue to ask the same questions only to have fellow mods chime in while none of the non-mods comment and little changes. The time for that has long passed. It's been made clear that the community would like us moderators to do more because we don't have enough voters. So I stepped up and did. I appreciate that the other mods are stepping in and trying to justify this, but at the same time, this isn't on them. It's on me.
I did it, and I'll tell you what: I fully intend to do it more regularly. I'm tired of fruitless discussions and watching good questions get closed.

We mods understand that it's no fun to overturn y'all's decisions. Especially if five high-rep users gave their opinions, it kind of defeats the point of the 'self-moderation' of this community. That's why we don't like using this tool often, if at all. It has come to our attention, however, that reviewers regularly vote to close questions that, even with the broadest interpretations, we can't see as being off-topic. Those votes are only rarely accompanied by comments on the why. It is this incomprehension that made me start the meta discussion that Zach links to. These are hardly exceptions; we routinely see baffling close reasons on questions that, although a bit rough, could be diamonds with some effort (no pun intended). We are very concerned about this, especially since it happens so often. We lose potentially good or great content for incomprehensible reasons. Please don't read this as an attempt to turn the blame around. We are honestly puzzled about the, in our eyes, inappropriate close reasons for what we see could be great content. It is the frequency of this happening that caused one of us to reopen this particular question without any comment or fanfare. Honestly, if we had to make a meta post every time this happens, we would have lots of these discussions on meta. That's not good for meta, and it also creates a delay in the reopening of the question, one in which the asker very well may have lost interest in their question.

This doesn't seem to answer my question, which is how this power should be used. It instead seems to focus on the current situation at hand, which I'm not addressing in the question (and is for other meta posts if desired). Maybe it doesn't, but it gives some insight into the how and the why.
Besides, a mod is hardly eligible to actually answer this question, right? That would be hypocritical at the very least. Given it's a discussion, I should hope that the mods provide input on the subject :P True, but as the objects of the discussion, with our behaviour clearly being criticised by it, we shouldn't steer it. @Vincent So you're saying the referenced question is not POB? I admit I am coming from Stack Overflow, but a question asking "When should I use X technique" is asking for opinions. Further, if you are noticing a spate of troubling closures, shouldn't you talk to the users in question and get their input, rather than passive-aggressively (and unilaterally) reverting their consensus decisions? @TylerH Please bear in mind that we aren't Stack Overflow (and we don't want to be either). Because graphic design (the activity) is only part science in addition to part art, we are way more lenient in accepting opinion-based questions. @TylerH: Also, the capability of basing such decisions on something other than opinion is the reason why graphic designers can make a living and are not replaced by random-number generators. @Vincent I understand that, of course. Considering it, however, should the verbiage on the close reason for POB questions perhaps change, then? If it's not an accurate representation of your site's policies, and it's clearly causing confusion/concern here, it should be adjusted. @Wrzlprmft Sure, but it's still an opinion of what technique is better or what picture looks nicer. Any field can fall under that description. Even exact sciences have room for both elegant and clunky solutions. @TylerH If you think so, by all means, write a meta post to that effect. @TylerH Referring to the question that triggered this, it would be wrong to dismiss it as a question of opinion or of what "looks nicer". There are objective reasons behind making a design decision on using pictorial vs illustrative elements in a project.
Applying SO practices to a design stack isn't in any way useful. It's part of the reason why this stack has become overloaded with Tech Support questions, IMO. @Vincent Done: https://graphicdesign.meta.stackexchange.com/questions/3242/should-the-opinion-based-close-reasons-description-be-adjusted-for-graphic-desi

When should this power of single-handedly reopening be used and should there be any procedure along with using it?

Just a few points that come to mind: You can't know if the action was really single-handed. A mod action is by its very nature single-handed (since their hammer is very big); that doesn't mean there was no discussion behind the scenes, or that the other mods (and even other users) don't agree. As to the question of when this power should or could be used: the only correct answer to me seems to be "whenever a mod sees fit". They're elected mods and have this power for a reason. As to whether there should be procedure: I think there already is one. Close votes come in, the question goes to the close-vote queue, mod(s) see this and reopen (hopefully after a discussion). This might not be a procedure you're fine with, but it is a procedure. All that said, I do agree that this particular case could have been handled a bit better. A simple comment stating that the question was reopened after a mod discussion, maybe with a link to this meta, would've made things clearer for all concerned.

Obviously, different questions and situations call for different responses, and determining exactly what should happen is up to the person at the time, but I'd like to see some more transparency from the mods in general. In cases where it's obvious which questions should be reopened (say, a question has been edited well in response to a comment or closure, or there's been discussion by those who closed the question and they agreed on reopening it) mods should be free to reopen a question, no questions asked.
But in cases where it's less clear and cannot be obviously reasoned from the question, revisions, or comments, and it wasn't just the mod closing the question single-handedly in the first place, mods should leave a clarifying comment at the very least, discuss the question in chat, or, if they still don't understand well why a question was closed the way it was, make a meta post about the question's closure before reopening it. That way the community is still involved, and it doesn't appear to the community that the mods are single-handedly moving GD in a certain direction.

In any case, the mods should want the community to agree with them, because that will ensure that moderation is consistent, and people are more likely to want to help moderate if they understand the site's closure policies. Being more transparent would help the mods gain more community support.
STACK_EXCHANGE
WIP: Extended sources in FermipyLike

Not quite ready to merge. Needs https://github.com/threeML/astromodels/pull/146

What works (caveat: still needs more testing):
- Implement extended sources with Disk_on_sphere, Gaussian_on_sphere, or SpatialTemplate_2D morphology for the FermipyLike plugin.
- Allow fitting the flux/spectral shape.
- Allow fitting of extension & source position for the two former shapes.

What doesn't work yet:
- Other morphologies. We could probably implement something that'd convert other 2D/3D functions to fits files so that fermipy/fermi tools can read them back in. That would mean keeping track of the parameter values from the last step so we'd know whether to regenerate the fits files. Might end up in a separate pull request later.
- Getting the morphology/extension information from 4FGL(-DR2). I can't find the extended source table in the virtual observatory. We might have to add the option to read the catalog from disk (fits file) instead, as it is done in fermipy. I'll work on that if necessary.
- Missing equivalent for free_point_sources_within_radius for extended sources.
- Needs unit tests.

Other small changes/additions:
- In free_point_sources_within_radius, keep the pivot energy fixed.

Codecov Report

Merging #443 (23c5732) into master (53d30d5) will decrease coverage by 1.35%. The diff coverage is 35.91%.

@@            Coverage Diff             @@
##           master     #443      +/-   ##
==========================================
- Coverage   73.84%   72.48%   -1.36%
==========================================
  Files         115      115
  Lines       13868    13990     +122
==========================================
- Hits        10241    10141     -100
- Misses       3627     3849     +222

I think this is ready for review now. (Tests are failing because the updates implemented in https://github.com/threeML/astromodels/pull/146 haven't made it to conda yet. We should tag astromodels.)

Codecov Report

Merging #443 (23c5732) into dev (51bfd4e) will decrease coverage by 1.15%. The diff coverage is 35.91%.
:exclamation: Current head 23c5732 differs from pull request most recent head 92563a9. Consider uploading reports for the commit 92563a9 to get more accurate results.

@@            Coverage Diff             @@
##              dev     #443      +/-   ##
==========================================
- Coverage   73.63%   72.48%   -1.16%
==========================================
  Files         117      115       -2
  Lines       14814    13990     -824
==========================================
- Hits        10909    10141     -768
+ Misses       3905     3849     -56
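The caching idea mentioned in the to-do list (regenerate the fits template files only when the morphology parameter values actually changed since the last step) could be sketched like this; the class and parameter names here are hypothetical, not the PR's actual implementation:

```python
# Hypothetical sketch: only rewrite the spatial-template FITS file when the
# morphology parameters changed since the last fit step.
class TemplateCache:
    def __init__(self):
        self._last_params = None

    def needs_regeneration(self, params):
        """params: dict of morphology parameter name -> value."""
        # Sort so the check is insensitive to dict ordering.
        snapshot = tuple(sorted(params.items()))
        if snapshot != self._last_params:
            self._last_params = snapshot
            return True
        return False

cache = TemplateCache()
print(cache.needs_regeneration({"lon0": 120.0, "sigma": 0.3}))  # True (first call)
print(cache.needs_regeneration({"lon0": 120.0, "sigma": 0.3}))  # False (unchanged)
print(cache.needs_regeneration({"lon0": 120.0, "sigma": 0.4}))  # True (changed)
```

The actual plugin would additionally need to write the FITS file and hand its path to fermipy whenever regeneration is flagged.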
GITHUB_ARCHIVE
Kalvi is pioneering a new era of technology education with work-integrated degree programs in partnership with top-ranked universities and leading tech enterprises globally. We have raised a seed round from 30+ super angels and are backed by global tech leaders. We are looking for passionate individuals who can ideate, innovate and resonate with our vision and help Kalvi reach new heights. There are no boundaries we are afraid to explore and no stones we won’t upturn to solve the current skill gap perpetuated by antiquated engineering curriculums. About the Role This is a unique product developer opportunity, where you live on campus with the end user for whom we’re building our product: students. As a developer, you will work with our team and engage with our mission to change the higher education landscape of the country. As a mentor, you are going to be a role model for our future SSE’s, hence we are looking for developers who have at least 2+ years of experience with strong technical expertise as well as strong love for technology and upskilling students. Roles and responsibilities (Development) - Work in a fast paced environment focused on building a product towards achieving top student outcomes - You will own and implement flows on the product (full-stack) - Ideate, design and development of end to end user journeys - Interface with the team regularly and provide on the ground feedback towards product iteration Roles and responsibilities (Students) - You will embed yourself in our end users at our campuses, and use your expertise to prepare and deliver lectures, tutorials and workshops. - Perform student assessments as per lecture and syllabus requirements. - Provide your expertise and guidance to students who are learning development. - About 50% of the time would be spent on development, and 50% on student responsibilities. 
Basic Skill Requirements
● Good knowledge of front-end technologies
● Good knowledge of back-end technologies - Node.js, Express.js, Jest, Mocha
● Good knowledge of databases
  - NoSQL DB - MongoDB
  - Relational DB - Postgres, MySQL
● Good knowledge of DevOps - Building CI-CD pipelines
- Interest in teaching, with the ability to explain complex things in simple language
- Prior experience conducting offline live sessions preferred
- Prior experience leading teams towards project completion
- Willingness to accept feedback
- Excellent communication skills
- Advanced English proficiency

Location: Lovely Professional University, Jalandhar. Relocation to the campus is a requirement for this position.

Perks and Benefits:
- Competitive salary and the opportunity to be part of an impactful movement to transform higher education for the better
- Free accommodation and food when you're on campus
- A challenging role designed to significantly enhance your technical profile and skills
- An awesome work culture that helps you thrive
- Opportunity to be part of a team that puts a high focus on outcome-oriented technical coaching and develops the next generation of CTOs
- Work with industry leaders with experience from companies like Google, SAP, Adobe, etc.

Apply for this position
Login with Google or GitHub to see instructions on how to apply. Your identity will not be revealed to the employer. It is NOT OK for recruiters, HR consultants, and other intermediaries to contact this employer.
OPCFW_CODE
Because ntpd was replaced by chrony in Votiro Cloud v9.6.174, you may need to configure NTP using the steps below.

1. Verify the currently used service/daemon (ntpd or chronyd) for NTP by running the commands below:

   systemctl list-units --type=service -all | grep ntpd
   systemctl list-units --type=service -all | grep chrony

   - If ntpd is disabled and chronyd is used, the command outputs should look like this:
   - If ntpd is active, run the following commands to disable ntpd:

     systemctl stop ntpd.service
     systemctl disable ntpd.service

2. To check if the clock is synchronized, run the following command:

   timedatectl | grep synchronized

   - If synchronized, the command output should display synchronized: yes, as shown:
   - If it's not synchronized, troubleshoot using the following steps:
     - Check the chrony service status by running one of the following commands (the output is the same):

       systemctl status chronyd
       systemctl status chrony.service

     - Start/restart the chrony service/daemon using one of the following commands:

       systemctl restart chronyd
       systemctl restart chrony.service

     - If the service is running, run the following command to verify the synchronization of the local system with the reference server:
     - Run the following command to display information about the current time sources that chronyd is accessing:

       chronyc sources -v

     - To display the information about the drift rate and offset estimation process for each of the sources listed by chronyd, run the following command:
     - To edit the chrony configuration, run the command:

For example, with public servers:

Note: After each action or saved change on the chrony.conf file, a service restart is required.
Troubleshooting Example: NTP not synchronized with external server Although all servers were configured properly, when running the sources command, “last sample” showed a gap of 10.8s between the servers as shown: To resolve this behavior, we added a parameter called “maxdistance” with a value of 15 to mitigate this gap. Root cause: in the "chrony sources" output, "+/- 10.8 s" is larger than the default “maxdistance” of 3 seconds (if not part of the chrony.conf). The maxdistance parameter was added in chrony-2.2, so that's why it worked with chrony-2.1. Older versions only have a hardcoded limit for the root dispersion to be smaller than 16 seconds. The NTP server has a root dispersion of about 3.6 seconds.
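As a sketch, the workaround described above would look like this in /etc/chrony.conf (the server names are illustrative placeholders; the maxdistance value of 15 is the one used in this example):

```
# /etc/chrony.conf (fragment) - server names are illustrative
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst

# Accept sources up to 15 s of root distance instead of the default 3 s,
# so the ~10.8 s gap seen in "chronyc sources" no longer disqualifies them.
maxdistance 15
```

As noted above, restart the chrony service (systemctl restart chronyd) after saving the change.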
OPCFW_CODE
Tuplets in music

Anybody out there familiar with tuplets? I understand the concept behind them, but I've been told that a duplet plays an opposite role from the other types of tuplets (fitting 2 smaller notes in the place of 3, instead of 2 notes in the place of one). Why is this? And are there any other types of tuplets that follow this "opposite" pattern?

There's some terminology confusion here. A tuplet is simply fitting some number of notes in the space of another amount of notes. You can always represent a tuplet as some kind of ratio. A duplet is a specific kind of tuplet where 2 notes take up the space typically given to 3, and is represented as a 2:3 ratio. A triplet is a specific kind of tuplet where 3 notes take up the space typically given to 2, and is represented as a 3:2 ratio. As you can see, the duplet and the triplet are highly related and have their ratios reversed, showing they are complementary operations. There are many (if not infinitely many) tuplets, and if the ratio is reversed, the idea is the same. Here is a picture of a 5:4 tuplet (known as a quintuplet) and a 4:5 tuplet. As you can see, when reversing the tuplet the results are much different.

I've never seen anything like your second bar example. It looks very confusing to me or, at least, far from obvious. Do you know of any mainstream pieces where such a thing has been used? Where a tuplet group replaces a beat (or possibly two beats) a single number may be understandable. The 4:5 example above certainly needs the full ratio to be shown.

The main purpose of tuplets is to fit notes that cannot be represented by a power of two. Duplets are sort of an odd man out here, since they can actually always be written without resorting to tuplets in the power-of-2 based modern notation (mensural notation, in contrast, could have something like three semibreve notes per breve), so their main use is to picture the nature of a strong counterrhythm more vividly.
Apart from duplets, the usual convention is to put more notes in a tuplet than usual, but not more than twice as many. However, this rule becomes shaky for septuplets: some composers prefer writing with a ratio of 7:8 rather than 7:4. When getting into such larger ratios, it is not unusual to indeed write "7:4" (a ratio written with a colon) as the tuplet indication rather than just "7". With more esoteric tuplets, this explicit notation becomes more common.

It's only opposite in that the length relationships are reversed. Instead of (for example) "three in the space of two" one has "two in the space of three." The underlying pulse is the same. For example, in 2/4 time with quarter note = 120, the basic rate is 1 measure (2 quarter notes) per second. A quarter-note triplet would have 3 quarter notes per second for the duration of the triplet notation (assuming three quarter notes per half note). The reverse of the previous procedure would be to write two notes (usually half notes) in the space of three (perhaps in a 3/4 measure). Of course, one could write dotted quarters (or even use tied quarter-eighth and eighth-quarter patterns) to achieve the same note relations. I'd probably do the latter for two in the space of three, but other tuplets (5 in the space of 4 or 2 or 1, or 4 in the space of 3, or the like) are often done with tuplets.
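The duration arithmetic above can be checked with a short script (a sketch; the n:m convention here is "n notes in the time of m base notes"):

```python
def tuplet_note_duration(base_duration, n, m):
    """Duration of one note in an n:m tuplet: n notes share the time of m base notes."""
    return base_duration * m / n

# At quarter note = 120 BPM, one quarter note lasts 0.5 s.
quarter = 60 / 120

# 3:2 triplet: three notes in the time of two quarters -> 3 notes per second.
triplet = tuplet_note_duration(quarter, 3, 2)
print(round(triplet, 4))      # 0.3333 s each
print(round(1 / triplet, 4))  # 3.0 notes per second

# 2:3 duplet reverses the ratio: two notes in the time of three quarters.
duplet = tuplet_note_duration(quarter, 2, 3)
print(duplet)                 # 0.75 s each
```

This matches the prose: the triplet squeezes notes shorter, the duplet stretches them longer, and reversing the ratio inverts the operation.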
STACK_EXCHANGE
GearVR Controller Support?

I know this is not specifically mentioned as supported, but I tried out the GearVR Controller with a GearVR and a Samsung S6. Trying the demo page on this project, I got the following results:

Using the new(ish) built-in Internet browser on Oculus Home - the controller is visible and tracks/points well, but none of the buttons work - I can't interact with the UI elements via clicking.

Using Samsung Internet with WebVR enabled, I get a black screen when loading the page.

Let me know if there is any useful debugging info I can provide for this controller. Cheers! - James

I would love to support the GearVR controller! This may be a bit of a pain, but can you download and open Chrome on your phone, plug your phone into your computer, then connect your phone's open Chrome tab to your computer's like so: https://developers.google.com/web/tools/chrome-devtools/remote-debugging/ Once you do that you can interact with the JavaScript console. Load up the demo site and then in your console paste and enter this to enable full verbose mode:

THREE.VRController.verbosity = 1

Now when you interact with the trackpad or buttons you should get a lot of output in the console. What you need to do is interact with each piece of the controller and see what button index # is receiving that interaction. My guess is the thumbpad has a touch, a press, and axes, and is index #0. See if that's what's showing up in the console and also check out the other buttons. We can use that info to add in explicit support :wink: Try to be thorough and keep notes. It's the subtleties that make all the difference! (Does this button have a press state? A touch state? An analog value - and if so, at what point does a press begin, and is that the same threshold for ending a press? Does +1 on the Y axis mean top or bottom? And so on.)
Here’s what I did as an example: https://github.com/stewdio/THREE.VRController/blob/master/VRController.js#L451 I also just made a super tiny update that should allow your GearVR primary button to work with the example’s "primary press began" event listener even though we haven’t explicitly mapped the buttons yet: https://github.com/stewdio/THREE.VRController/blob/master/VRController.js#L95 If you can, give that a go and tell me if it works. It makes a pretty big assumption that the primary button is going to be at index === 0 in the buttons array but that’s what I’m seeing with the Vive, Oculus, and Daydream ... so I’m hoping that’s the norm! I haven't done the full logging yet, but I can confirm I can now: Open https://stewdio.github.io/THREE.VRController/ on the built in browser Click enter VR The main button (clicking the trackpad) works. I can use it to drag the UI and interact with the elements. The button does not work outside VR mode but this is probably expected. Wow that is awesome! (Thank you for testing it out!) When you say the button does not work outside VR mode, are you able to see the controller at all before entering VR? If you don’t see it at all then I think what might be happening is that prior to calling vrDisplay.requestPresent() the GearVR does not engage its VR internals and connect to the controller. (But this is just a guess! It would be similar to how Daydream seems to operate.) I can see the controller outside the VR mode, but what I am seeing is the GearVR interface and their rendered representation of the controller. The browser is just a floating window inside the GearVR interface/UI. I can point to items in the scene, they highlight the item I am pointing at them (e.g. the dat.guiVR floating menu) but the click is ineffective on the scene - I can't click the elements, drag the UI, etc - when outside VR mode. Within VR mode everything works as expected. I guess the above is not that surprising but just an observation. 
I'll check this out presently. I had to fork this project to start working on GearVR support. It should work like the Daydream one. So this makes the thumbpad the primary, not the trigger.

'Gear VR Controller': {
    style: 'gearvr',

    //  THUMBPAD
    //  Both a 2D trackpad and a button with both touch and press.
    //  The Y-axis is “Regular”.
    //
    //              Top: Y = -1
    //                   ↑
    //    Left: X = -1 ←─┼─→ Right: X = +1
    //                   ↓
    //           Bottom: Y = +1

    axes: [{ name: 'thumbpad', indexes: [ 0, 1 ]}],
    buttons: [ 'thumbpad' ],
    primary: 'thumbpad'
},

@danrossi are you still planning to do a PR to add this support? I added Oculus Go yesterday and can do this as well if you no longer want to do so.

@paulmasson I am using Oculus Go with this controller: https://developer.oculus.com/documentation/unity/latest/concepts/unity-ovrinput/#unity-ovrinput-go-controller However, I cannot find which event is for the touchpad (touch position or swipe action?)

@whatisor if you have the Go connected to your computer and have set THREE.VRController.verbosity = 1 in the Chrome JavaScript console, then you'll see all the events. You're probably looking for thumbpad axes changed.

@paulmasson Thank you, I found it as "axis changed" from the aframe sample.
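For anyone wanting to experiment with the mapping above without a headset, here is a tiny stand-alone sketch (no three.js required; the snapshot object merely mimics the shape of a Gamepad API reading, and the function name is hypothetical):

```javascript
// Hypothetical stand-alone sketch of the 'Gear VR Controller' mapping above.
const gearVrMapping = {
  style: 'gearvr',
  axes: [{ name: 'thumbpad', indexes: [0, 1] }],
  buttons: ['thumbpad'],
  primary: 'thumbpad'
};

// Resolve a raw Gamepad-like snapshot into named controls.
function readControls(mapping, gamepad) {
  const out = {};
  mapping.axes.forEach(a => {
    out[a.name] = a.indexes.map(i => gamepad.axes[i]);
  });
  mapping.buttons.forEach((name, i) => {
    out[name + 'Pressed'] = gamepad.buttons[i].pressed;
  });
  return out;
}

// Pointing at the top of the trackpad (Y = -1) while pressing it:
const snapshot = { axes: [0, -1], buttons: [{ pressed: true }] };
console.log(readControls(gearVrMapping, snapshot));
// -> thumbpad axes [0, -1], thumbpadPressed true
```

This is only a reading of the declared mapping; the real library additionally tracks touch states and emits events on changes.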
GITHUB_ARCHIVE
How to disable debug console output generated by winston elasticstack client and transport? I'm using winston v3.2.1, winston-elasticsearch v0.8.8 and @elastic/elasticsearch v7.6.1 to push log entries for my NodeJS services to an Elastic Search cluster v7.6.2. My logger is constructed as follows (see https://github.com/vanthome/winston-elasticsearch and https://www.npmjs.com/package/@elastic/elasticsearch): import { Client } from '@elastic/elasticsearch'; import winston, { Logger } from 'winston'; import { ElasticsearchTransport } from 'winston-elasticsearch'; import { ELASTIC_HOST, ELASTIC_PORT, LOG_LEVEL } from './environment'; ... this.logger = winston.createLogger({ transports: [ new ElasticsearchTransport({ client: new Client({ node: `http://${ELASTIC_HOST}:${ELASTIC_PORT}` }), index: 'sector', level: LOG_LEVEL // Events I log that are 'info' or worse will be transported. }) ] }); I then log using the logger reference directly: this.logger.info(`Kill signal received: ${signal}`); I can see that my log entries are being pushed to the Elastic Search cluster but the console logs are flooded with debug output from the elasticsearch and winston:elasticsearch loggers. I think they belong to the Elastic Node client and Elasticsearch Transport implementations. 2020-05-12T22:10:10.114Z elasticsearch Nothing to resurrect ... 2020-05-12T22:10:10.116Z winston:elasticsearch starting bulk writer ... 2020-05-12T22:10:18.122Z winston:elasticsearch nothing to flush 2020-05-12T22:10:20.123Z winston:elasticsearch tick 2020-05-12T22:10:20.123Z winston:elasticsearch nothing to flush My services run inside Docker containers and I don't want the Docker logs to flood with debug noise. I tried to set the transport levels to error and even removing the Console transport completely but the debug noise persists. 
I tried the following to suppress the console output, without much luck:

winston.level = 'error';
winston.transports.Console.level = 'error';
winston.transports.Console.silent = true;
this.logger.remove(winston.transports.Console);
winston.remove(winston.transports.Console);

I found a couple of threads on the subject but no luck: https://github.com/winstonjs/winston/issues/175 and "Disable winston logging when running unit tests?" It's almost like the client and transport are using a separate logging mechanism. Any suggestions?

I've confirmed that the output was not generated by winston at all. That's why I couldn't disable it using the approaches I listed above. It looks like the ElasticsearchTransport implementation uses a package called debug. It turns on when a pattern is specified in the DEBUG environment variable. I had it set to DEBUG=* so that I could see Express output. The debug noise stopped when I removed the environment variable. If you see debug noise in your console, check that the debug package is not the cause. Many packages appear to use it.
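Since the noise comes from the debug package's DEBUG environment variable, one option (a sketch; service.js and the express pattern are placeholders) is to narrow the pattern instead of removing it, using debug's comma-separated list where a leading '-' excludes a namespace:

```shell
# Keep Express debug output but exclude the noisy elasticsearch namespaces.
export DEBUG='express:*,-winston:elasticsearch,-elasticsearch'
echo "$DEBUG"
```

Then start the service as usual (e.g. node service.js in the container) and only the remaining namespaces print.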
STACK_EXCHANGE
204 status code not handled properly

An exception is raised when a route returns a response with 204 No Content status.

INFO: <IP_ADDRESS>:58657 - "DELETE / HTTP/1.1" 204 No Content
ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "/private/tmp/starlite-fetch-error/venv/lib/python3.10/site-packages/starlite/app.py", line 149, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/private/tmp/starlite-fetch-error/venv/lib/python3.10/site-packages/starlette/middleware/cors.py", line 84, in __call__
    await self.app(scope, receive, send)
  File "/private/tmp/starlite-fetch-error/venv/lib/python3.10/site-packages/starlite/asgi.py", line 70, in __call__
    await route.handle(scope=scope, receive=receive, send=send)
  File "/private/tmp/starlite-fetch-error/venv/lib/python3.10/site-packages/starlite/routes.py", line 161, in handle
    await response(scope, receive, send)
  File "/private/tmp/starlite-fetch-error/venv/lib/python3.10/site-packages/starlette/responses.py", line 167, in __call__
    await send({"type": "http.response.body", "body": self.body})
  File "/private/tmp/starlite-fetch-error/venv/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 462, in send
    output = self.conn.send(event)
  File "/private/tmp/starlite-fetch-error/venv/lib/python3.10/site-packages/h11/_connection.py", line 510, in send
    data_list = self.send_with_data_passthrough(event)
  File "/private/tmp/starlite-fetch-error/venv/lib/python3.10/site-packages/h11/_connection.py", line 543, in send_with_data_passthrough
    writer(event, data_list.append)
  File "/private/tmp/starlite-fetch-error/venv/lib/python3.10/site-packages/h11/_writers.py", line 65, in __call__
    self.send_data(event.data, write)
  File "/private/tmp/starlite-fetch-error/venv/lib/python3.10/site-packages/h11/_writers.py", line 91, in send_data
    raise LocalProtocolError("Too much data for declared Content-Length")
h11._util.LocalProtocolError: Too much data for declared Content-Length

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/private/tmp/starlite-fetch-error/venv/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 366, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/private/tmp/starlite-fetch-error/venv/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 75, in __call__
    return await self.app(scope, receive, send)
  File "/private/tmp/starlite-fetch-error/venv/lib/python3.10/site-packages/starlite/app.py", line 151, in __call__
    await self.handle_exception(scope=scope, receive=receive, send=send, exc=e)
  File "/private/tmp/starlite-fetch-error/venv/lib/python3.10/site-packages/starlite/app.py", line 165, in handle_exception
    await response(scope=scope, receive=receive, send=send)
  File "/private/tmp/starlite-fetch-error/venv/lib/python3.10/site-packages/starlette/responses.py", line 160, in __call__
    await send(
  File "/private/tmp/starlite-fetch-error/venv/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 452, in send
    raise RuntimeError(msg % message_type)
RuntimeError: Expected ASGI message 'http.response.body', but got 'http.response.start'.

This example fails:

from starlite import CORSConfig, Starlite, delete

@delete()
async def route() -> None:
    return None

app = Starlite(route_handlers=[route], debug=True)

This one does fail too. Additionally, Starlite resolved 204, but there is some content (200):

from starlite import CORSConfig, Starlite, delete

@delete()
async def route() -> str:
    return "Hello!"
app = Starlite(route_handlers=[route], debug=True)

Starlette works fine, so the problem is definitely with Starlite:

from starlette.applications import Starlette
from starlette.responses import PlainTextResponse
from starlette.routing import Route

async def route(request):
    return PlainTextResponse("", status_code=204)

app = Starlette(
    debug=True,
    routes=[Route("/", route, methods=["DELETE"])],
)

> Starlite resolved 204, but there is some content (200):

There's not really any resolving afaik; if you don't specify different, the status of the response is dynamically set in HTTPRouteHandler.__init__():

if status_code:
    self.status_code = status_code
elif isinstance(self.http_method, list):
    self.status_code = HTTP_200_OK
elif self.http_method == HttpMethod.POST:
    self.status_code = HTTP_201_CREATED
elif self.http_method == HttpMethod.DELETE:
    self.status_code = HTTP_204_NO_CONTENT

So you'd need @delete(status_code=200).

As far as the empty response goes, I've added a failing test here: https://github.com/starlite-api/starlite/compare/main...issue-154-204-none-response

Inspecting the response in the test shows that response.content is b'null', which just smells to me like the None response from the handler is getting handed straight to orjson.dump(). null is a valid json response, but I think it's pretty safe to assume that if a handler is configured to return a 204 and the handler returns None, the developer wants that to be an empty response.

> So you'd need @delete(status_code=200).

Maybe 204 status should be resolved based on SignatureModel? If the return type annotation is None, then set 204?

> Maybe 204 status should be resolved based on SignatureModel? If the return type annotation is None, then set 204?

No, these are API semantics we can't enforce.

What about defaulting to 200? Let the developer decide if 204 should be returned.
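The defaulting logic quoted above can be distilled into a stand-alone sketch (constants inlined, the function name is hypothetical; this mirrors, not reproduces, the HTTPRouteHandler.__init__ branch):

```python
HTTP_200_OK, HTTP_201_CREATED, HTTP_204_NO_CONTENT = 200, 201, 204

def resolve_status(http_method, status_code=None):
    """Sketch of the default-status branching quoted above."""
    if status_code:
        return status_code           # explicit status always wins
    if isinstance(http_method, list):
        return HTTP_200_OK           # multiple verbs -> generic 200
    if http_method == "POST":
        return HTTP_201_CREATED
    if http_method == "DELETE":
        return HTTP_204_NO_CONTENT
    return HTTP_200_OK

# An explicit status_code overrides the DELETE default:
print(resolve_status("DELETE"))       # 204
print(resolve_status("DELETE", 200))  # 200
```

This makes the workaround concrete: a @delete handler that returns a body needs the explicit status_code=200 to escape the 204 default.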
What if we allow the Starlite constructor to receive a mapping of http verbs to response status, and fall back to the current behavior if not provided? So you could do:

Starlite(..., default_response_codes={"DELETE": HTTP_204_NO_CONTENT})

We return the deleted object from DELETE as sometimes it can be useful, so I've wrapped starlite.delete locally to override that to be 200 in all cases. Having a way to define those defaults at the application level seems logical.

This is really a different issue though; want to open a new one?

It is not really useful for me. The main issue is handling None with 204.

OK, well, that's already sorted and will be in the next release.
GITHUB_ARCHIVE
about: killAllOnCompletedTaskNumber has been obsoleted

Hi, I am trying to submit a job via the web portal. The json file is cifar10.json from "Quick start: how to write and submit a CIFAR-10 job". However, I got an error: "killAllOnCompletedTaskNumber has been obsoleted, please use minFailedTaskCount and minSucceededTaskCount instead." I wonder how I can fix the problem? Thank you!

killAllOnCompletedTaskNumber has been removed from the current master branch's [Quick Start section](https://github.com/Microsoft/pai/tree/master/examples#quickstart); feel free to update yours and give us more feedback. Ping @YanjieGao for a possible doc issue.

If you deploy the latest bits from the master branch, killAllOnCompletedTaskNumber has been obsoleted. If you use our stable release 0.6.y, etc., it has not yet. We suggest you deploy our stable release for the moment.

The current branch doc and example don't show killAllOnCompletedTaskNumber to the user; I guess @boozyguo wrote killAllOnCompletedTaskNumber directly in the job json file?

@YanjieGao, thank you for the help. I use the json file from "Quick start: how to write and submit a CIFAR-10 job". The json file content is below:

{
  "jobName": "tensorflow-cifar10",
  "image": "openpai/pai.example.tensorflow",
  "dataDir": "/tmp/data",
  "outputDir": "/tmp/output",
  "taskRoles": [
    {
      "name": "cifar_train",
      "taskNumber": 1,
      "cpuNumber": 8,
      "memoryMB": 32768,
      "gpuNumber": 1,
      "command": "git clone https://github.com/tensorflow/models && cd models/research/slim && python download_and_convert_data.py --dataset_name=cifar10 --dataset_dir=$PAI_DATA_DIR && python train_image_classifier.py --batch_size=64 --model_name=inception_v3 --dataset_name=cifar10 --dataset_split_name=train --dataset_dir=$PAI_DATA_DIR --train_dir=$PAI_OUTPUT_DIR"
    }
  ]
}

There is no killAllOnCompletedTaskNumber in the json file. However, the "submit job" web portal has the killAllOnCompletedTaskNumber field.

@boozyguo you could refer to fan's answer.
use the stable release of pai

Hi @hao1939, the dev-box docker uses the master branch for deployment, and this may lead customers to use an unstable version. Would it be better for each release to use the specific release version of PAI?

![image](https://user-images.githubusercontent.com/5576848/44566584-ed248680-a7a0-11e8-9efa-f01a0d655a14.png)

@boozyguo please go to https://github.com/Microsoft/pai/releases to check out the latest release. Thank you.

@fanyangCS @YanjieGao So, I should download PAI v0.6.1 in dev-box, then prepare quick-start.yaml and run paictl.py?

Hi @boozyguo,

You should match the image tags and code branch. In case you are using the currently latest release v0.6.1:
- In dev-box, make sure to check out the code with tag v0.6.1
- In the config file services-configuration.yaml, set the docker-tag: v0.6.1

If you want to update the config, we recommend the following steps:
- In dev-box, check out the tag v0.6.1
- Generate the config (make sure to generate it in another directory, so it won't override the previous config)
- In the config file services-configuration.yaml, set the docker-tag: v0.6.1
- Stop the old installation, using paictl.py service stop ...
- Start with the new config, using paictl.py service start ...

That's all.

Hi @boozyguo,

You should use python paictl.py cluster generate-configuration .... There are breaking changes after v0.6.1, so please refer to the doc under the branch pai-0.6.y: https://github.com/Microsoft/pai/blob/pai-0.6.y/pai-management/doc/single-box-deployment.md

thank you @hao1939, I have tried:
https://github.com/Microsoft/pai/blob/pai-0.6.y/pai-management/doc/single-box-deployment.md
https://github.com/Microsoft/pai/blob/pai-0.6.y/pai-management/doc/cluster-bootup.md

However, I got another error; the deployment failed.

.........
2018-08-24 13:15:35,871 [ERROR] - k8sPaiLibrary.maintainlib.common : There will be a delay after installing, please wait.
Segmentation fault
2018-08-24 13:15:35,874 [ERROR] - k8sPaiLibrary.maintainlib.common : There will be a delay after installing, please wait.
Segmentation fault
2018-08-24 13:15:35,878 [ERROR] - k8sPaiLibrary.maintainlib.common : There will be a delay after installing, please wait.
Segmentation fault
2018-08-24 13:15:35,880 [ERROR] - k8sPaiLibrary.maintainlib.common : There will be a delay after installing, please wait.
Segmentation fault
2018-08-24 13:15:35,884 [ERROR] - k8sPaiLibrary.maintainlib.common : There will be a delay after installing, please wait.
Segmentation fault
..............

Hi @boozyguo,

The log is somewhat misleading; it's actually waiting for the pod to be ready. Could you check the k8s dashboard at the same time? It's at http://your_master_ip:9090.

Hello, @hao1939. I have tried other versions of PAI, but there were always some network problems (could not pull images from gcr.io or docker.io). So I chose the latest version. On the "submit job" webpage, I found a label named "Properties". When clicking it, there are some items including "killAllOnCompletedTaskNumber". After unchecking the "killAllOnCompletedTaskNumber" item, the job could be submitted successfully. I think there is a bug between "webportal" and "api-server" or other parts of PAI; they have different versions. Thank you for the help.

Hi @boozyguo,

You are right, we fixed the network problem in release v0.6.1, because in some regions gcr.io is not reachable. 'api-server' and 'webportal' do have different versions. And you can check the image tags on the k8s dashboard. In your case, it should be docker.io/openpai/rest-server:latest.

Hi @boozyguo,

I will close it for now; if you have more questions, feel free to reopen it.

hello, @hao1939, thank you for your help. I found the way to deal with the problem.
Selecting Chinese only, Japanese only and Korean only records in mysql/php Is there a way to select in mysql words that are only Chinese, only Japanese and only Korean? In English it can be done by: SELECT * FROM table WHERE field REGEXP '[a-zA-Z0-9]' or even a "dirty" solution like: SELECT * FROM table WHERE field > "0" AND field <"ZZZZZZZZ" Is there a similar solution for eastern languages / CJK characters? I understand that Chinese and Japanese share characters, so there is a chance that Japanese words using these characters will be mistaken for Chinese words. I guess those words would not be filtered. The words are stored in a utf-8 string field. If this cannot be done in mysql, can it be done in PHP? Thanks! :) edit 1: The data does not include in which language the string is, therefore I cannot filter by another field. edit 2: using a translator api like bing's (google is closing their translator api) is an interesting idea, but I was hoping for a faster regex-style solution. 1) Transform your string into raw codepoints (e.g. UCS-4). 2) Check each character to see if it's within your desired range. For CJK glyphs you may be lucky and they actually form one contiguous range (or at least only a handful). This is similar, but not identical to, http://stackoverflow.com/questions/1441562/detect-language-from-string-in-php Searching for a UTF-8 range of characters is not directly supported in MySQL regexp. See the MySQL reference for regexp where it states: Warning The REGEXP and RLIKE operators work in byte-wise fashion, so they are not multi-byte safe and may produce unexpected results with multi-byte character sets. Fortunately in PHP you can build such a regexp, e.g. with /[\x{1234}-\x{5678}]*/u (note the u at the end of the regexp). You therefore need to find the appropriate ranges for your different languages. Using the Unicode code charts will enable you to pick the appropriate script for the language (although not directly the language itself).
A regular expression alone may prove to be remarkably ineffective given that the characters used are very similar. I think you would need to use, as a minimum, some sort of statistics. @Arafangion - Hangul characters are used only by Korean, and Katakana characters only by Japanese. The only potential ambiguity is with the Chinese characters, where, admittedly, some second-order check might be required. @Arafangion - Indeed, but then as mentioned in my response, this enables the picking of the script and not directly the language. It may not be applicable as an entire solution - depending on the (unspecified) nature of the original poster's data and accuracy requirements. You're correct there, so your answer may well be the solution that the OP picks, much to my chagrin. @borrible - accuracy requirements can be 80%-90%. Can the problem be solved with regex at such accuracy? @user831405 - It will depend on your data. If, for example, your Korean text only ever uses Hangul then this will give you very accurate results for detecting Korean. You can't do this from the character set alone - especially in modern times where Asian texts are frequently "romanized", that is, written with the Roman script. That said, if you merely want to select texts that are superficially 'Asian', there are ways of doing that depending on just how complicated you want to be and how accurate you need to be. But honestly, I suggest that you add a new "language" field to your database and ensure that it's populated correctly. That said, here are some useful links you may be interested in: Detect language from string in PHP http://en.wikipedia.org/wiki/Hidden_Markov_model The latter is relatively complex to implement, but yields a much better result. Alternatively, I believe that google has an (online) API that will allow you to detect AND translate a language.
An interesting paper that should demonstrate the futility of this exercise is: http://xldb.lasige.di.fc.ul.pt/xldb/publications/ngram-article.pdf Finally, you ask: If this can't be done in mysql - how can it be done in PHP? It will likely be much easier to do this in PHP because you are more able to perform mathematical analysis on the language string in question, although you'll probably want to feed the results back into the database as a kludgy way of caching the results for performance reasons. You may consider another data structure that contains the words and/or characters, and the language you want to associate them with. The 'normal' ASCII characters will associate with many more languages than just English, for instance, just as other characters may associate with more than just Chinese. Korean mostly uses its own alphabet called Hangul. Occasionally there will be some Han characters thrown in. Japanese uses three writing systems combined. Of these, Katakana and Hiragana are unique to Japanese and thus are hardly ever used in Korean or Chinese text. Japanese and Chinese both use Han characters, though, which means the same Unicode range(s), so there is no simple way to differentiate them based on character ranges alone! There are some heuristics, though. Mainland China uses simplified characters, many of which are unique and thus are hardly ever used in Japanese or Korean text. Japan also simplified a small number of common characters, many of which are unique and thus will hardly ever be used in Chinese or Korean text. But there are certainly plenty of occasions where the same strings of characters are valid as both Japanese and Chinese, especially in the case of very short strings. One method that will work with all text is to look at groups of characters. This means n-grams and probably Markov models, as Arafangion mentions in their answer. But be aware that even this is not foolproof in the case of very short strings!
And of course none of this is going to be implemented in any database software so you will have to do it in your programming language.
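Since MySQL's byte-wise REGEXP can't do this reliably, the script-range check described in the answers is easiest in application code. Here is a minimal sketch in Python (the same Unicode ranges work in PHP's `/u`-mode `preg_match`); note that, as the answers stress, it detects scripts rather than languages, so Han-only text stays ambiguous between Chinese and Japanese:

```python
import re

# Core Unicode blocks per script (assumption: the common blocks are
# enough; rare extension characters fall outside these ranges).
HIRAGANA = "\u3040-\u309f"
KATAKANA = "\u30a0-\u30ff"
HANGUL   = "\uac00-\ud7af"   # Hangul syllables
HAN      = "\u4e00-\u9fff"   # CJK Unified Ideographs

def detect_script(text):
    """Classify a string by the scripts it contains."""
    if re.search(f"[{HIRAGANA}{KATAKANA}]", text):
        return "japanese"             # kana appears only in Japanese
    if re.search(f"[{HANGUL}]", text):
        return "korean"               # Hangul appears only in Korean
    if re.search(f"[{HAN}]", text):
        return "chinese-or-japanese"  # Han alone is ambiguous
    return "other"
```

The check order matters: kana first, because Japanese text typically mixes kana with Han characters.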
Engineering teams are the backbone of any software development project. They are responsible for designing, building, testing, and deploying high-quality software solutions that meet the customers' and stakeholders' needs and expectations. However, not all engineering teams are equally effective and productive. Some teams may suffer from bad practices that hinder their performance and compromise results. In this blog post, we will discuss some of the common bad practices of an engineering team and how to solve them. Anti-Pattern #1: Lack of clear goals and priorities One of the most important factors for a successful engineering team is to have clear and shared goals and priorities. Without them, the team may waste time and resources on irrelevant or low-value tasks, lose focus and direction, and fail to deliver what the customers want. To avoid this, the team should: - Define the project's scope, objectives, and success criteria in collaboration with the customers and stakeholders. - Break down the project into manageable tasks and assign them to different team members according to their skills and availability. - Use a project management tool or a kanban board to track the progress and status of each task and identify any dependencies or blockers. - Communicate regularly with the customers and stakeholders to get feedback, validate assumptions, and adjust the goals and priorities as needed. For example, a team working on a web application for an e-commerce platform should have a clear vision of what features and functionalities they need to deliver. Additionally, they should know how to measure their success and what deadlines and milestones they must meet. Clear division of labour among the front-end developers, back-end developers, testers, designers, etc., should also be practised. Tools like Jira or Trello can organize tasks and monitor their progress. 
They should also have frequent meetings with the client or the product owner to ensure they are on the same page and address any issues or changes. Anti-Pattern #2: Poor communication and collaboration Another essential factor for a successful engineering team is effective communication and collaboration. Without them, the team may suffer from misunderstandings, conflicts, duplication of work, missed deadlines, and low-quality outputs. To avoid this, the team should: - Establish clear roles and responsibilities for each team member and respect their autonomy and expertise. - Use a common language and terminology to avoid confusion and ambiguity. - Use a variety of communication channels and tools to share information, ideas, opinions, and feedback in a timely and transparent manner. - Hold regular meetings to discuss the project status, issues, challenges, and solutions. - Foster a culture of trust, respect, and support among the team members. For example, a team working on a mobile application for a social media platform should clearly understand who is in charge of what aspect of the project and what their expectations and deliverables are. They should also use a consistent naming convention for their variables, functions, classes, etc. Tools like Slack or MS Teams can be used to communicate with each other asynchronously or synchronously. Daily stand-ups or scrums can also be done regularly to share the team members' updates, challenges, and plans. Team members should also be regularly encouraged to ask questions, give feedback, and offer help to one another. Anti-Pattern #3: Lack of testing and quality assurance Another crucial factor for a successful engineering team is to have rigorous testing and quality assurance. Without them, the team may deliver software solutions that are buggy, insecure, unreliable, or incompatible with the customers' requirements or expectations.
To avoid this, the team should: - Adopt a test-driven development (TDD) approach that involves writing tests before writing code. - Use a continuous integration (CI) tool that automatically builds and tests the code whenever a change occurs. - Use a continuous delivery (CD) tool that automatically deploys the code to a staging or production environment after passing the tests. - Use a code review tool that allows the team members to review each other's code and provide constructive feedback. - Use a bug-tracking tool that allows the team members to report, track, and resolve any defects or errors in the code. For example, a team working on an API for a banking system should have a comprehensive suite of unit tests, integration tests, and end-to-end tests that cover all the possible scenarios and edge cases. Several tools can be used in this scenario: - Jenkins/GitHub Actions - to automate their build and test processes - Heroku/AWS - to deploy their code to different environments and ensure it works as expected - GitHub/Bitbucket - to review each other's code and suggest improvements or fixes - Jira/Bugzilla - to manage their bug reports and resolutions Anti-Pattern #4: Resistance to change and learning Another important factor for a successful engineering team is the willingness to change and learn. Without them, the team may become stagnant, outdated, or irrelevant in a fast-paced and competitive industry. To avoid this, the team should: - Embrace agile methodologies that promote iterative development cycles, frequent feedback loops, and adaptive planning. - Experiment with new technologies, tools, frameworks, or methodologies that can improve the efficiency or quality of their work. - Seek customer, stakeholder, peer, or mentor feedback on improving their skills or performance. - Invest in continuous learning opportunities such as online courses, workshops, conferences, books, blogs, podcasts, etc.
For example, a team working on an AI model for image recognition should follow an agile approach that allows them to deliver incremental value and respond to changing requirements or feedback. In addition, it is recommended that they explore novel techniques or libraries that can improve the precision or speed of their model. They should also solicit feedback from experts or users on how to refine their model's performance or functionality and dedicate time to learn new skills or concepts that can help them grow as engineers. Engineering teams are vital for delivering successful software solutions. However, they may face challenges or difficulties due to bad practices, affecting their performance or results. Engineering teams can improve their productivity, quality, and satisfaction by identifying these bad practices and applying some of the suggested solutions above. Hope you enjoyed this read. Subscribe for more!
Using SQL Server 7 Web Assistant to Improve Performance of ASP Pages By Venkatraman Ambethkar The SQL Server 7 Web Assistant feature can be used to generate HTML pages which can be published directly on the web. There is a wizard to do this task in SQL Server 7. The advantages are: 1. No need to open a connection to the database for each and every request. 2. No need for any ASP code at all (no ADO object creation, looping through the recordset, etc.). 3. The displayed content is automatically updated when the underlying table's data changes. Let's take the common problem of generating dynamic combo boxes and use the pubs database and its jobs table for our example. Imagine that we are displaying the job_desc column in a combo box. The following steps can be taken to generate a snapshot of the data in the jobs table and display our combo box. Step 1: Creating the template file A template file is nothing but a text file in which we specify the format the SQL Server Web Assistant should follow while generating the HTML file. Open your text editor and type in the following few lines (or just copy and paste this): The HTML code between the <%begindetail%> and <%enddetail%> markers is repeated for each row of the recordset in the output file generated by the Web Assistant. The first <%insert_data_here%> marks where the first field of the recordset will be inserted, and so on. Save the file as template.tpl in your virtual directory. (Not necessarily in the virtual directory; it can be in any directory accessible to SQL Server.) Step 2: Creating the job using the SQL Server 7 Web Assistant Wizard Open your SQL Server 7 Enterprise Manager, expand your server and locate "Web Publishing" under Management. Right-click on it and you should get New Web Assistant Job.... Select it, and the wizard walks you through the rest of the process. The following steps will serve as a guide... (View a screenshot of the SQL Server 7 Web Assistant Wizard.) 1.
Click Next on the opening window. 2. Select the database which you want to use. Select pubs for our example. (Click next.) 3. Give a name for the job. (This is the name of the task through which SQL Server is going to manage the whole process.) By default, the "Data from the tables and columns that I select" option has been selected. Leave it as it is. Click next. 4. Select the jobs table from the drop down, and select job_desc from the left hand list. We are going to use job_id in the value of the option element, which you can find out from the template file which we created earlier. Click next. 5. The "All of the rows" option has been selected by default. Leave it as it is. Click next. 6. In the "Schedule the Web Assistant Job" window, select "When the SQL server data changes". This also selects an option at the bottom, "Generate a web page when the wizard is completed". Leave it as it is. Click next. 7. In the "Monitor a Table and Columns" window, select the jobs table and add the columns we are displaying in the final drop down box. This makes SQL Server create insert, update and delete triggers on the jobs table which will fire the process of generating the HTML file only when the values in these fields change. If unencrypted triggers already exist on the jobs table, these triggers will get appended at the end. If there is an encrypted trigger, the wizard will fail. Click next. 8. In the "Publish the web page" window, browse to the directory which has been configured as the virtual directory, and type in a file name to which the output will be written by SQL Server. Let's name it output.html. Click next. 9. In the "Format the Web Page" window, check the "No, use the template file from" option and browse to give the path where we saved our template.tpl. Leave the "Use character set" drop down as Unicode (UTF-8). Click next. 10. In the "Limit Rows" window, leave the existing default options "No, return all rows of data" and "No, put all data in one scrolling page" as they are.
Click next. 11. In this final window, click "Write Transact-SQL to file..." to save the stored procedure which the Web Assistant has generated based on the inputs you gave in the previous windows. Running this query against the database creates the job in one go, which is useful for creating the job from an application like ASP. 12. Click Finish. You should get a message box saying the job was created successfully. In Part 2 we'll continue our discussion on using the SQL Server 7 Web Assistant...
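The template listing referred to in Step 1 did not survive in this copy of the article. A minimal sketch consistent with the description (assuming job_id is the first selected column, used as the option value, and job_desc the second, used as the display text) might look like:

```html
<HTML>
<BODY>
<SELECT NAME="jobs">
<%begindetail%>
<OPTION VALUE="<%insert_data_here%>"><%insert_data_here%></OPTION>
<%enddetail%>
</SELECT>
</BODY>
</HTML>
```

The portion between <%begindetail%> and <%enddetail%> is repeated once per row, with each <%insert_data_here%> replaced by the next column of that row.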
Installation issue "pip install cx_Oracle-7.1.3-cp37-cp37m-win_amd64" I want to install the cx_Oracle package so that I can have a connection between my Oracle XE 18c database (via SQL Developer) and Python (via Spyder), so that I can move my bulky data directly from a .csv file to an Oracle table in the database. All I want is to install the cx_Oracle package for Python 3.7 so that I can get the Oracle database connected with Python. I already know the code for importing the cx_Oracle package and establishing a connection in Python. Installing with "pip install cx_Oracle-7.1.3-cp37-cp37m-win_amd64" produces an error. I have installed pip and tried installing the package, and it got installed, using the command below: "pip install cx_Oracle" Then I downloaded the cx_Oracle wheel (cx_Oracle-7.1.3-cp37-cp37m-win_amd64.whl) and executed the command below in CMD in admin mode, which prompts an error: "pip install cx_Oracle-7.1.3-cp37-cp37m-win_amd64" Command 1: "pip install cx_Oracle" Command 2: "pip install cx_Oracle-7.1.3-cp37-cp37m-win_amd64" The error is from cmd, so I am unable to paste it here, but I have a jpg file of it. You didn't include the error or the jpg. The error is in a jpeg file which I am unable to upload; could you please help me do that? I don't think Oracle libraries are in PyPI (default). You may need to download their tar ball and install it as pip install xyz.tar. That is what I am doing, but it is giving me an error in cmd which I am unable to paste; it would have to be written out manually. Oracle libraries cannot be installed with pip. Since your data is in a csv file, you should use Oracle's SQL*Loader utility (which is available in Instant Client) or external tables. This will be more efficient than using Python.
SQL*Loader documentation is https://docs.oracle.com/en/database/oracle/oracle-database/18/sutil/oracle-sql-loader.html#GUID-8D037494-07FA-4226-B507-E1B2ED10C144 If you want Python for something else, then check the cx_Oracle installation instructions. I only want to move the csv file data to an Oracle database table using Python. I understand the desire to work in a familiar language. However, unless you need to use Python to transform the data, it's worth thinking about SQL*Loader or external tables. These will load big sets of data faster than Python.
Merge sort is another comparison based sorting algorithm. It is also a divide and conquer algorithm, which means the problem is split into multiple simpler problems and the final solution is obtained by putting together all the partial solutions. How it works: The algorithm divides the unsorted list into two sub-lists of about half the size. Then it sorts each sub-list recursively by re-applying merge sort, and finally merges the two sub-lists into one sorted list. Having the following list, let's try to use merge sort to arrange the numbers from lowest to greatest: Unsorted list: 50, 81, 56, 32, 44, 17, 99 Divide the list in two: the first list is 50, 81, 56, 32 and the second is 44, 17, 99. Divide the first list in two again, which results in: 50, 81 and 56, 32. Divide one last time, which results in the single elements 50 and 81. The sub-list holding 50 has just one element, so you could say it is already sorted; 81 is likewise a single element, so it is already sorted. Now it is time to merge the elements together: 50 with 81, and they are already in the proper order. The other small list, 56, 32, is divided in two, each part with only one element. Then the elements are merged together, but the proper order is 32, 56, so these two elements are swapped. Next, all these 4 elements are brought together to be merged: 50, 81 and 32, 56. At first, 50 is compared to 32 and is greater, so in the next list 32 is the first element: 32 * * *. Then 50 is compared to 56 and is smaller, so the next element is 50: 32 50 * *. The next element is 81, which is compared to 56, and being greater, 56 comes before it: 32 50 56 *, and the last element is 81, so the sorted list is 32 50 56 81. We do the same thing for the other list, 44, 17, 99, and after merging, the sorted list will be: 17, 44, 99. The final two sub-lists are merged in the same way: 32 is compared to 17, so the latter comes first: 17 * * * * * *. Next, 32 is compared to 44 and is smaller, so it ends up looking like this: 17 32 * * * * *.
This continues, and in the end the list will be sorted. The MergeSort procedure recursively splits the list into two smaller sub-lists. The merge procedure puts the sub-lists back together, and at the same time it sorts them into the proper order, just like in the example above. Merge sort guarantees O(n*log(n)) complexity because it always splits the work in half. In order to understand how we derive this time complexity for merge sort, consider the two factors involved: the number of recursive calls, and the time taken to merge each list together. Since the list is halved at every level of recursion, there are O(log(n)) levels, and merging at each level touches all n elements, giving O(n*log(n)) overall. Merge sort is a stable algorithm that performs even faster than heap sort on large data sets. Also, due to its divide-and-conquer approach, merge sort parallelizes well. The only drawback could be the use of recursion and the auxiliary lists, which can be restrictive on machines with limited memory.
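The procedure described above translates almost line for line into code. A sketch in Python (illustrative, since the text itself shows no code):

```python
def merge_sort(items):
    """Recursively split the list, then merge the sorted halves."""
    if len(items) <= 1:
        return items                      # one element is already sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    return merge(left, right)

def merge(left, right):
    """Merge two sorted lists by repeatedly taking the smaller head."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:           # <= keeps equal keys stable
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])               # append whichever half remains
    result.extend(right[j:])
    return result
```

Running `merge_sort([50, 81, 56, 32, 44, 17, 99])` reproduces the walkthrough above, yielding `[17, 32, 44, 50, 56, 81, 99]`.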
module Temporality # = Temporal associations # # This module overrides +ActiveRecord::Base.belongs_to+ with the ability to # specifiy temporality options on the association. # # == Inverse associations # # It is necessary that ActiveRecord knows the inverse association on the # +has_many+ side. If it isn't inferred automatically you must specify it # using the +:inverse_of+ option on the +has_many+ declaration. # # @todo Use class-inheritable instance variables # module Associations # The default temporality options DEFAULTS = { inclusion: true, completeness: false, prevent_overlap: false, auto_close: false }.freeze def belongs_to(*args, &block) @temporality ||= {} assoc_name = args.first if args.last.is_a?(Hash) if opts = args.last.delete(:temporality) opts.keys.each do |key| unless DEFAULTS.keys.include?(key) raise "Unknown option '#{key}', valid options are #{DEFAULTS.keys.map(&:to_s).join(', ')}" end end @temporality[assoc_name] = with_implied_options(DEFAULTS.merge(opts)) end end super(*args, &block) end private # # Sets options implied by other options as follows: # # - +:auto_close+ implies +:completeness+ # - +:completeness+ implies +:prevent_overlap+ # - +:completeness+ implies +:inclusion+ # # @param opts [Hash] The options hash # @return [Hash] The options with implied options set # def with_implied_options(opts) res = opts.dup res[:completeness] ||= res[:auto_close] res[:prevent_overlap] ||= res[:completeness] res[:inclusion] ||= res[:completeness] res end end end
Georeferencing a whole heap of raster imagery I have a whole heap of satellite imagery coming in that is not georeferenced. The images have a grid overlaid, so I do know the bounding box/coordinates of the images (and the projection). I just need them georeferenced for use in ArcGIS. I have manually done a few by referencing a point dataset I traced over the top - and it works great. I just need to automate this process. Each image is exactly the same, so the same points overlaid will always match up (and even if they don't, the application of this does not need to be accurate). Basically I need to take the 'link table' information from the example I've already done and apply it to all the images... Is this possible in Python/ModelBuilder? I'm sure there are more ways of doing this, but we use GDAL's utility program gdal_translate to georeference our PNG images via script on a Linux machine. So, I first retrieved the georeference info from the original data (it's GRIB) used to make the images. Then we set up a script (Linux machine) with that information and used it in gdal_translate's options to georeference all the PNG images in a directory. Works pretty quickly for our purposes. I don't have firsthand experience, but I did catch part of the presentation below at a GIS conference in April. Perhaps contact the presenters; it might help the brainstorm brew. Or perhaps it will stir up more ideas here? Abstract In Kansas, beginning in the 1850s, the U.S. General Land Office (GLO) commissioned teams of surveyors to conduct transect surveys along all section lines in the state. For each township a plat map was produced that showed forest cover, streams, trails, and other significant features on the landscape, along with corresponding survey notes of additional feature information. This talk will outline the methods that are being used to georeference and digitize forest cover for over 2000 township survey maps for Kansas.
In particular, we will focus our presentation on innovative automation procedures that were developed using eCognition, ArcGIS, Python, and MATLAB. This work is made possible through funding from the Kansas Department of Wildlife and Parks - US Fish and Wildlife Service, the State of Kansas GIS Policy Board, KansasView/AmericaView, and the Kansas Biological Survey. Planned and potential applications of these data will also be presented. 1) Integrating GEOBIA and GIS for Automated Georeferencing of 1850s General Land Office Survey Maps (30 Min) Kevin Dobbs Kansas Biological Survey Lawrence, KS Other Presenters: Ryan Surface, KARS; Stephen Egbert, KARS; William Busby, KARS; View Abstract http://www.magicgis.org/magic/symposiums/2012/view_abstract.cfm?pres_id=380 You can use the GDAL tools on Windows to accomplish this with a tiny bit of scripting. You need gdal_translate to add the GCPs to the image and the gdalwarp utility to convert it to a georeferenced image using those GCPs. You can download the OSGeo installer for Windows from http://trac.osgeo.org/osgeo4w/ which will give you a Linux-like shell (MSYS) with all the GDAL tools.
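The two-step gdal_translate/gdalwarp recipe from the answers can be scripted over a whole directory. A sketch in Python (the directory, pixel/map coordinates, and EPSG code are placeholders; the same GCPs are reused for every image since, as the question notes, the grid is identical on each one):

```python
import glob
import subprocess

# Ground control points as (pixel, line, easting, northing) tuples.
# Placeholder values: take them from the grid corners you already
# matched up in the manually georeferenced example.
GCPS = [
    (0,    0,    300000, 6200000),
    (1024, 0,    310000, 6200000),
    (0,    1024, 300000, 6190000),
    (1024, 1024, 310000, 6190000),
]

def gcp_args(gcps):
    """Flatten GCP tuples into gdal_translate's repeated -gcp options."""
    args = []
    for pixel, line, x, y in gcps:
        args += ["-gcp", str(pixel), str(line), str(x), str(y)]
    return args

def georeference(src, dst, epsg="EPSG:28355"):
    """Attach the GCPs with gdal_translate, then rectify with gdalwarp."""
    tmp = src + ".gcp.tif"
    subprocess.check_call(["gdal_translate"] + gcp_args(GCPS) + [src, tmp])
    subprocess.check_call(["gdalwarp", "-t_srs", epsg, tmp, dst])

if __name__ == "__main__":
    for src in glob.glob("rasters/*.png"):
        georeference(src, src.replace(".png", "_geo.tif"))
```

The same loop could equally call arcpy or ModelBuilder tools; the GDAL utilities just make the batch step a plain subprocess call.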
Fio was originally written to save me the hassle of writing special test case programs when I wanted to test a specific workload, either for performance reasons or to find/reproduce a bug. The process of writing such a test app can be tiresome, especially if you have to do it often. Hence I needed a tool that would be able to simulate a given I/O workload without resorting to writing a tailored test case again and again. A test workload is difficult to define, though. There can be any number of processes or threads involved, and they can each be using their own way of generating I/O. You could have someone dirtying large amounts of memory in a memory-mapped file, or maybe several threads issuing reads using asynchronous I/O. fio needed to be flexible enough to simulate both of these cases, and many more. Fio spawns a number of threads or processes doing a particular type of I/O action as specified by the user. fio takes a number of global parameters, each inherited by every thread unless parameters given to a specific thread override that setting. The typical use of fio is to write a job file matching the I/O load one wants to simulate. Running fio is normally the easiest part - you just give it the job file (or job files) as parameters: fio [options] [jobfile] ... and it will start doing what the jobfile tells it to do. You can give more than one job file on the command line; fio will serialize the running of those files. Internally that is the same as using the stonewall parameter described in the parameter section. If the job file contains only one job, you may as well just give the parameters on the command line. The command line parameters are identical to the job parameters, with a few extra that control global parameters. For example, for the job file parameter iodepth=2, the mirror command line option would be --iodepth 2 or --iodepth=2. You can also use the command line for giving more than one job entry.
For each --name option that fio sees, it will start a new job with that name. Command line entries following a --name entry will apply to that job, until there are no more entries or a new --name entry is seen. This is similar to the job file options, where each option applies to the current job until a new job entry is seen. fio does not need to run as root, except if the files or devices specified in the job section require that. Some other options may also be restricted, such as memory locking, I/O scheduler switching, and decreasing the nice value. If jobfile is specified as -, the job file will be read from standard input. Interpreting the output Some important metrics within the output: Bandwidth statistics based on samples. Same names as the xlat stats, but also includes the number of samples taken (samples) and an approximate percentage of total aggregate bandwidth this thread received in its group (per). This last value is only really useful if the threads in this group are on the same disk, since they are then competing for disk access. IOPS statistics based on samples. Same names as bw. Test disk device Create or edit the fio.conf.disk configuration file. Run fio fio.conf.disk to test disk performance. Test disk file Create or edit the fio.conf.file configuration file. Run fio fio.conf.file to test disk performance. If you get an error like "write-512k: you need to specify size=", specify the size= option in the job's section of the configuration file to fix that issue.
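A minimal job file along the lines of the fio.conf.disk mentioned above might look like this (the device path, sizes, and the libaio engine are assumptions for illustration; note that size= is present, which avoids the "you need to specify size=" error):

```ini
; fio.conf.disk - hypothetical example job file (placeholders throughout)
[global]
; Linux native asynchronous I/O, bypassing the page cache
ioengine=libaio
direct=1
iodepth=2
runtime=60

; sequential 512 KiB writes; size= says how much I/O each job should do
[write-512k]
rw=write
bs=512k
size=1g
; CAUTION: writing to a raw device destroys the data on it
filename=/dev/sdb
```

Options in the [global] section are inherited by every job below it, and any option repeated inside a job section overrides the global value, exactly as described above.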
Computer software, hardware and software development tools are always evolving. It is important to become certified in the tools of your trade. These certifications will help you get a new job, move up the ladder and instill confidence in your colleagues. If you are working towards becoming a software developer, getting certified in Microsoft .NET is a good start. Many organizations use Microsoft software, and Microsoft offers many certification exams to prove your knowledge and skills. So, you are interested in software development and want to become certified in Microsoft .NET? How Do You Get Microsoft .NET Certification? To become certified in Microsoft .NET there are a few certification exams you can take to prove your proficiency as a Microsoft Technology Associate (MTA). The MTA certifications show proficiency in three areas: IT infrastructure, databases and software development. For software development certifications, you can pass the MTA developer certification exams that cover software development fundamentals, HTML5 app development fundamentals, and mobility and device fundamentals. Software Development Fundamentals This certification focuses on the fundamental knowledge in software development. It will test your knowledge in C# and Microsoft Visual Basic .NET. The skills measured in this exam include understanding core programming, object-oriented programming, general software development, web applications, desktop applications and databases. This certification is the first step in becoming a certified MTA developer. HTML5 App Development Fundamentals Mobility and Device Fundamentals This certification focuses on the fundamental knowledge of Windows devices and mobility. It includes knowledge of Active Directory, antimalware products, firewalls, network topologies and devices, and network ports.
The skills measured in this certification exam include understanding device configuration, data access and management, device security, cloud services, and enterprise mobility.

How Do You Prepare for the Microsoft .NET Certification Exams?

The best way to prepare for the Microsoft .NET certification exams is to take a continuing education prep course. There are many benefits to an exam prep course. They include comprehensive learning, an opportunity to ask questions, industry-experienced instructors and the ability to network with classmates.

Benefit #1: Comprehensive Learning

Benefit #2: Opportunity to Ask Questions

The continuing education courses are a great time to ask all the questions you have about programming and web development in Microsoft .NET. As you attend the 198 hours of lecture and hands-on learning, you have the opportunity to ask any questions that you may have. The industry-experienced instructors will guide you through the fundamentals and prepare you for the certification exams.

Benefit #3: Industry-Experienced Instructors

Our instructors have worked in web development and programming and know what you need to study to pass the Microsoft .NET certification exams. With these preparation courses, you have the ability to receive one-on-one instruction, so you learn what you need and don't fall behind. The instructors understand what will help you master the certification exams and prepare you for your first day at your new job as a software developer.

Benefit #4: Network with Classmates

You will meet classmates from all industries, disciplines, and experience levels. And your fellow classmates will be preparing for the same certification exam. They will want to come together and create study groups that can help everyone study for the Microsoft .NET certification exam. They will also be a great resource to network with when you start your new career.

What Does the Microsoft .NET Certification Prep Course Teach?
Programming fundamentals include programming logic, defining and using variables, performing looping and branching, developing user interfaces, capturing and validating user input, storing data, and creating well-structured applications. These fundamentals also include C# program structure, language syntax, and implementing programming details. The courses in web development focus on the .NET Framework, coding to enhance the performance and scalability of a website, designing and developing services that access local and remote data, and developing and deploying services to hybrid environments for both on-premises servers and Windows Azure.

So, you have your sights set on becoming a software developer and you want to prove your knowledge and skills in programming and web development. Becoming a Microsoft Technology Associate is a great way to prove your abilities, get a new career in software development or climb the ladder in your current organization. Take the time to prepare for the Microsoft .NET certification exam. Knowing all the content that will be on the test will boost your confidence and prepare you to pass this important certification exam.

Want to Learn More?

Florida Technical College was founded in 1982 to provide post-secondary training in specialized business fields. Florida Technical College programs are exciting and dynamic, evolving over the years to meet the needs of students and the job marketplace. Ready to move from a job to a career? Florida Technical College Continuing Education is here to help. Contact us to learn more about the Microsoft .NET certification exam preparation courses at Florida Technical College.
Add an optional cache

Performance can be poor in some circumstances. One of the reasons is that every call needs to determine the current state of the flash. This includes:

- what the newest and oldest pages are
- where the oldest data is
- where the newest data is
- where the newest data of a particular key is

This could be sped up quite a bit if some information was kept in a cache in RAM.

Do you have an API in mind that would support caching?

I was thinking to create a StorageController struct that holds the cache. push, peek, etc. would then be implemented on this struct, taking &mut self. The struct could also hold the mutable reference to the flash and the flash range so they don't need to be passed to every function. Would these changes make sense to you? And would you be interested in a PR? It would be a pretty big change to the API so I wanted to check before putting in too much work.

Hey, thanks for showing interest! Yeah, I'd be open for a PR. But, I am going to be strict about it! So be warned, I won't accept it if I don't like it. As for the design, I really like the freestanding functions right now. A recent addition, peek_many and pop_many, kinda goes against the statelessness of the code. Up until then, all state existed in the flash only. But that's still mostly the case and I want to stick to that. So what does that mean for a cache?

- I want the cache to be optional.
- I want the freestanding functions to remain.
- All state at any point must be in flash (or recoverable from flash). This isn't strictly true right now while in an operation, but it is true between operations. This is something I'm working on improving, so that future task cancellation or random shutoff can be handled.

This has some consequences. I'd like to see extra functions so you have, say, pop and cached_pop. cached_pop can then be the actual implementation, which pop calls. (Though this can only really happen if the cache is (optionally) small.)
At any point, even within operations, it should be OK to wipe the entire cache. Writes are written to flash directly. The cache can then be updated to reflect the new state. So how do we add a cache that isn't a burden to people who don't want it (e.g. for RAM size reasons)? We make a trait. The trait can be used to query information from the flash. For example (feel free to think of better names):

```rust
trait StateQuery {
    fn youngest_page(&self, flash: ..., flash_range: ..., ...) -> Result<..., ...>;
    fn last_item_of_page(&self, flash: ..., flash_range: ..., page_index: usize) -> Result<(ItemHeader, u32), ...>;
}
```

This trait would have functions for common/slow high-level queries. An implementation of this trait can be made that doesn't do any caching and just queries the flash, just like it does right now. An additional implementation can be made on top of the direct-flash implementation that tries to actually cache things. Instead of reading the flash, it intercepts the query and returns some of its cached state if it has any. How we should inform the cache of a state change is something I don't know. Maybe that should flow from what feels good from the required refactor. @avsaase does this all make sense a bit?

Thanks for the detailed description of how you want this to work. I have a few questions:

- What are your thoughts on the expected performance vs RAM usage tradeoff when adding caching? Maybe I'm overlooking something because I don't know the code base in detail, but the amount of data that needs to be cached seems minimal, whereas the time spent searching for pages with a given state can be substantial. I guess this question boils down to: why should caching be optional? If you want to keep caching optional then that's totally fine of course.
- How about putting the optional caching behind a feature instead of using traits? This would reduce the amount of code that needs to be changed and reduce compile time.
It wouldn't be possible to both use and not use caching in the same code base, but I feel that would be a very niche use case.

> How we should inform the cache of a state change is something I don't know. Maybe that should flow from what feels good from the required refactor.

I'm not sure what you mean by this. Do you want the caching system to be resilient to changes to the flash outside of this crate's flash interactions? I need some time to get familiar with the code base before I can commit to making a PR, but I think it would be an interesting challenge.

The RAM requirements really depend on how much we want to cache. Maybe that should be up to the user to decide. For example, best performance could be reached if the entire flash content was replicated in RAM together with a ledger of all item locations. That's not really realistic I would say, but just the ledger of the item locations is realistic. But that might be too much for some people. Say you've got 32 KB of flash and an average item length of 20 bytes. If we store the address of each item in a u32, that gives us ~6.5 KB of RAM required to cache it. Not everyone has that. So I like building systems that are extensible, and IMO that can be reached well with the sort of trait I proposed. This isn't as good if it's behind a feature flag. I guess the compile time will be a little bit longer, but if the user is only using one type of caching, then there's only one instantiation of all the functions. So that should be fine. If you rely too much on features, they will be very annoying. A trait will make sure all functionality always works. There's no path that's less well tested. Also, I want to maintain the relative simplicity of the crate. A required cache makes everything so much more complex because of all the extra state that then has to be kept, even if it's just a little bit. Later there's gonna be a bug somewhere and I will want to be able to get to the bottom of it as fast as possible.
If the cache is disabled and the bug is still there, then I know the bug stems from the state on the flash itself or some logic around it. If the bug is only there when the cache is enabled, then I'll know I'll have to look at the cache. TL;DR: I want to use the type system for extensibility and maintainability. I see a future where I want to provide multiple amounts of caching, including none.

I added this to my previous post while you were typing your reply, so you probably missed it. The requirement to keep the freestanding functions makes it necessary to use some kind of global mutable state for the cache. Do you have a preference for how to handle this in this crate?

Ah thanks, I did miss that, yes. The user keeps the cache. We do not keep it for them. This means that, just like the flash, the user gives a reference to the cache in the function call. So we keep no global state. That wouldn't work anyway, because the crate might be used for multiple regions of flash or even different flash chips at the same time.

The global state may not necessarily be global-as-in-linking global, but something the application keeps (per region, if it is using some kind of partitioning). The underlying question to me is how to deal with cache invalidation (yeah, one of those) in the face of having the freestanding functions. (How) Can we forbid using the freestanding data structures on flash on which a cache is used? Do we need to do this in the type system? (We probably can't, because right now the freestanding functions take a NorFlash and Range, and nothing keeps anyone from mixing those with the wrong kind of storage.) Can and should we state that if there is any cache, it needs to be passed in to all of the functions that interact with that region, under the same penalties (arbitrary data or even panics?) you get for mixing a map with a queue in the same flash? Doing some enforcing at the API level may be a good idea, but is likely outside of this issue's scope.
(Maybe even outside of this crate's scope, as it'd require a split_at_mut style partitioning from embedded_storage.) It's certainly doable for a cache to always perform some kind of cache validation (probably checking the checksum of the oldest page, and whether anything has been written on the partially open one), but that's a trade-off I'd rather not make (and instead demand that the user be consistent).

@chrysn Good questions! I don't think we can enforce it in the type system. Mainly for these reasons:

- We can only enforce things after initialization. That means that if the microcontroller reboots, it can do whatever because all previous state is gone.
- We don't want to own the flash. This is because the user might want to use different flash regions for different purposes, and they might even want to make a backup of the flash managed by this crate. If we owned the flash, the user would instead create some shared flash thing, which undermines the reason for owning the flash in the first place.

So far the current way has been fine IMO. It's pretty clear you shouldn't mix different data structures in the same location. But enforcing that cache usage must be consistent is less clear. In principle the user should be able to go from cache -> no cache and no cache -> new empty cache. What we cannot realistically support is cache -> new/no/different cache -> old cache. I don't think we can stop that at the API level. Having the freestanding functions does make it easier to do wrong, I do agree with that, but only a little bit. Reflecting on this problem, I think the best course of action is to simply document it well and require the user to always specify their cache. That means no separate peek and cached_peek. Just peek, where you can pass in a NoCache instance. By forcing the user to pass a cache variable every time, you force them to think about it.
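The design discussed above might be sketched as follows. This is an illustrative toy, not the crate's real API: the trait, type, and function names are assumptions, and the "flash" is just a byte slice — but it shows the shape of a freestanding function that takes a caller-owned cache, where every answer remains recoverable from flash:

```rust
// Sketch of the proposed optional-cache design (all names are illustrative).
// All state lives in flash; the cache only memoizes answers to slow queries
// and may be wiped at any time.

/// Queries the crate needs to answer about the flash layout.
trait StateQuery {
    /// Index of the "youngest" page, scanning the flash if needed.
    fn youngest_page(&mut self, flash: &[u8]) -> usize;
}

/// Zero-size implementation: no caching, always scan the flash.
struct NoCache;

impl StateQuery for NoCache {
    fn youngest_page(&mut self, flash: &[u8]) -> usize {
        // Stand-in for the real scan: pick the page with the highest marker byte.
        flash
            .iter()
            .enumerate()
            .max_by_key(|&(_, b)| *b)
            .map(|(i, _)| i)
            .unwrap_or(0)
    }
}

/// Caching implementation: remember the answer, fall back to scanning.
struct PageCache {
    youngest: Option<usize>,
}

impl StateQuery for PageCache {
    fn youngest_page(&mut self, flash: &[u8]) -> usize {
        if let Some(p) = self.youngest {
            return p; // cache hit: no flash access
        }
        let p = NoCache.youngest_page(flash);
        self.youngest = Some(p); // repopulate the cache from flash state
        p
    }
}

/// Freestanding function, as in the current API: the caller owns and passes
/// in both the flash and the cache, so the crate keeps no global state.
fn peek(flash: &[u8], cache: &mut impl StateQuery) -> usize {
    cache.youngest_page(flash)
}

fn main() {
    let flash = [1u8, 5, 3];
    let mut cache = PageCache { youngest: None };
    assert_eq!(peek(&flash, &mut cache), 1);
    // Wiping the cache is always safe: the answer is recoverable from flash.
    cache.youngest = None;
    assert_eq!(peek(&flash, &mut NoCache), 1);
    println!("ok");
}
```

Note how going from cache to no cache is just a matter of passing NoCache instead of PageCache; the one thing this design cannot police is reusing a stale cache after the flash was modified through a different cache, which matches the "document it well" conclusion above.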
Add experimental OPAQUE support

Summary

- add non-standard Key Exchange Request / Response / User Authentication frames
- convey OPAQUE protocol messages in these frames
- perform mutual authentication with OPAQUE and derive a session key
- improve the testing pipeline

Details

As an alternative to tunneling RaSTA over TLS, I added optional support for a key exchange that uses the OPAQUE augmented Password-Authenticated Key Exchange. This allows server and client to derive fresh session keys for the hash functions in use, at a user-specified interval and on initial connection. It also guarantees authentication (in the sense that the client has access to the password, and the server either has password access or a pre-computed user record that contains the blinded password). I.e., the server can authenticate the client and the client can authenticate the server, possibly without the server knowing the password (thus, the password cannot be stolen from the server). All OPAQUE-specific changes are enabled or disabled using compiler defines. Also, I added stages in the GitHub Actions pipeline that automatically test "normal" RaSTA connections, DTLS RaSTA connections and OPAQUE-enabled RaSTA connections.

This includes so many good contributions and refactorings, in addition to the actual new proposal. Thank you! Before merging, I have a few concerns:

- In contrast to our TCP/TLS work, you are changing the RaSTA protocol itself (deviating from the DIN). Can we somehow separate the 'DIN' implementations from the 'experimental' implementations even more (i.e. in terms of separate code files or even source directories)? What are the expected next steps/lifetime of the 'experiment' with regards to the rest of the repo?
- Is the OPAQUE library dependency optional, i.e. only required if the experiment is enabled? Is this feasible or asking too much?
- Sometimes, there are entire files showing up as diff. Is this because of (previously) wrong line endings?
> This includes so many good contributions and refactorings, in addition to the actual new proposal. Thank you! Before merging, I have a few concerns
> * In contrast to our TCP/TLS work, you are changing the RaSTA protocol itself (deviating from the DIN). Can we somehow separate the 'DIN' implementations from the 'experimental' implementations even more (i.e. in terms of separate code files or even source directories)? What are the expected next steps/lifetime of the 'experiment' with regards to the rest of the repo?
> * Is the OPAQUE library dependency optional, i.e. only required if the experiment is enabled? Is this feasible or asking too much?

Regarding question 1: I have tried to build the OPAQUE extensions in a way that makes them easy to remove from the code base in the (likely) case that the VDE does not want to pursue this further. I was hoping to demonstrate the feasibility of the aPAKE extension in our final project presentation, and that there is enough interest in the idea to justify keeping it in the repository. I can separate the extensions even more from the remaining code if you want, so that it is easy to remove the changes if required.

Regarding question 2: libopaque (and libsodium) are only required for the OPAQUE extensions; if OPAQUE is disabled, the libraries are not downloaded and linked. I have added one more step in the GitHub Actions workflow that builds and tests the protocol without libopaque and libsodium.

> Sometimes, there are entire files showing up as diff. Is this because of (previously) wrong line endings?

It is possible CLion changed the line endings to line-feed automatically.
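The dependency gating described in the answer to question 2 could be sketched in CMake roughly as follows (the target name `rasta`, the option name, and the define are illustrative; the repository's actual build files may differ):

```cmake
# Hypothetical sketch: make libopaque/libsodium optional at configure time.
option(ENABLE_OPAQUE "Build experimental OPAQUE key-exchange support" OFF)

if(ENABLE_OPAQUE)
    # Only define the compile-time switch and link the crypto libraries
    # when the experiment is explicitly enabled; a default build never
    # downloads or links them.
    target_compile_definitions(rasta PRIVATE ENABLE_RASTA_OPAQUE)
    target_link_libraries(rasta PRIVATE opaque sodium)
endif()
```

A CI matrix can then build once with the option ON and once OFF, which mirrors the extra GitHub Actions step mentioned above.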
A few snakes do not conform to these categories. Atractaspis is solenoglyphous, but the fangs swing out sideways, allowing it to strike without opening its mouth, perhaps allowing it to hunt in small tunnels. Scolecophidia (blind burrowing snakes) typically have few teeth, often only in the upper jaw or lower jaw.

Informal or popular terminology

Yes, snakes have spines. Your spine houses your spinal cord. The spinal cord transports messages from your brain out through the nerves on each side of your vertebrae (the 24 movable bones in your spine).

Of course. Snakes have a spine, and some say that there have been signs of them having bone structures that suggest that they once had legs ^_^ (of course we know in the Bible the snake was cursed to slide on his belly and eat the dust of the earth) =^-^= There, hope I helped! ~*WinglessAngel*~

Do snakes really have bones? Other fun facts about snakes and interesting conspiracies! Did you know that snakes can grow up to 30 feet long? Did you know that snakes do not have eyelids? Did you know that snakes do have bones?

Does a Snake Have a Backbone?

A snake does have a backbone. In fact, the majority of bones in a snake's body are vertebrae, the cylindrical bony segments that make up the backbone or spine. Snakes can have anywhere from 130 to 500 vertebrae, depending on species and length. The vertebrae have interlocking projections on the front and back ends ...

A hemipenis (plural hemipenes) is one of a pair of intromittent organs of male squamates (snakes, lizards and worm lizards).
Hemipenes are usually held inverted within the body, and are everted for reproduction via erectile tissue, much like that in the human penis. They come in a variety of shapes, depending on species, with ornamentation such as spines or hooks.

Snakes are reptiles, and all reptiles have spines. Slow worms (which are a kind of legless lizard) also slither and have spines. In fact, all vertebrates (which include reptiles, mammals, birds, and fish) have spines, by definition.

BODY OF A SNAKE. In case you were wondering ('cause they are soooo flexible), snakes actually do have bones. Animals with bones are known as vertebrates — snakes are vertebrates. A snake's backbone is made up of many vertebrae attached to ribs. Humans have approximately 33 vertebrae and 24 ribs.

Why some male bats have spines on their penises. ... The question is, why do some bats have penile spines? What are they for, and why do their sizes and shapes vary so much? Based on what we know ...
Resizing Qt's QTextEdit to Match Text Height: maximumViewportSize()

I am trying to use a QTextEdit widget inside of a form containing several Qt widgets. The form itself sits inside a QScrollArea that is the central widget for a window. My intent is that any necessary scrolling will take place in the main QScrollArea (rather than inside any widgets), and any widgets inside will automatically resize their height to hold their contents. I have tried to implement the automatic resizing of height with a QTextEdit, but have run into an odd issue. I created a sub-class of QTextEdit and reimplemented sizeHint() like this:

```cpp
QSize OperationEditor::sizeHint() const {
    QSize sizehint = QTextBrowser::sizeHint();
    sizehint.setHeight(this->fitted_height);
    return sizehint;
}
```

this->fitted_height is kept up-to-date via this slot that is wired to the QTextEdit's contentsChanged() signal:

```cpp
void OperationEditor::fitHeightToDocument() {
    this->document()->setTextWidth(this->viewport()->width());
    QSize document_size(this->document()->size().toSize());
    this->fitted_height = document_size.height();
    this->updateGeometry();
}
```

The size policy of the QTextEdit sub-class is:

```cpp
this->setSizePolicy(QSizePolicy::MinimumExpanding, QSizePolicy::Preferred);
```

I took this approach after reading this post. Here is my problem: As the QTextEdit gradually resizes to fill the window, it stops getting larger and starts scrolling within the QTextEdit, no matter what height is returned from sizeHint(). If I initially have sizeHint() return some large constant number, then the QTextEdit is very big and is contained nicely within the outer QScrollArea, as one would expect. However, if sizeHint() gradually adjusts the size of the QTextEdit rather than just making it really big to start, then it tops out when it fills the current window and starts scrolling instead of growing.
I have traced this problem to the fact that, no matter what my sizeHint() returns, it will never resize the QTextEdit larger than the value returned from maximumViewportSize(), which is inherited from QAbstractScrollArea. Note that this is not the same number as viewport()->maximumSize(). I am unable to figure out how to set that value. Looking at Qt's source code, maximumViewportSize() returns "the size of the viewport as if the scroll bars had no valid scrolling range." This value is basically computed as the current size of the widget minus (2 * frameWidth + margins) plus any scrollbar widths/heights. This does not make a lot of sense to me, and it's not clear to me why that number would be used anywhere in a way that supersedes the sub-class's sizeHint() implementation. Also, it does seem odd that the single frameWidth integer is used in computing both the width and the height. Can anyone please shed some light on this? I suspect that my poor understanding of Qt's layout engine is to blame here.

Edit: after initially posting this, I had the idea to reimplement maximumViewportSize() to return the same thing as sizeHint(). Unfortunately, this did not work, as I still have the same problem.

I have solved this issue. There were two things that I had to do to get it to work:

Walk up the widget hierarchy and make sure all the size policies made sense, to ensure that if any child widget wanted to be big/small, then the parent widget would want to be the same thing. This is the main source of the fix.

It turns out that since the QTextEdit is inside a QFrame that is the main widget in a QScrollArea, the QScrollArea has a constraint that it will not resize the internal widget unless the widgetResizable property is true. The documentation for that is here: http://doc.qt.io/qt-4.8/qscrollarea.html#widgetResizable-prop. The documentation was not clear to me until I played around with this setting and got it to work.
From the docs, it seems that this property only deals with cases where the main scroll area wants to resize a widget (i.e. from parent to child). It actually means that if the main widget in the scroll area ever wants to resize (i.e. child to parent), then this setting has to be set to true. So, the moral of the story is that the QTextEdit code was correct in overriding sizeHint(), but the QScrollArea was ignoring the value returned from the main frame's sizeHint(). Yay! It works!

Thanks for finding this one out. It will definitely help me later on.

Aaron, could you elaborate on your solution? I can't seem to access a QScrollArea associated with a QTextEdit. Can you explain, given a QTextEdit, what you set the widgetResizable property on? i.e. how they're connected? QTextEdit actually derives from QAbstractScrollArea, which doesn't have a setWidgetResizable property (I'm on Qt 4.7). Thanks, this is really bothering me!!!

@supertwang you might want to check my answer to a similar question here.

I also had a similar problem, but with QPlainTextEdit, so if anybody has problems with QPlainTextEdit — be aware that it has a different document layout manager and it reports document()->size() in number of lines, not in number of pixels. You have to get the QFont from the widget, use QFontMetrics::lineSpacing(), and multiply it by document()->size().height() to get the height in pixels. You can read about it in the general description of the QPlainTextEdit class.

Sounds promising, but without an example it's hard to see how to implement this. Hence Dave's question. @Dave did you ever manage to do it? I have the same problem right now.

For me, it was just setting the QScrollArea to setWidgetResizable(true);. Thanks so much!

You may try setting the minimumSize property of the QTextEdit to see if that forces the layout to grow. I don't understand most of Qt's layout scheme, but setting minimum and maximum sizes pretty much does what I want it to do. Well, most of the time anyway. Thanks for the answer, Stephen.
I have tried setting every combination of minimumSize, maximumSize, and fixedSize (with the corresponding QSizePolicy settings), but none have made any difference.

Agreed; in my case I found that I have to set both minimum and maximum sizes for it to work properly.
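Putting the two halves of the accepted fix together, a minimal sketch might look like this (class and member names are illustrative, not the original poster's code; it requires a Qt build to compile):

```cpp
// Sketch: a QTextEdit that grows to fit its document, inside a QScrollArea
// that actually honors the child's sizeHint() via widgetResizable.
#include <QApplication>
#include <QScrollArea>
#include <QTextEdit>

class GrowingTextEdit : public QTextEdit {
public:
    explicit GrowingTextEdit(QWidget *parent = nullptr) : QTextEdit(parent) {
        setSizePolicy(QSizePolicy::MinimumExpanding, QSizePolicy::Preferred);
        // Recompute the preferred height whenever the document changes.
        connect(document(), &QTextDocument::contentsChanged,
                this, &GrowingTextEdit::fitHeightToDocument);
    }

    QSize sizeHint() const override {
        QSize hint = QTextEdit::sizeHint();
        hint.setHeight(m_fittedHeight);
        return hint;
    }

private:
    void fitHeightToDocument() {
        document()->setTextWidth(viewport()->width());
        m_fittedHeight = document()->size().toSize().height();
        updateGeometry();  // ask the layout to re-query sizeHint()
    }

    int m_fittedHeight = 0;
};

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);
    QScrollArea area;
    // The crucial part: without this, the scroll area ignores the child's
    // sizeHint() and the QTextEdit scrolls internally instead of growing.
    area.setWidgetResizable(true);
    area.setWidget(new GrowingTextEdit);
    area.show();
    return app.exec();
}
```

With widgetResizable left at its default of false, the same code reproduces the "tops out at maximumViewportSize()" behavior described in the question.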
showDialog throws an exception when localizing an app to a language not found in MaterialLocalizationsDelegate

Steps to Reproduce

I'm using Flutter to write an app that must be localized in Norwegian and English. I declared support for both languages using supportedLocales in my main app widget. At some point I want to display a dialog using showDialog(Context, Widget), but I noticed that in the implementation of showDialog there is a call to MaterialLocalizations.of(context).modalBarrierDismissLabel that is causing an exception (see below) and prevents the dialog from showing at all. By removing Locale('nb', 'NO') from the list of supported locales, the app behaves as expected (the dialog appears).

Logs

I/flutter (22056): The getter 'modalBarrierDismissLabel' was called on null.
I/flutter (22056): Receiver: null
I/flutter (22056): Tried calling: modalBarrierDismissLabel

Could this be solved by calling GlobalMaterialLocalizations.of(context).modalBarrierDismissLabel instead?

Same error with the Belarusian language, locale be-BY. Is there a way to override the call without waiting for the full translation from Flutter?

Flutter has added support for Norwegian since this issue was first opened. Support for the Belarusian language is not available yet. There's now a somewhat more informative console warning (debug mode) when the app's supportedLocales names a locale that Flutter hasn't been localized for:

I/flutter (14227): ════════
I/flutter (14227): Warning: This application's locale, be_BY, is not supported by all of its
I/flutter (14227): localization delegates.
I/flutter (14227): > A MaterialLocalizations delegate that supports the be_BY locale was not found.
I/flutter (14227): See https://flutter.io/tutorials/internationalization/ for more
I/flutter (14227): information about configuring an app's locale, supportedLocales,
I/flutter (14227): and localizationsDelegates parameters.
I/flutter (14227): ════════

In this case, probably the best thing to do is for the app to include its own support for the Belarusian language. Currently there's no explanation in https://flutter.io/tutorials/internationalization/ about how to do that, so here we go. To add Material library support for a single new locale, one must create a locale-specific GlobalMaterialLocalizations subclass that defines the approximately 65 localizations the Material library depends on. Additionally, one must create a LocalizationsDelegate subclass that essentially just constructs an instance of the new GlobalMaterialLocalizations subclass. There's a complete example, minus the actual Belarusian translations, here: https://gist.github.com/HansMuller/978e8703c2c4253e3113ef468b6e7936. The locale-specific GlobalMaterialLocalizations subclass is called BeMaterialLocalizations and the LocalizationsDelegate subclass is _BeMaterialLocalizationsDelegate. The value of BeMaterialLocalizations.delegate is an instance of the delegate, and that's all that's needed by an app that uses these localizations. The delegate class includes basic date and number format localizations. All of the other localizations are defined by String-valued property getters in BeMaterialLocalizations. The getters return "raw" Dart strings that have an r prefix, like r'About $applicationName', because the $variables are expanded by methods with parameters, like aboutListTileTitle(String applicationName). More information about the localization strings can be found in the flutter_localizations package's README file.

I'm closing this because we've added a section to the i18n tutorial that covers the material in https://github.com/flutter/flutter/issues/15439#issuecomment-480462059; see https://github.com/flutter/website/pull/2603
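Wiring such a delegate into an app is short. A sketch, assuming the BeMaterialLocalizations class from the gist above (the class itself and its delegate are defined there, not here, and the exact Flutter APIs may have shifted since this issue):

```dart
import 'package:flutter/material.dart';
import 'package:flutter_localizations/flutter_localizations.dart';

void main() {
  runApp(MaterialApp(
    localizationsDelegates: [
      // App-provided Material localizations for be_BY (from the gist):
      BeMaterialLocalizations.delegate,
      // Stock delegates for every locale Flutter already supports:
      GlobalMaterialLocalizations.delegate,
      GlobalWidgetsLocalizations.delegate,
    ],
    supportedLocales: const [
      Locale('en', 'US'),
      Locale('be', 'BY'),
    ],
    home: const Scaffold(body: Center(child: Text('hello'))),
  ));
}
```

With the custom delegate listed, MaterialLocalizations.of(context) resolves for be_BY and showDialog no longer hits the null modalBarrierDismissLabel.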
R Programming Sample Projects

We're very proud to be part of The Family Network! We are looking for talented writers, moderators, architects, designers, and authors — talented people to help us achieve the goal of being the best we can be. We would love to work on a project that was a result of you and your work. We hope to become the best we can be, and to be part of the future. We are looking for a person who can bring the craft of design to the table. We are looking for a team who can help us make the most of what we've got. We've worked on an application that we'd like to be able to keep running in the future. We are hoping to be able to say, "I need you to have a good design experience." You will get to work with us once you have a good idea of what you like to do.

Designers/Moderators/Architects

We would love to be able to bring the craft to the table by using our talented designers and architects. We've worked on several projects over the past year, including a project that we've been working on for several months. We would like to be a part of the next iteration of The Family Network. In this article, we will dive into the design process from a design perspective. Our team will follow the same process as we did for The Family Network.

R Programming Expert Rating

We've been building the first prototype for the project and will be working on a new prototype of the first prototype. We will begin to build up the prototype into a final design. Our team is looking to build up a prototype to meet with the designer/moderator/architect. We'll be working on the project in the next few months.

Project Background

The Family Network is being built by our team of designers, architects and mod_devs. The goal is to build a prototype of something that was created as a result of the design decisions of the family network.

The Process

We have worked with many different designs over the years.
The team is looking for a creative writer and architect who can help them create the best design for The Family Network. We are seeking a person who will do the same. We have worked on projects over the last 15 years and we have an idea of what we hope to do. We work on our own projects and will return when we are ready.

When We Get Started

We will be working with a very busy schedule. In order to get started and create a prototype, we will have to schedule a meeting.

R Programming Coding Help Online Free

If you would like to have a meeting with us, please visit the Facebook page.

What to Expect

We expect to work on the project on a regular basis. We've been working with our design team for several years and have worked with them on many projects. Some of the projects we're working on involve large-scale design, such as the design of an A/V display, the design of a product that we use, the design for a TV, the creation of a prototype, and the design of the prototype.

Who We Are

The family network is a family of five.

R Programming Sample Projects for the Next Generation of Ecosystems

R is the programming language for R programming. It is the language for programming R programs. It is used for R programming. R has a wide range of technologies and is capable of the programming of many different programming languages, including languages that are written in R. In R programming, R has many features, such as access to mappings, as well as the ability to use a mapping to access many different programming language platforms. The R programming language is designed to be portable, and it is easy to use and very fast. We have a language called R. Programming has one language: R. Programming is the language used in programming R.
Programming can be seen as a programming language for a wide range and many different programming platforms. R programming is a very powerful language, very fast, and it can be used for many different programming tasks. Often times, there are a number of programming languages that are used for programming R, but the R programming language has a few limitations. “R is a better language for programming than a standard C language, because programming R is easy and fast,” says Dr. J. DeNardo. R has two main developers: the first is Dr. DeNario. Dr. DeNaro is the second. He is the first R programmer, and he is also the third R programmer, from the top. DeNaro is a R programmer. R Programming Assignments Help With R Programming Homework One of the main advantages of R programming is that it is easy for developers to use and use. It is a very useful language for developers to understand. There are many R programming languages and many more R programming languages for developers. What is R Programming? R Programming is a programming model of programming R, especially R programming. The R programming language, as you can see in the Table of Contents, is a programming standard. For example, the R programming standard is a programming text, which is written in R, and the R programming text is a R program. A R program is a text that is written by a programmer under the name R. A programming text is R programs in R, a programming language. Programming language R is a programming tool that is used to translate and translate data in R programming into R. Programming language R has many benefits. It can be used to translate data in programming languages, and the language can be used as a text editor. To use R programming in a programming language, you need a programming language and a programming language to be used. And you need a mapping of R programming to R text. R Programming Homework Doer I have R programming in my home computer. I work in IT. 
I have several computers, which are R programming, but the most important part is programming R. I have one R programming language. I have R programming on a server computer. I have an R programming language on a server machine. But R programming has many advantages. – R programming allows you to use R programs. – R programs are easy to use. Many R programming languages include a mapping of languages to R programming. This mapping is very important for R programming because it allows you to access R programming in many different R programming languages that can… R Programming Sample Projects This project will collect and analyze what you learn in the program and then make it into a programming project. The project will be a series of exercises that you’ll do in order to improve and develop your skills in programming. This project is for your reference and can be viewed at [www.theorizon.com/project/programming/program/programming-projects]. The Assignment The project will be organized around two main areas: the project structure and the course. The assignment, written by the instructor, will be the basis of your progress in programming. The instructor will work with you and his or her team, and will provide the following: the course design, which will prepare you for the course; the project structure, which will provide you with a framework for the project; the exercises; and the program, which will be a collection of exercises that will help you write your program. The course will be divided into parts, which will then be completed by the instructor in the course. This is a short project that will help the instructor to work with you and your team.
The project structure will be: The projects will consist of 10 exercises The exercises will be divided in two parts: a project structure a course structure The training will be organized into three parts: One is an optional class in which you will learn how to build your own specific skills in these exercises. An optional class is a group of 5 exercises that will be used to build the skills of the class. The class exercises will help you to build your skills in these classes. It is the goal of the project to identify and analyze the strengths and weaknesses of the various skills and develop them in the exercises. There are two major ways that you should approach the project: 1. Create a class that is easy to do 2. R Programming Beginner Homework Create a group that is easy for you to work with This is the only way that the instructor can help you to create a class. You can create a class, but you cannot create a group. Here are the exercises you should take to create a group: In the first assignment, you will learn to build your basic skills with the example of the example of a tool that you can use to get started with your program. You will learn how you can use a tool to get started using some basic tools. You will also learn how to use a tool that will be easy for you and will help you develop your skills. In this assignment, you are going to use two different tools and you are going back to the example of an example of many tools that you can learn. The first tool is the tool that I have already mentioned. The tool you have already mentioned is the tool you have made with the example. I will give you the tool, the tools, that I have made, that I am using. The tool I am using will be the tool that you have used to develop the program. The second tool is another tool that I am learning. The tool that I had made is the tool I am going to use. Once you have started building your program, you can see the exercises that you are going through in this lesson. 
Cheap R Programming These exercises are about how you can create a program and build it. A brief description of the exercises and the exercises that
Visual Studio Express 2013. Package Failed I have encountered the following error message after finishing the Visual Studio Express 2013 installation. And here's what I found in the log file: [06D8:0C80][2014-12-03T18:24:39]i338: Acquiring package: aspnetmvc4vwd12tools_1014, payload: cabA3BBAEE7F3255814C4DE370C415391E5, download from: bits://go.microsoft.com/fwlink/?LinkId=427850&clcid=0x409 [06D8:15A4][2014-12-03T18:24:52]i000: MUX: ExecuteError: Package (webdeploy_x64_en_usmsi_902) failed: Error Message Id: 2738 ErrorMessage: The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 2738. [0254:2660][2014-12-03T18:24:58]e000: Error 0x80070643: Failed to install MSI package. [0254:2660][2014-12-03T18:24:58]e000: Error 0x80070643: Failed to execute MSI package. [06D8:15A4][2014-12-03T18:24:58]e000: Error 0x80070643: Failed to configure per-machine MSI package. Has anyone who has run into the same issue figured it out? Thank you so much. Let me know if my answer works for you and if you need more help. Hi, faby. When trying to install Web Deploy 3.5 as a standalone application, I got another error message: "The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 2738". Could you give me some suggestions about this? Thanks. Yes, could you provide some information about your system? Version of Windows? 32 or 64 bit? I've updated my answer. Let me know if that Microsoft fix works for you. Sure. Windows 7, 64-bit version, Processor: Intel(R) Core(TM) i7-3610QM CPU @ 2.30GHz 2.30 GHz. Hope it helps. Thanks, faby :) Hi, faby. I did try the Microsoft fix, but it still did not work. Yeah. It was the same problem. Did you try to reinstall the single component after running the Microsoft wizard? Yes. After running the Microsoft fix, I tried reinstalling, but things still did not work. I've updated the answer.
Try that and let me know. Try installing it as a standalone program from here and then reopen Visual Studio. Have a look at the System Requirements section: Supported Operating System: Windows 7 Professional, Windows Server 2003 Service Pack 2, Windows Server 2008, Windows Server 2008 R2 SP1, Windows Server 2012, Windows Server 2012 R2, Windows Vista, Windows XP. The supported platform is 64-bit for this download. The .NET 2.0 Framework SP1 or greater must be installed. If that doesn't work, try this Microsoft fix: run the exe, which should solve some machine problems. If none of these suggestions work, you can try these steps: Go to Start --> Run, type cmd and press Enter. Type cd %systemroot%\system32 and press Enter. Type regsvr32 vbscript.dll and press Enter. The next steps are needed because you have a 64-bit version: type cd %systemroot%\syswow64 and press Enter, type regsvr32 vbscript.dll and press Enter, then type exit. Re-run the Web Deploy installer. Source: from this link. Hi, faby. The same problem still occurred. Try running Command Prompt as administrator: click Start -> All Programs -> Accessories, right-click "Command Prompt" and click "Run as administrator". Which message did you get after typing regsvr32.exe vbscript.dll?
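The manual steps from the thread can be collected into one small batch script. This is only a sketch of the suggestion above, not an official Microsoft fix: it assumes 64-bit Windows and must be run from an elevated Command Prompt, since error 2738 typically means the VBScript engine used by MSI custom actions is not registered.

```bat
:: Sketch of the fix discussed above for MSI error 2738:
:: re-register the VBScript engine that MSI custom actions rely on.
:: Run from an elevated Command Prompt (Run as administrator).

cd /d %systemroot%\system32
regsvr32 vbscript.dll

:: On 64-bit Windows, also register the 32-bit copy under SysWOW64.
cd /d %systemroot%\syswow64
regsvr32 vbscript.dll

:: Then re-run the Web Deploy installer.
```

If regsvr32 reports success but the installer still fails with 2738, a stale per-user registration of vbscript.dll can be the cause, which is what the linked Microsoft fix-it addresses.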
Issues Parsing JsonSchema into CSharp Classes I want to say that this is a great library and offers a lot of good features, but I have run into some issues that I couldn't see documented anywhere else. My use case is that I have 9 schema files that are all connected one way or another. I have a central schema that links them all together, and I am parsing that schema into CSharp classes using GenerateTypes and then outputting the types to separate CS files. I am using the latest NuGet package. 1.) The creation of duplicate classes with a 2 after them. For instance, I have a class called "MyClass" and then another one called "MyClass2". MyClass2 inherits from MyClass and offers nothing new to the class. Additionally, everything else references MyClass, so MyClass2 doesn't even have any uses. I believe this might be related to the way I structure my schemas: { "definitions":{ "test":{ "title":"MyClass", "type":"object", "properties":{ "value":{ "type":"number" } } } }, "allOf": [ { "$ref":"#/definitions/test" } ] } I believe it's creating one version from the definition and then another from the allOf. I use this method to clearly define my definitions and make it easier to read across multiple schema files. My understanding is that this is allowed in the JsonSchema standard. 2.) Following on from the above, another strange issue I ran into is the use of the following: "definitions":{ "test":{ "title":"MyClass", "type":"array", "items":{ "oneOf":[ { "type":"number" }, { "$ref":"#/definitions/test" } ] } } }, My understanding is that this should allow me to create an array where any index of the array could be either a number or a MyClass array. Unlike a tuple with items:[], this should allow for a list with multiple types. Unfortunately, what actually happens is that the first item of the oneOf is used and the second item is just forgotten.
I understand the difficulty of creating a class to generalize the two types but at least doing something like List would be better than selecting just the first item. 3.) Titles are not preserved as class/type names. When referencing across files I noticed that title properties are not used in determining the class name. I found several issues mentioning title being used as a fallback on github but I believe it should be the first thing used especially if you end up with multiple places referencing the same item. I found that it would take a specific property name instead of the title. a.json "definitions":{ "test":{ "title":"MyClass", "properties":{ "value":{ "type":"number" } } } } b.json "definitions":{ "test2":{ "title":"MyOtherClass", "properties":{ "valueb":{ "$ref":"a.json#/definitions/test" } } } } c.json "definitions":{ "test3":{ "title":"MyOtherOtherClass", "properties":{ "valuec":{ "$ref":"a.json#/definitions/test" } } } } In the situation above MyClass or test would end up being named valuec as the class/type name as it was the last property to reference test. I think it might be better in this situation to have it be called MyClass so that there is a centralized type. 4.) The final one I have no idea what is going on. 
I have a schema that looks like this: "definitions":{ "actionType":{ "title": "ActionType", "oneOf":[ { "type":"object", "properties":{ "operator":{ "type":"string" }, "operatorGroup":{ "type":"string" }, "scriptFile":{ "type":"string" } } }, { "type": "array", "items":{ "$ref":"#/definitions/function" } } ] }, "function":{ "title": "Function", "allOf":[ { "$ref":"types.schema.json#/definitions/baseType" }, { "properties":{ "name":{ "type":"string" }, "question":{ "type":"string" }, "ref":{ "$ref":"types.schema.json#/definitions/selector" }, "inputs":{ "type":"array", "minItems":1, "items":{ "oneOf": [ { "$ref":"types.schema.json#/definitions/selector" }, { "$ref":"types.schema.json#/definitions/value" } ] } }, "action":{ "$ref":"#/definitions/actionType" }, "output":{ "type":"array", "minItems":0, "items": { "$ref":"types.schema.json#/definitions/selector" } }, "options":{ "$ref":"#/definitions/functionOptions" }, "additional":{ "type":"array", "minItems":1, "items": { "$ref":"types.schema.json#/definitions/selector" } } }, "required":[ "name" ] } ] } } Action is not referenced anywhere else. For whatever reason, ActionType is never generated. Well, actually an ActionType class is generated, but it's blank and contains no properties. Also, Function references an Action2 class that doesn't even exist, causing the code to not even compile. I am guessing this is an issue with understanding what ActionType even is, since it's either an array or an object. I think there is a way to make this kind of class, but maybe this is just a limitation of NJsonSchema. I ended up not using this library and switching to something else, since I only need to convert JSON Schema to C#/TypeScript @igloo15 what did you switch to?
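Point 1 above can be checked mechanically: the top-level allOf contains nothing but a $ref back to the definition, so there are no extra constraints for a generator to put into a second class. The sketch below uses only the Python standard library, with a minimal hypothetical resolve_pointer helper (not part of NJsonSchema), to resolve the reference and confirm it lands on the very same definition object.

```python
import json

# Schema from point 1 above: a definition plus a top-level allOf that only
# references it. A generator that emits one class per schema node sees two
# nodes here (the definition and the allOf wrapper), which is one plausible
# source of the duplicate "MyClass2".
schema = json.loads("""
{
  "definitions": {
    "test": {
      "title": "MyClass",
      "type": "object",
      "properties": { "value": { "type": "number" } }
    }
  },
  "allOf": [ { "$ref": "#/definitions/test" } ]
}
""")

def resolve_pointer(doc, ref):
    """Resolve a same-document JSON Pointer reference like '#/definitions/test'."""
    node = doc
    for part in ref.lstrip("#/").split("/"):
        node = node[part]
    return node

target = resolve_pointer(schema, schema["allOf"][0]["$ref"])
# The allOf wrapper adds no constraints of its own, so the resolved target
# already carries all the information; a second generated class is redundant.
print(target["title"])                          # MyClass
print(target is schema["definitions"]["test"])  # True
```

This supports the reporter's reading of the spec: an allOf with a single $ref validates exactly the same instances as the referenced definition, so a duplicate empty subclass adds nothing.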
Develop an Android application for the education sector (courses; quizzes; forum; ...). The reference model is the following app: Okpabac ([log in to view the URL]). The application will be developed for Android only and includes a backend and a database. Some specificatio... ...based chat application (Android application) with SocketIO. But we notice 2 big problems: 1. Excessive data consumption (in 10 minutes the app consumes more than 30 MB, even in the background). 2. Untimely disconnection of the socket; we are obliged to restart the application to continue the chat. The architecture used is: -- Database: MySQL -- Web Server ...clock. Please send me a fixed-price proposal for this app, including: 1. [log in to view the URL] design / UI & UX design 2. Back-end API & database structure, server configuration & setup 3. Mobile app development (iOS and Android, both of them): UI & UX prototype, functional version & API integration 4. Beta testing & feedback ... ...freelance iOS/Android app developer (Windows would be a plus). On the basis of an original idea and of an already existing website (web-to-store), we want to develop an application quickly. This application will allow our site to reach its full potential. More than an iOS/Android developer... ...WILL TAKE TO COMPLETE THE JOB. *ESTIMATE* BACKEND CODED IN PHP WITH A MYSQL DATABASE. 1. Add a button on the backend when approving members: if their photo is verified by an authentication photo taken when registering, the admin clicks an (Approve + Verify) button. On the user profile for Android/iOS there is a green highlight (Photo Verified). 2. There is a button I need a classifieds app similar to Gumtree, OLX and Quikr, where people can post the items they want to sell and buyers can choose and contact the sellers. 1. Should work for both iOS and Android 2.
Need APIs and database as well, full architecture. 3. Integrate a payment gateway if required We are looking for a programmer for the development of an app for iOS and Android, including website and database connection, for our startup. The website should have the following functions: • Simple, modern website • Connection to a database (cloud) • Customer Portal ○ (Registration, login, possibility of renewal by subscription-model via … I need a classifieds app similar to OLX and Quikr, where people can post the items they want to sell and buyers can choose and contact the sellers. 1. Should work for both iOS and Android 2. Need APIs and database as well, full architecture. 3. Integrate a payment gateway if required 4. Should support both Arabic and English This is my 3rd project on this topic. See [log in to view the URL] for the most complete one. And documentation [log in to view the URL] for a full description. I would like to have a stopwatch app for Android similar to this (Stopwatch/Timer is one of the basic functions of the application): [log in to view the URL] Some screens of Hi, I need someone who can create simple...(longitude and latitude) and save some detail (number and name, address, generated log) in a database, with a dashboard. You can use Laravel, CodeIgniter or another PHP framework for the admin panel. I have a problem with it, but the app needs to be light and available on Android. I will give you more detail in private. Thank you We are looking for a Web and Android App Developer to help us out with our existing mobile app project. We have an Android app that is already built, but we need someone to make edits on it and tie it to our website database. We have the full source code of the app. We need to make the following edits on the mobile app: 1. Change skins and logo of … The application should run on iOS / Android. The user must be registered on the system and authenticate on the mobile device for use.
There should be no permanent user and password prompt on the mobile device. That is, the user will authenticate once and the device remains authorized to use the application. The user using the camera of the device should ...search for a specific store in our database and then view the media associated with the selected store. Also, they would be able to access interactive images with media tags in them, like how Instagram works with tagging. The tags are also stored in our database. It should work for iOS with easy future integration to Android OS. The app will let users search We want ...devices and save it to the database. If we use a smart wristband, there must be an Android app to get data and connect to the device. If we use a smart watch, you must have experience developing apps for smart watches. The web panel can be developed in ASP.NET or PHP, but PHP is the first choice for us. Mobile development must be Android only; no need for iOS. We have the screen designs, flow documentation and the APIs ready. The developer has to build the Android app along with the local database. The app has to be extremely light (less than 2 MB). Push notification (FCM) to be designed. She/he needs to capture customer behaviour data (e.g. time spent in login etc.) using log files. The developer has … Need an Android app for pushing messages to the members. Members must log in via OTP (I have a GET API for sending the OTP). A member can request an OTP only if their number is in the subscriber database. Messages sent over the last 7 days should be visible. The newest message should be visible on top.
Karl Anderson (Digital Creations) has announced the 'stable' version 1.0 release of 'Parsed XML' which "allows you to use XML objects in the Zope environment. You can create XML documents in Zope and leverage Zope to format, query, and manipulate XML. Parsed XML consists of a DOM storage, a builder that uses PyExpat to parse XML into the DOM, and a management proxy product that provides Zope management features for a DOM tree... The Parsed XML product parses XML into a Zopish DOM tree. The elements of this tree support persistence, acquisition, etc. The document and subnodes are editable and manageable through management proxy objects, and the underlying DOM tree can be directly manipulated via DTML, Python, etc. The DOM tree created by Zope aims to comply with the DOM level 2 standard. This allows you to access your XML in DTML or External Methods using a standard and powerful API. We are currently supporting the DOM level 2 Core and Traversal specifications..." Zope is an open source toolkit consisting of "a number of components which work together to provide a complete yet flexible application server package. Zope includes an internet server, a transactional object database, a search engine, a web page templating system, a through-the-web development and management tool, and comprehensive extension support. Zope's open support for web standards such as XML-RPC, DOM, and WebDAV allows unparalleled flexibility and interoperability." From the 'Vision for Parsed XML': "We want Zope to be a simple and natural platform for managing and leveraging XML content. Parsed XML is part of that vision. There are currently many ways to use XML in Zope, and that will still be the case. There will be several reasons why someone would want to use Parsed XML: (1) A general XML parser, storage, and output solution which doesn't need to be customized for any well-formed XML and which scales well. (2) XML-centric editing and management interfaces.
(3) Support for standard XML tools, starting with DOM, that also scales well. (4) Preservation of XML-isms: namespaces, PIs, CDATA/PCDATA, etc. that are part of the input will be part of the output. Parsed XML is only a part of the Zope XML vision..."
- Zope Project
- ParsedXML 1.0 release note
- README for Parsed XML
- Links on Wiki front page
- Vision for Parsed XML
- "Zope: An Open-Source Web Application Server. [Review.]" By Brian Wilson. In WebTechniques Volume 6, Issue 4 (April 2001), pages 80-81.
- "Zope: Open Source Alternative for Content Management. Zope Proves Utility of Open-Source Web Tools." By Mark Walter and Aimee Beck. In The Seybold Report on Internet Publishing Volume 5, Number 7 (March 2001), pages 11-15. In depth: 'SRIP looks at Zope, a free toolkit developed by Digital Creations that's gained favor among daily newspapers, corporations, government agencies and a host of Web startups. Included are details on Zope's new content-management framework, due out this spring.'
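The announcement stresses DOM Level 2 Core compliance, which means a ParsedXML tree answers to the same calls as any other DOM implementation. As an illustration only (using Python's standard xml.dom.minidom with made-up sample content, not ParsedXML or Zope itself), the kind of API a compliant tree supports looks like this:

```python
# Illustration of the DOM Level 2 Core API that ParsedXML aims to comply with.
# This uses the stdlib xml.dom.minidom and invented sample XML, not ParsedXML;
# the same method names would apply to a ParsedXML tree accessed from DTML or
# External Methods, per the announcement above.
from xml.dom.minidom import parseString

doc = parseString('<catalog><book id="b1"><title>Zope Bible</title></book></catalog>')

# DOM Level 2 Core: element traversal and attribute access.
book = doc.getElementsByTagName("book")[0]
title = book.getElementsByTagName("title")[0]
print(book.getAttribute("id"))   # b1
print(title.firstChild.data)     # Zope Bible

# The tree is also writable, which is what makes XML-centric editing possible.
title.firstChild.data = "Zope Book"
print(doc.documentElement.toxml())
```

Because the API is a W3C standard rather than a Zope invention, code written against it transfers between DOM implementations, which is exactly the portability argument the Vision statement makes.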
In a world where technology is constantly changing, where do you focus your time and energy to help customers maximize their revenue and growth? What do you prioritize as a business? In 2016, we commissioned a study by IDC to help guide partners in their prioritization. To build on this guidance, we asked key leaders from across the company to provide their perspective on the greatest opportunities they see for partners in 2017. We hope you find this information valuable in helping you realize your full potential as we kick off the year ahead! Today most companies are also becoming technology businesses. In fact, IDC reports that 70% of the top 500 global companies will have dedicated digital transformation and innovation teams by the end of 2017. From running eCommerce sites and apps to inventing entirely new revenue streams by digitizing products and processes, the new digital economy is changing the face of business. Cognitive services and artificial intelligence (AI) will become increasingly important, with CEOs and business owners wanting to use digital transformation, machine learning and internet of things (IoT) to improve customer engagement, empower employees, optimize operations, and transform products. According to IDC, investment in digital transformation initiatives will reach $2.2 trillion by 2019 (almost 60% more than this year) and by 2018, 75% of developer teams will include cognitive and AI functionality in one or more applications. Partners have an opportunity in 2017 to capitalize on this by bringing value-added services. Beyond helping customers, partners can transform their own businesses, shifting traditional business models from reselling or system integration to becoming more managed service providers or ISVs. Are you still trying to do everything for everyone, or have you identified what you’re good at? There are many ways to differentiate – it doesn’t always mean creating a unique app.
With the technology advances driving digital transformation, industries are regularly being reinvented. However, transformation is not just limited to technology. Your processes, your vertical, your customer identity, your go-to-market model can all be differentiating factors – but to do that, you must know who you are at the core. Differentiation is key to standing out in a competitive marketplace. Partners are differentiating their businesses by targeting verticals or industries, establishing a technology specialization and building intellectual property services. New intellectual property services are being created from advances in AI, bots, mixed reality and more. Keep learning, creating and finding new ways to differentiate yourself. The number of partners actively involved with one another in a digital ecosystem is dramatically increasing. According to a report by Gartner entitled The 2017 CIO Agenda: Seize the Digital Ecosystem Opportunity, “Top-performing organizations have, on average, 78 partners in their digital ecosystems, up from 27 partners two years ago. These organizations expect to nearly double the number of these partners to 143 in the next two years.” How you define your business, and who you choose to partner with, must evolve as more and more companies embrace digital transformation. Defining your unique core is critical to finding truly complementary partnerships, so look inward to figure out your secret sauce. Remember that it’s not about becoming a different partner – it’s about doing things differently. A great example of this approach can be seen in Australia, in a partnership between Chartered Accountants of Australia and New Zealand (CA ANZ) and HubOne. This partnership, enabled by our Cloud Solution Provider model, allows CA ANZ member practices to purchase a unique, integrated solution developed and deployed in conjunction with HubOne, an Australian ISV.
This partnership enables a customized solution that joins together multiple offerings, deployed immediately and easily within a customer’s environment. One of our CSP partners in the region, Rhipe, calls this opportunity the Internet of Partners – I call it a great way to build your business. A new competitive reality has emerged around Application Innovation. Today’s enterprises are in the ‘experience’ business now and must find ways to deliver high-quality mobile experiences to their users at scale. Everyone wants immersive and personal experiences that react in real time, are intelligent and brilliantly predictive. How you help your customers meet the new user expectation for these rich application experiences is critical. To do this, enterprises need to leverage data and cloud in new ways. Taking their entire business ‘mobile’ often means building hundreds of apps. In fact, according to a Gartner report, “demand for mobile apps outstrips available development capacity, thus making quick creation of front-end client apps even more challenging.” Embracing an ‘app factory’ mindset and delivering continuous innovation at scale means tremendous opportunity for Microsoft partners. With Visual Studio and Xamarin, your existing development teams become native mobile developers. With up to 90% code-sharing across device platforms and a complete mobile DevOps solution, you have the agility to build and maintain apps quickly and profitably. Every mobile app needs a secure backend. Azure makes this all easy to do, freeing your resources to focus on delivering your unique value. Predictive analytics and Azure data services help you create intelligent apps which drive deeper engagement and in turn greater business results. One of the biggest opportunities for partners in 2017 will be to deliver managed services that maximize their customers' cloud experience.
According to a study conducted by 451 Research, managed services is projected to be a $43B market by CY2018, growing at a rate 60% faster than the growth in infrastructure-only services. From consulting to migrations, to operations management, managed services provides you with an opportunity to add a new, higher-margin business line that can provide a more stable, steady stream of recurring revenue. At our Worldwide Partner Conference in 2015, we launched the Cloud Solution Provider (CSP) program specifically for partners looking to tap into this booming opportunity. By owning the customer relationship end to end, it provides the perfect platform for partners to build value-added services for their customers that will create stickiness and differentiation from the competition. The program has become so popular that today, we have over 20,000 partners transacting through CSP. Our research has shown that top-performing cloud partners are leveraging the expertise of their technical staff to differentiate their business from the competition. How do they do it? They invest deeply in building their teams’ technical skills. Whether it be through sending their technical teams to in-person training events and conferences or carving out time for their employees to engage in online learning, it’s clear that top-performing partners are committed to prioritizing continuous learning. With a skilled team, partners can build new business opportunities using the latest technology innovations like Artificial Intelligence, Bots, and the Internet of Things. Browse our training options and learn cloud development on your terms. They take an all-hands-on-deck approach to blogging. What better way to make your business stand out than to have your cloud experts blog about the inspiring ways they have leveraged technology to meet their customer needs? They get involved in the community.
Customers are increasingly looking to online forums such as LinkedIn, Stack Overflow and GitHub to identify experts who can assist them with technology challenges and digital transformation. To support you in your quest for increased profitability and growth, we have a number of tools, resources, and programs available to support you including the Microsoft Modern Partner ebook series as well as cloud development on-demand training. Innovation, through cloud and related emerging technologies, is delivering new ways to expand economic opportunity and address some of humanity’s most pressing problems. In fact, the World Economic Forum references “ICT as the backbone” of the 4th industrial revolution in which Cloud is the engine and data is the fuel. In keeping with our empowerment mission, Microsoft is building tools and resources to support cloud for global good aligned to the UN Sustainable Development Goals. There is no shortage of “impact” when partners leverage the cloud for global good. In Health, the Epimed ISV is partnering with 350 hospitals in Brazil using a mobile, Azure-based analytics solution and has reduced the rate of hospital-induced infections in Intensive Care Units by 20 percent. In supporting the Sustainable Development Goal Decent Work and Economic Growth, we and our partner, Sparked, a Netherlands-based company driving digital transformation for the Municipality of Hollands Kroon, helped the municipality close its own data centers and adopt cloud solutions. They are using Azure and Office 365 to improve workplace and business transformation. But the cloud also raises important questions about privacy, safety, and jobs. One of the greatest opportunities for partners in 2017 will be to promote positive change by ensuring that the benefits of cloud computing are broadly shared. I’d encourage our partners to familiarize themselves with two new tools: Cloud for Global Good and National Empowerment Plans. 
The Cloud for Global Good framework lists 78 policy recommendations designed to help government policymakers and industry establish the right environment for leveraging the power of the digital economy while minimizing its risks. The National Empowerment Plan provides a roadmap of technology solutions. Both are important resources as we prepare for the tremendous opportunities ahead.
Aspera Sync / Sync Set Up

Sync reads configuration settings from aspera.conf, which can be edited using asconfigurator commands or manually. The following sections provide instructions for applying the Aspera-recommended security configuration, instructions for editing other configurations, a reference for many of the available configuration options, and a sample aspera.conf.

If Sync is installed on an Aspera transfer server, Aspera recommends setting the following configuration options for each user for greatest security. Additional settings are described in the table below.

By default, Sync events are logged to the Aspera log (see Logging). Aspera recommends setting the Sync log to a directory within the transfer user's home folder. For example:

> asconfigurator -x "set_user_data;user_name,username;async_log_dir,log_dir"

This setting overrides the remote logging directory specified by the client with the -R option.

Sync uses a database to track file system changes between runs of the same session (see The Sync Database). The Sync database should not be located on CIFS, NFS, or other shared file systems mounted on Linux, unless you are synchronizing through IBM Aspera FASP Proxy. If server data are stored on a mount, specify a local location for the Sync database. Aspera recommends setting the database to a directory within the user's home folder, using the same approach as for the local Sync log:

> asconfigurator -x "set_user_data;user_name,username;async_db_dir,log_dir"

This setting overrides the remote database directory specified by the client with the -B option.
To configure Sync settings in aspera.conf using asconfigurator commands, use the following general syntax for setting default values (first line) or user-specific values (second line):

> asconfigurator -x "set_node_data;option,value"
> asconfigurator -x "set_user_data;user_name,username;option,value"

To manually edit aspera.conf, open it in a text editor with administrative privileges from the following location:

C:\Program Files (x86)\Aspera\Point-to-Point\etc\aspera.conf

See an example aspera.conf following the settings reference table. For an example of the asperawatchd configuration, see Watch Service Configuration. After manually editing aspera.conf, validate that its XML syntax is correct by running the following command:

> asuserdata -v

This command does not check whether the settings themselves are valid.

Settings reference (option names follow the sample aspera.conf below):

async_connection_timeout: The number of seconds async waits for a connection to be established before it terminates. Value is a positive integer. (Default: 20) If synchronization fails and returns connection timeout errors, which could be due to issues such as under-resourced computers, slow storage, or network problems, set the value higher, from 120 (2 minutes) up to 600 (10 minutes).

async_db_dir: Specify an alternative location for the async server's snap database files. If unspecified, the files are saved in the default location or the location specified by the client with the -B option.

async_db_spec: Value has the syntax lock_style:value;storage_style:value. lock_style specifies how async interfaces with the operating system; values depend on the operating system (on Windows, the options are undefined or win32). storage_style specifies where Sync stores a local database that traces each directory and file; three values can be used.

async_enabled: Enable (set to true, default) or disable (set to false) Sync. When set to false, the client async session fails with the error "Operation 'sync' not enabled or not permitted by license".

async_log_dir: Specify an alternative location for the async server's log files. If unspecified, log files are saved in the default location or the location specified by the client with the -R option. For information on the default log file location, see Logging.

async_log_level: Set the amount of detail in the async server activity log. Valid values are disable, log (default), dbg1, or dbg2.

async_session_timeout: The number of seconds async waits for a non-responsive session to resume before it terminates. Value is a positive integer. (Default: 20)

directory_create_mode: Specify the directory creation mode (permissions). If specified, directories are created with these permissions irrespective of <directory_create_grant_mask> and the permissions of the directory on the source computer. This option is applied only when the server is a Unix-based receiver. Value is a positive integer (octal). (Default: undefined)

directory_create_grant_mask: Specify the mode for newly created directories if directory_create_mode is not specified. If specified, directory modes are set to their original modes plus the grant mask values. This option is applied only when the server is a Unix-based receiver and directory_create_mode is not specified. Value is a positive integer (octal). (Default: 755)

preserve_acls / preserve_xattrs: Specify whether ACL access data (acls) or extended attributes (xattrs) from Windows or Mac OS X files are preserved. Three modes are supported. (Default: none)
- native: acls or xattrs are preserved using the native capabilities of the file system. If the destination does not support acls or xattrs, async generates an error and exits.
- metafile: acls or xattrs are preserved in a separate file, in the same location and with the same name, but with the added extension .aspera-meta. The .aspera-meta files are platform-independent, and files can be reverted to native form if they are synced with a compatible system.
- none: No acls or xattrs data is preserved. This mode is supported on all file systems.

ACL preservation is only meaningful if both hosts are in the same domain. If a SID (security ID) in a source file does not exist at the destination, the sync proceeds but no ACL data is saved, and the log records that the ACL was not applied. The aspera.conf settings for acls or xattrs can be overridden with the --preserve-acls or --preserve-xattrs options, respectively, in a command-line async session.

Sample aspera.conf:

<file_system>
    ...
    <directory_create_mode> </directory_create_mode>
    <directory_create_grant_mask>755</directory_create_grant_mask>
    <preserve_acls>none</preserve_acls>
    <preserve_xattrs>none</preserve_xattrs>
    ...
</file_system>
...
<default>
    ...
    <async_db_dir> </async_db_dir>
    <async_db_spec> </async_db_spec>
    <async_enabled>true</async_enabled>
    <async_connection_timeout>20</async_connection_timeout>
    <async_session_timeout>20</async_session_timeout>
    <async_log_dir>AS_NULL</async_log_dir>
    <async_log_level>log</async_log_level>
    ...
</default>
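As a sketch only, the two per-user recommendations above (a local Sync log and database inside the user's home folder) end up in the user's section of aspera.conf roughly as follows. The username and paths are illustrative, and the exact surrounding nesting can differ by product version, so prefer the asconfigurator commands shown above to hand-editing:

```xml
<user>
    <name>username</name>
    <async_log_dir>/home/username/sync_logs</async_log_dir>
    <async_db_dir>/home/username/sync_db</async_db_dir>
</user>
```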
Added new commands:
- bc-undo - the new method for undoing import and block commands. The undo list is now a single list for both commands. block /undo and import /undo are removed.
- bc-editmode - allows you to set the world into edit mode, which enables features such as the vanilla prefab command for generating meshes, and importing without having to use the /editmode option. Be wary of using it on populated servers, as I have not tested the ramifications of doing so.

Improvements and fixes:
- 'bc-assets itemicons' now returns Mods folder icons as well as the vanilla icons, along with a count of baked icons and total
- 'block density' now has a /force option to bypass the built-in density validation based on block type (only the block command supports this, not blockrpc)
- 'bc-go /filters' and 'bc-go /index' are now available for listing the filters that can be used for the various game object types
- bc-task - added a status of Aborted (to go with InProgress and Complete) to indicate tasks that were canceled before completion (such as 'visit /stop'). Changed the output to be fully JSON encoded. Added the full command-line text used to the output for each task listed.
- visit - improved the displayed text; includes progress % both in the message returned if you run it while it is in progress and in the bc-task output. visit now has the option to take 4 params to define a multi-region visit.
- Fixed the positioning bugs in the export and block commands
- Added a check of tile entities and setting meta for locking doors on export
- Added bc-remove /ecname=zombieScreamer etc. as a filter to remove all zombies with the same entity class

New since version 2.2.3:
- Improved param validation and command feedback
- SetSkills - added ability to edit skills while offline. Added unlock for locked skills to fix issues with mods changing skills.
- Added BCPrefabs - command to access the info and editing of prefabs in the server's prefab folder
- Added BCEvents - command for controlling the events system and viewing data about the current state
- Altered BCRemove - added a /minibike filter that is required to remove minibikes even when using 'bc-remove /all'
- Added BlockRpc - command for single-block RPC block changes, all without chunk reloading required
- Changed Visit Region - command now accepts multi-region params and reports percentage complete if calling visit while one is in progress
- Altered Chunk Observer - removed the y co-ord as it was irrelevant for chunk observers
- Added RemoveBuffFromEnemy - remove a buff from any living entity
- Added meta1 / meta2 / meta3 sub-commands to both block commands. This allows for opening doors (e.g. blockrpc meta1 x y z /meta=1 or 0) or turning lights on/off (e.g. block meta1 x y z x2 y2 z2 /meta=2 or 0) with a command.
- Added Events system:
  - DeadIsDead - player file backup, restore, and trigger deadisdead boot and remove-profile options
  - LogCache - can be used to display the data from the log entries to console
  - PingKicker - use threshold and count settings to define when someone will be kicked for high ping
  - PositionTracker - record player movements over time
  - Defined reactive vs heartbeat events. Reactive is set up to wait for an action to happen; heartbeat occurs on the designated ticks.
- Fixed up the code for many commands, which will alter some of the params and options available. Help docs should be updated to reflect new usage.
- Updated: added a filter to exclude player backpacks and loot-container destroyed bags from the mutation
- Added support for Dropped Item Spawn Mutation to adjust lifetime and optionally log the id, stack count, name, and position. To enable the feature, the heartbeat system must be enabled (in System.xml) and the spawn mutator enabled, with "EntityItem" in the options list and config="SpawnMutator" as a new attribute:

  <Synapse name="entityspawnmutator" enabled="false" beats="30" options="EntityEnemy,EntityNPC,EntityItem" config="SpawnMutator"/>

  The SpawnMutator.xml (leave the .xml off in the config="" bit) is found in a new 'Events' subfolder in the config (or DefaultConfig) folder:

  <SpawnMutator>
      <Lifetime value="120"/> <!-- seconds -->
      <LogDroppedItems value="false"/>
  </SpawnMutator>

- Update: there were a couple of issues with the bc-give command that are now fixed
- Removed extra array wrappers on some datasets
- Improvements to the bc-assets command - bc-assets meshes will return the embedded xml file converted into an object list for easier reading
- Added current ruleset to rwg data output
- Improved bc-lp command - added a last-saved property. This indicates how long ago the server received player data and saved it (online players only); this will allow SMs to base update requests smartly on the last save time (or allow SMs to poll for the last-save field only, then call a full update request when it resets).
- Added more null checks, especially around the datacache used for online player data
- Limited the array returned by the equip filter to only the slots where items can be placed (instead of a 32-value array with many nulls)
- bc-version - added the full version info for the 7d2d entry (i.e. 16.4 (b8) instead of just 16.4)
- Added bc-give - allows admins to place items directly into player inventory, rather than throwing them on the ground (see help bc-give for more details)
- Added a /sm flag, which returns the help output in JSON format
- Added detailed JSON info for individual commands (i.e. bc-help /sm version would output the info from help version, but encoded in JSON format)

An update to the Web UI will follow later (most likely this weekend).

- Added additional info to the bc-go prefabs command. Be aware that on the first exec after a server reboot, the bc-go prefabs command will take a while (approx. 7-10 secs for the vanilla list). After that, the data will be held in memory so the prefabs won't need to be loaded from disk. To cache the data, the first run should use /full; after that, a filtered lookup will return just the requested info. This will be improved later to build the cache regardless of the info requested by the first exec.

Big refactor of import and block commands:
- import now uses the same area-defining system as tile entities. The name now comes at the start rather than the end of the params list.
- blocks now have subcommands instead of /options for the different types. Additional subcommands have been added (see help for more details):
  - scan - gives stats on blocks within the area given; param * for block name will give stats on all blocks
  - fill - the block specified is rendered into the area given, or the area between the stored loc (see bc-location) and your current location
  - swap - replaces the second block specified with the first block specified within the area given
  - repair - removes all damage from blocks in the area
  - damage - causes damage to blocks in the area
  - upgrade - sets the blocks to the next step in their upgrade path, if any
  - downgrade - sets the blocks to the next step in their downgrade path
  - paint - paints all sides of blocks in the area
  - paintface - paints one side of blocks in the area
  - paintstrip - removes all paint from blocks in the area
  - density - sets density on blocks in the area
  - rotate - rotates blocks in the area
- tile now has a settext subcommand for editing sign texts

Recompile for 16.4:
- Minor tweak to bc-time
- Fixed the function name references for the bc-sleeper and bc-reset chunk commands
- Minor breaking change with a few commands: the player persistent data has an additional field. It should be compatible with older saves, but if you have any issues, delete the BCMData folder in your save-game folder. Full details on the forums.
- Added bc-sleeper command:
  - bc-sleeper list - full world sleeper volume list
  - bc-sleeper chunk - list of volumes in a chunk
  - bc-sleeper clear - clear volumes from a chunk
  - bc-sleeper volume - detailed info on the given volume
- Added bc-reset command: bc-reset chunk cx cz will reset the given chunk
- Added bc-task command - this allows you to see what subthread tasks are currently running, and potentially output from completed tasks
- Added async options to bc-te - use /forcesync if you want to ensure it runs synchronously (mostly for web API commands). In console and telnet, the async reply will be sent when the task is complete. For web API requests the result is sent to the log as well as stored in bc-task output.
- Added spawn command - spawn horde and spawn entity options. You can define both the spawn area and the moveto location, as well as a count and min/max radius from spawn, etc.
- Email: email@example.com
- Registered on: 07/23/2010
- Last connection: 10/20/2017
- Suricata (Manager, Developer, 12/30/2010)

Activity:
- 08:21 AM Suricata Bug #2249 (New): rule with file keyword used with ip or tcp not seen as invalid - Currently signature using ip and tcp and using a file keyword like filemd5 are not valid in the sense they will not m...
- 08:12 AM Suricata Feature #2213: file matching: allow generic file matching / store - This feature is also a bug as there is no warning on a rule like:...
- 03:45 AM Suricata Revision 7ee989a3: prscript: update urls to use OISF repo
- 04:12 PM Suricata Optimization #2218: Leave TSO enabled for Linux AF_PACKET runmode - In your test, you are testing the local stack not Suricata. In most cases, suricata is handling a copy of the traffic...
- 03:22 PM Suricata Optimization #2218: Leave TSO enabled for Linux AF_PACKET runmode - I agree TSO could be interesting to keep. What is your test?
- 01:10 AM Suricata Bug #2217: event_type flow is missing icmpv4 (while it has icmpv6) info wherever available - This behavior has been introduced by commit 548a3b2c93aed79e39a34ee9dd4c68f43a27f363. Idea was not to create flows fo...
- 01:56 AM Suricata Revision 8fa6e065: af-packet: free bpf program - This fixes a small memory leak when Suricata is running with a
- 01:56 AM Suricata Revision 7127ae2b: af-packet: call thread deinit function
- 01:51 AM Suricata Revision 620f2540: prscript: update docker code - Update docker code to latest docker python API. This patch preserves backward compatibility with older versions.
- 09:59 AM Suricata Feature #2199: DNS answer events compacted - Regarding the format, i'm sure some people will be interested only by the "metadata" part. Other will want the detail...
A variation of this post originally appeared on the Rails Masterclass newsletter.

Learning new technologies can be a challenge. With books, forums, videos, blog posts…where do you start?!?! I've used all these resources in the past, and only one thing has reliably led me to really learn the concepts - building something REAL!

How I learned Ruby and Rails

Several years ago, I was learning both Ruby and Rails and trying to figure out which books were worth the time. I ended up starting with Agile Web Development with Rails, which was a fantastic introduction to the features and benefits you'll get from learning the Rails framework. However, like many other books, it guides you through building a website with little real-world value - an e-commerce shopping cart (you've heard this one before, right???). The concepts in the book generally apply to other products built with Rails, but it was sometimes hard for me to see their value elsewhere. I found that the books and articles often skipped over things that were fairly trivial for a more experienced developer, but essential to get a real-world application out the door.

The best way to learn (with a catch)

I propose you build something…anything really. But it would be more useful if the thing you build is useful to either you, someone else, or even better…everyone! But there's a catch…whatever you build, you must not only put it out there for the world to see, you must also share it in whatever way you feel comfortable.

My Craigslist Scraper

A few years ago, I was in the market for a new job. I built a small Craigslist scraper that searched for "my perfect job" and had it notify me via email whenever it found a similar post. The process of creating this little scraper did two things for me. First, it allowed me to learn (I wrote it in Perl and at the time knew very little about the language…). And second, it saved the 15 minutes or so it would take me each day to sift through the new job postings. It really was a win-win!

Start to Finish

It's easy to copy and paste code snippets from a blog post or book. It's much harder to see these applications through to the point where they're useful. Who cares if no one uses it? The value you get from seeing your application code all the way to the point of production is exponentially greater than the value of having run "rails server" on your local development machine. Questions such as "How will I log and be notified of errors?" and "How will I back up my production data store?" take time and thought to answer. There are so many easy-to-use PaaS platforms nowadays that even if you're not the devops guru you aspire to be, you too can have production code running with minimal effort for FREE! Heroku is a great example of a place where you can host your Rails applications without hassle or cost. If you have an application running locally that you'd like to share, hosting it at Heroku is a no-brainer. Building an application that can save you time in your daily routine (e.g. task tracker, appointment reminder, etc.) is a great first step to learning more. It's also very possible that if the application helps you, it could help other people. I'll leave the marketing talk for another time, but stepping back and being an actual user of your application will cause you to look at it in a very different light. It's likely that you'll find ways to drastically improve your application within the first 10 minutes of usage.

Sharing (this is my favorite part)

You've now spent several weeks/months building something interesting - tell the world about it. Tweet about it, post it to your local user group mailing list, email your friends, anything…the more people that see it, the better. Like dogfooding, you'll get valuable feedback that you might not have gotten otherwise. I can hear all the perfectionists now…"I don't want to share it until it's perfect". Having worked on a number of large applications, the candid truth is that it will NEVER be perfect. The feedback you'll get sooner is more valuable than the time you'll waste by attempting perfection. Besides, I have a feeling that most of the features you thought you had to have before you launched are unimportant to the people who actually want to use your application. There's a more subtle point to sharing your new application…subconsciously, you're not going to share something that you think is crappy. If your goal is to ultimately share what you've built, you'll put the work in to make it worthy of sharing, and in the process learn about the finer details of releasing a web application into the wild, not just a basic shopping cart with minimal real-world value.

Where to go from here?

Knowing what you want to build is easier said than done. I keep a list of ideas that I ultimately want to build someday. Some have very little use outside my own world and others probably have potential to be used by other people, but either way, bringing these ideas to life with new technologies or concepts really helps me to solidify their real-world value. If you don't have ideas, a quick Google search for "startup ideas" will provide you with some very interesting results. However, I'd think about the things you're doing every day and ask yourself if any of them can be solved, or aided, with software. If the answer is "yes" to any of them, you have yourself a great candidate. I'd love to hear about what you're building and the challenges you're facing.
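The core of the Craigslist scraper described earlier (scan new postings, match against your target keywords, collect the hits for an email) can be sketched in a few lines. This is a hypothetical sketch only - the original was written in Perl, and feed fetching and email delivery are omitted; the keywords and postings are illustrative:

```python
# Minimal keyword matcher in the spirit of the scraper described above.
KEYWORDS = {"ruby", "rails", "remote"}

def matches(posting_title: str) -> bool:
    """Return True if a posting title contains any target keyword."""
    words = set(posting_title.lower().split())
    return bool(words & KEYWORDS)

postings = [
    "Senior Ruby on Rails Developer",
    "Forklift Operator",
    "Remote Python Engineer",
]
hits = [p for p in postings if matches(p)]
print(hits)  # ['Senior Ruby on Rails Developer', 'Remote Python Engineer']
```

Hook something like this up to a daily cron job and an email library and you have the same 15-minutes-a-day saver.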
I spent some time googling my question but merely found answers related to ROOT vX with X<6, or answers that were not really related to my question. Apologies in case I missed that this has already been discussed somewhere else (given that it is quite obvious). For completeness: I am using ROOT v6.08. I would like to ask whether it is possible to use the LaTeX symbol \ell in a TH1 axis title or label. For ROOT v5 a colleague of mine found a way to use the symbol in a TMathText environment (though I found the italics to be displayed with an unusually strong inclination), but of course not for the axis, only for a text element in the plot. As I understood from past posts in this forum, TLatex (and therefore the TH1 axes) implements a subset of the math symbols available in LaTeX, but I frankly do not see the logic in the selection of this subset. E.g. \ell is a quite common symbol for leptons, and it would be well appreciated if it could be used in axis titles. Also, I see that \slash (as in ETmiss, which is barely used now that people have moved to pTmiss) is implemented but \mathcal isn't, which would be good for, e.g., branching-fraction symbols. Bottom line, my question is: is \ell still not implemented in ROOT v6, and if so, will it be implemented in one of the upcoming versions? Alternatively, I'd like to ask if there is a way to emulate the symbol \ell. I've played around with TTF, but haven't been successful (yet). Obviously, I'd like to avoid (for now) painting a TMathText with solid fill color over the axis title. Thank you and regards!

- https://root.cern.ch/doc/master/classTLatex.html

- Works fine for me with ROOT 6.10/06; what version are you using? You need to escape the "\". The right syntax is: See this example.

- Thank you for your answers. The solution by couet works if I open ROOT directly and build a histogram there. But if I want to use \ell in pyROOT it does not seem to work; my syntax merely results in #ell#ell (as a subscript to M) being written on the axis. I am using ROOT v6.08 and Python 2.7.10.

- I am not an expert in Python… but the idea is simply to find the way to escape the backslash… or maybe in Python escaping is not needed?

- It is needed, so in a Python string you'd put \\ to have one \.

- For a short period of time I thought \\ was required by ROOT and therefore I put \\\\, which results in ##ell. Hm, maybe it's just the wrong data type after all and it needs some more C++-conformant type…

- The double backslash is required by C++, not by ROOT. See the example I pointed to.

- Yeah right, that's why I put \\ for Python too. OK, I now tried in the Python prompt and there it works. Must be a problem with my script then. Python is very relaxed regarding escaping, it seems, so it accepts \e as such and doesn't raise an error. But yeah, I forgot the escaping in my post.

- Note that you can use "raw strings" in Python, which don't do backslash escaping, to make it a bit more readable (note the r in front of the string).

- Sorry for my earlier confusion. The problem is a different one: \\ell works perfectly fine also in my script. The problem occurs when exporting to PDF. In PNG and EPS I can see the ell properly, but the PDF places an #ell rather than the ell symbol. Is this expected, and if yes, how can I overcome it?

- Yes, it is expected. TMathText-specific characters like ell are not implemented in PDF.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.
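The escaping confusion in this thread is purely a Python string question and can be checked without ROOT at all. A small sketch (the commented pyROOT call is the usage under discussion; `hist` is a hypothetical TH1):

```python
# Pure-Python check of the escaping discussed above (no ROOT required).
# In pyROOT, the same string would then be passed on, e.g.:
#   hist.GetXaxis().SetTitle(r"M_{\ell\ell}")
escaped = "\\ell"   # backslash written via an escape sequence
raw = r"\ell"       # raw string: no escaping, identical content
print(escaped == raw)   # True
print(list(raw))        # ['\\', 'e', 'l', 'l'] -- four characters
```

Either spelling hands ROOT the single literal backslash that switches TLatex into its TMathText mode; the raw-string form is simply easier to read.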
import ast, math
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

"""
Purpose of this file:
- Evaluate the transition matrix by computing the likelihood of each
  transition (i.e. the probability of any observation given the history)
  in the historical data, according to this transition matrix. The
  evaluation metric is the average of the logs of these likelihoods.

The algorithm for evaluating the transition matrix (computing the
likelihoods) is as follows:
- For each trajectory, start with a belief state according to the
  specified initial proficiency. At each time step, multiply the belief
  state by the transition matrix to obtain a new belief state. Use this
  new belief state to calculate the probability of the observed
  proficiency levels. Finally, update the belief state by setting the
  probabilities of impossible proficiency-level combinations to 0 and
  then normalizing. Move to the next iteration.
"""

settings = open("settings.txt", 'r')
NUM_KC = int(settings.readline())
NUM_PLEVELS = int(settings.readline())
settings.close()

# Functions from parseDatasheet.py
def encode_state(state):
    res = 0
    for i in range(len(state)):
        res += state[i] * NUM_PLEVELS**(len(state)-1-i)
    return res

def decode_state(num):
    res = [0] * NUM_KC
    for i in range(NUM_KC):
        res[-1-i] = num % NUM_PLEVELS
        num = num // NUM_PLEVELS  # integer division (Python 3)
    return res

def state_match(s1, s2):
    for i in range(NUM_KC):
        if s1[i] != -1 and s1[i] != s2[i]:
            return False
    return True

# This tells us the proficiencies revealed by each action.
action_related = pd.read_csv("action.csv")
reveal_dict = {}
for i in range(action_related.shape[0]):
    action_name = action_related.iloc[i]["action"]
    reveal_dict[action_name] = action_related.iloc[i]["related_kc"]

# Get list of actions. Use same order as in parseDatasheet.csv.
actions_sorted = []
for action in reveal_dict:
    actions_sorted.append(action)
actions_sorted = sorted(actions_sorted)
print(actions_sorted)

action_index = {}
for i in range(len(actions_sorted)):
    action_index[actions_sorted[i]] = i

# Use test_train_split.txt to determine which trajectory to start from.
test_train_split = open("test_train_split.txt", "r")
train_percent = int(test_train_split.readline())
validation_percent = int(test_train_split.readline())
test_train_split.close()

# Find index of the (train_num)^th trajectory, and discard the
# trajectories which come before.
historical_data = pd.read_csv("MDPdatasheet.csv")
trajectory_count = int(historical_data.iloc[-1]["Student_ID"]) + 1
train_num = math.floor(trajectory_count * (train_percent + validation_percent)/100)
start_index = -1
for i in range(historical_data.shape[0]):
    if historical_data.iloc[i]['Student_ID'] == train_num:
        start_index = i
        break
historical_test_data = historical_data.iloc[start_index:]

# Find average log-likelihood of all the transitions.
# Use belief state to summarize all of the history until a time step.
# Only count transitions where some KCs were actually observed.
transition_matrix = np.load("P.npy")
sum_log_likelihoods = 0
num_transitions = 0
belief_state = np.zeros(NUM_PLEVELS ** NUM_KC + 1)

for i in range(historical_test_data.shape[0]):
    current_action = historical_test_data.iloc[i]['Action_Types']
    if current_action == 'Prior Assessment Test':
        # Reset belief state at the start of an episode.
        new_initial_prof = ast.literal_eval(historical_test_data.iloc[i]['Cur_Proficiency'])
        state_index = encode_state(new_initial_prof)
        belief_state.fill(0)
        belief_state[state_index] = 1
        # Debug
        print("================================================")
    elif current_action == 'Final Exam':
        # Belief state is not updated --- the assumption of the model is
        # that the final exam simply moves the student to the terminal state.
        observation = ast.literal_eval(historical_test_data.iloc[i]['Cur_Proficiency'])
        encoded_final = encode_state(observation)
        final_likelihood = belief_state[encoded_final]
        sum_log_likelihoods += np.log(final_likelihood)
        num_transitions += 1
        # Debug
        print("Final likelihood: " + str(np.log(final_likelihood)))
        print("================================================")
    else:
        action_transition_matrix = transition_matrix[action_index[current_action]]
        # Note: the rows of action_transition_matrix represent the starting
        # state, and the columns represent the ending state!
        belief_state = np.matmul(belief_state, action_transition_matrix)
        # If assessment test, then update our log-likelihood average.
        # Also update belief state by assigning 0 probability to
        # impossible states, and normalizing the rest.
        if current_action[:2] == 'AT':
            revealed_proficiencies = ast.literal_eval(historical_test_data.iloc[i]['Cur_Proficiency'])
            # Convert from dictionary to list.
            possible_states = [-1] * NUM_KC
            for kc in range(NUM_KC):
                if kc in revealed_proficiencies:
                    possible_states[kc] = revealed_proficiencies[kc]
            likelihood = 0
            for state_index in range(NUM_PLEVELS ** NUM_KC):
                decoded_state = decode_state(state_index)
                if state_match(decoded_state, possible_states):
                    likelihood += belief_state[state_index]
                else:
                    # This state is not possible after the new observation.
                    belief_state[state_index] = 0
            print(np.log(likelihood))
            sum_log_likelihoods += np.log(likelihood)
            num_transitions += 1
            # Normalize the belief state.
            sum_beliefs = 0
            for state_index in range(NUM_PLEVELS ** NUM_KC):
                sum_beliefs += belief_state[state_index]
            for state_index in range(NUM_PLEVELS ** NUM_KC):
                belief_state[state_index] /= sum_beliefs

print("Overall Average Log-likelihood: " + str(sum_log_likelihoods/num_transitions))
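The belief-state update loop above can be illustrated with a self-contained toy case: assume a single KC with two proficiency levels and a hypothetical 2x2 transition matrix. One propagate / score / zero-out / renormalize cycle then looks like this:

```python
import numpy as np

# Hypothetical 2-state transition matrix: rows are the starting state,
# columns the ending state (matching the note in the script above).
P = np.array([[0.7, 0.3],
              [0.0, 1.0]])

belief = np.array([1.0, 0.0])   # start certain in state 0

belief = np.matmul(belief, P)   # propagate one step: belief is now [0.7, 0.3]
likelihood = belief[1]          # probability of the observed state 1
belief[0] = 0.0                 # observation rules out state 0
belief = belief / belief.sum()  # renormalize

print(likelihood)        # 0.3
print(belief.tolist())   # [0.0, 1.0]
```

The script does exactly this over every assessment step, accumulating log(likelihood) and averaging at the end.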
Bob Coecke, Chief Scientist at Quantinuum

Bob Coecke is Chief Scientist at Quantinuum, Distinguished Visiting Research Chair at the Perimeter Institute for Theoretical Physics, and Emeritus Fellow at Wolfson College, Oxford. Previously he was Professor of Quantum Foundations, Logics and Structures at the Department of Computer Science at Oxford University, where he spent 20 years, co-founded and led a multi-disciplinary Quantum Group that grew to 50 members, and supervised close to 70 PhD students. He pioneered Categorical Quantum Mechanics (now in the AMS's MSC2020 classification), the ZX-calculus, DisCoCat natural language meaning, mathematical foundations for resource theories, Quantum Natural Language Processing, and DisCoCirc natural language meaning. His work has been headlined by various media outlets, including Forbes, New Scientist, PhysicsWorld, and ComputerWeekly.

Dr. Samantha Kleinberg, Associate Professor, Charles V. Schaefer, Jr. School of Engineering and Science, Department of Computer Science, Stevens Institute of Technology

Samantha Kleinberg is an Associate Professor of Computer Science at Stevens Institute of Technology. She received her PhD in Computer Science from New York University and was a Computing Innovation Fellow at Columbia University in the Department of Biomedical Informatics. She is the recipient of NSF CAREER and JSMF Complex Systems Scholar Awards. She is the author of Causality, Probability, and Time (Cambridge University Press, 2012) and Why: A Guide to Finding and Using Causes (O'Reilly Media, 2015), and editor of Time and Causality Across the Sciences (Cambridge University Press, 2019).

Dr. Vipin Kumar, Regents Professor and William Norris Endowed Chair, Department of Computer Science and Engineering; Director, CSE Data Science Initiative, University of Minnesota

Vipin Kumar is a Regents Professor at the University of Minnesota, where he holds the William Norris Endowed Chair in the Department of Computer Science and Engineering.
Kumar received the B.E. degree in Electronics & Communication Engineering from the Indian Institute of Technology Roorkee (formerly the University of Roorkee), India, in 1977, the M.E. degree in Electronics Engineering from Philips International Institute, Eindhoven, Netherlands, in 1979, and the Ph.D. degree in Computer Science from the University of Maryland, College Park, in 1982. He also served as Head of the Computer Science and Engineering Department from 2005 to 2015 and as Director of the Army High Performance Computing Research Center (AHPCRC) from 1998 to 2005. Kumar's research spans data mining, high-performance computing, and their applications in climate/ecosystems and health care. His research has resulted in the development of the isoefficiency metric for evaluating the scalability of parallel algorithms, as well as highly efficient parallel algorithms and software for sparse matrix factorization (PSPASES) and graph partitioning (METIS, ParMetis, hMetis). He has authored over 300 research articles and has coedited or coauthored 10 books, including the textbooks "Introduction to Parallel Computing" and "Introduction to Data Mining", which are used worldwide and have been translated into many languages. Kumar's current major research focus is on bringing the power of big data and machine learning to understanding the impact of human-induced changes on the Earth and its environment. Kumar served as the Lead PI of a 5-year, $10 million project, "Understanding Climate Change: A Data Driven Approach", funded by the NSF's Expeditions in Computing program, which is aimed at pushing the boundaries of computer science research.
Kumar has served as chair/co-chair for many international conferences in the areas of data mining, big data, and high performance computing, including the 25th SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2019), the 2015 IEEE International Conference on Big Data, the IEEE International Conference on Data Mining (2002), and the International Parallel and Distributed Processing Symposium (2001). Kumar co-founded the SIAM International Conference on Data Mining and served as a founding co-editor-in-chief of the Journal of Statistical Analysis and Data Mining (an official journal of the American Statistical Association). Currently, Kumar serves on the steering committees of the SIAM International Conference on Data Mining and the IEEE International Conference on Data Mining, and is series editor for the Data Mining and Knowledge Discovery Book Series published by CRC Press/Chapman Hall. Kumar has been elected a Fellow of the American Association for the Advancement of Science (AAAS), the Association for Computing Machinery (ACM), the Institute of Electrical and Electronics Engineers (IEEE), and the Society for Industrial and Applied Mathematics (SIAM). He received the Distinguished Alumnus Award from the Indian Institute of Technology (IIT) Roorkee (2013), the Distinguished Alumnus Award from the Computer Science Department, University of Maryland, College Park (2009), and the IEEE Computer Society's Technical Achievement Award (2005). Kumar's foundational research in data mining and high performance computing has been honored by the ACM SIGKDD 2012 Innovation Award, which is the highest award for technical excellence in the field of Knowledge Discovery and Data Mining (KDD); the 2016 IEEE Computer Society Sidney Fernbach Award, one of the IEEE Computer Society's highest awards in high-performance computing; and the Test of Time Award from the 2021 Supercomputing Conference (SC21).
How to control 120V power outlet with low voltage

I would like to power a 120V attic fan every time my AC comes on. The low voltage wire connecting the thermostat with the compressor (to control the compressor) runs through the attic (as the compressor is on the roof), and I was thinking of putting some sort of junction in the attic so the same wire that turns the compressor on also controls the attic fan. I can easily branch off a ceiling light box to install a box to power this, outlet or direct wiring. My question is: which electrical device (a switch?) can I use that takes 24V as input and switches a 120V outlet or direct wiring on and off (on a separate circuit from the low voltage)? I imagine something like a power transformer that hangs on the outlet (input), plus an input from the low voltage wire which controls another outlet (output) depending on whether it is energized or not. I am asking this because the fan that I have is not smart and does not take thermostat input, only 120V on/off. So I am trying to control the release of the 120V every time the AC is on, as the control wire for it runs nearby.

A relay, just like the other stuff your thermostat controls uses for this purpose. Not surprisingly, you'll find that 24VAC-coil, 120/240VAC-contact relays are very, very common, and available in a wide range of ratings. You'll need one rated to operate the horsepower of your fan motor (or more, but not less) at 120V. It may also be rated for more horsepower at 240V, but you need the HP-at-120V rating if the fan motor is wired for 120V.

Do relays usually come with a plug outlet? No, they are usually wired into a device or junction box. However, such things are made, taking a quick look. As you know, product recommendations are off topic, but the search term "24VAC relay controlled receptacle" got some results like that, among others. But the plentiful ones readily available from electronic parts suppliers are not UL-listed.
They are RU-Recognized, but that's not the same thing at all, because it fails to address a bunch of stuff about packaging, protection, and fitment. Further, you can't mix 24V and 120V in the same box, so how are you going to hook it up? The right answer is something like an Aube, which is UL-listed and has its wires going to appropriate places.

Connect a relay that has a 24VAC coil between the C wire and the Y wire, and use the relay contacts to switch your extra fan.

If your goal is saving energy by pulling hot air from the attic when the air conditioning cycles on, your efforts may be counterproductive. Research has demonstrated you may be pulling conditioned air out of your home by introducing a negative pressure in the attic. The ceiling contains many penetrations for wiring, lighting, and plumbing. These penetrations provide a path for indoor air to be pulled out of the home by the attic fan. Your home is probably pressurized by the furnace or air handler, and the attic fan makes the problem worse. Your best bet, if possible, is using large static ventilation at the peak of the roof, expanding the area of gable-end venting, or other alternatives. The fan you install or have existing adds to your utility bill and is a point of repair or maintenance.

Interesting thought, but I'd think going to a hot roof or, barring that, properly air sealing the top ceiling plane would be a better bet yet?
On the Desktop Linux in the news All in one big page See also: last week's Distributions page. Lists of Distributions Please note that security updates from the various distributions are covered in the security section. News and Editorials Coyote Linux rethinks revenue model. We have been following and reporting on Coyote Linux since December of 1999. Coyote Linux is a single-floppy distribution that turns a PC into a simple masquerading router/firewall in order to share an Internet connection among computers on a LAN. In August of 2000, Coyote Linux Pro was announced. This version of Coyote Linux came with a configuration tool that ran under Windows, to allow for easy configuration of Coyote Linux by non-Linux users. This tool, the Coyote Windows Disk Creator, was closed source and proprietary. While Coyote Linux itself continued to be freely available with source, Coyote Linux Pro cost $50. The money was used to pay for the bandwidth and co-location costs of the Coyote Linux website. This week, the Coyote Linux website reported that sales of Coyote Linux Pro had been terminated: constant, repeated bogus orders had destroyed the revenue from those sales, prompting the discontinuation. The future of the entire Coyote Linux project was also in doubt as a result. It seems likely, though it will always be unprovable, that the bogus orders were specifically generated in order to punish the author for choosing to bundle a proprietary tool with his product. Although we're well-known advocates against the inclusion of proprietary tools, the idiocy of campaigning against them by placing bogus credit card orders is simply astonishing. Anyone who didn't like the inclusion of the proprietary configuration tool had plenty of good alternatives: choose not to use the software, choose another software package to do the same job, or, if none of them met your criteria, develop one yourself that did.
Those are all acceptable methods of supporting Free Software. Lying and cheating are not acceptable. Anyone who has done this owes Coyote Linux author Joshua Jackson an apology and some money. Of course, perhaps there really are that many dishonest people who wanted the Windows configuration tool but were unwilling to pay for it. That would also be sad and pathetic, but would reflect less on the morals of some Free Software advocates. Meanwhile, Coyote Linux is not gone. Any project that can generate 2GB of traffic per day has a lot of supporters and those supporters have managed to convince Joshua to keep going. Check this description of his plans for the future. Revenue generation will come, instead, from expanding the number of sites hosted by his small company, Vortech.net. If you need a place to host your own site, consider them. You'll have the satisfaction of also supporting a free software project. Let's hope this method will produce the revenue that he needs. Oh yes, the Coyote Wizard source code has also now been released. However, since it is written in Borland Delphi, you'll need to use that proprietary tool in order to work with it. Debian prepares to freeze. Debian is a distribution in search of the perfect freeze process. So far, they haven't found one. Debian freezes tend to start early, but end late, in spite of large amounts of effort. In addition, the length of a freeze has meant that the final product, once released, contains old versions of many popular applications (newer ones weren't allowed in due to the freeze). We're pleased, though, to see that they are continuing to examine the problem and look for innovative solutions. This time, Anthony Towns, the Woody release manager, posted to debian-devel his plans for the upcoming freeze. 
"So, what I've been thinking, and what I'm (belatedly) proposing, is to roughly invert the test cycles and the freeze itself, so that instead of freezing everything then doing test cycles to work out where we're at, we instead choose some part of Debian to test, test it, and, if it's good enough, freeze it. Once everything's successfully tested and frozen, we release." The estimated length of the freeze cycle is still five months (and that is considered highly optimistic). We certainly wish them luck; we'd enjoy announcing a new Debian distribution in July or August, only a year after the previous one. Slackware News. Slackware modified its installation defaults for XFree86 this past week, now choosing to follow the XFree86 defaults. Those, in turn, have been approved as part of the Filesystem Hierarchy Standard (FHS). The upgrade script for the next version of Slackware will be responsible for converting from the old-style install to the new. Slackware admins will have to remember not to look under /var for the actual install, but will gain more compatibility when using or supporting non-Slackware systems. glibc 2.2.2 was installed on the Intel and Sparc platforms, resulting in a few minor changes. XFree86 4.0.2 was also installed (on Intel), with 4.0.1 being removed and XFree86 3.3.6 being moved to a new directory called "/pasture". Jesper Juhl also posted an article on upgrading from KDE 1.1.2 to 2.0.1 on Slackware. Linux-Mandrake News. MandrakeSoft has announced that it will be supporting the PHP-Nuke project. PHP-Nuke is a PHP-based system which makes it easy to create online community sites. MandrakeSoft's support would seem to take the form of hiring the PHP-Nuke team. SmoothWall News. The SmoothWall team made it out to this year's Open Source and Free Software Developers' Meeting (OSDEM). They've published a news item about the event and their attendance. Team members sported the first-ever SmoothWall T-shirt.
"The theatre started filling until they were around 270 people seated ready to listen to the SmoothWall talk (available in media streaming format on www.opensource-tv.com - go watch :) ). It seemed to go well. We had an hour to fill and we overran with question and answer sessions with attendees from as far as Russia who were using SmoothWall." SmoothWall is a GPL Linux distribution specifically designed to be a router and a firewall. SmoothWall is based on VA Linux 6.2.1 "which is an optimised RedHat 6.2 build customized in the labs at VA Linux". Note that SmoothWall is not a VA product, just based on one. Trustix News. Trustix, a general-purpose Linux distribution out of Norway with an emphasis on security, announced this week plans to jointly develop a Linux training program with IBM. This adds Trustix to the list of distributions with which IBM is currently working: Caldera, Red Hat, SuSE and Turbolinux. It also appears to be a new entry into the certification wars: "IBM and Trustix are planning to take the co-operation in a direction where joint seminars are being offered, and new training packages as well as a certification programme are developed". Debian News. The latest list of packages needing work has been distributed. This is where you'll find packages that have been offered up for adoption or orphaned. With the announcement of the planned Debian freeze also comes the renewal of the party season -- the bug-squashing party season, that is. Here is your invitation for the first of this round of frivolity. For more Debian news, check out this week's Debian Weekly News. In it, you'll learn why the boot-floppies team is looking for help and hear concerns that Debian doesn't have enough hardware to handle all of its auto-builds, particularly for the m68k platform. Got any old hardware you want to donate? Minor Releases. Released this week: Section Editor: Liz Coolbaugh February 22, 2001
Delphi 2014.R2 (Autocom) Diagnostics Software Setup Free REPACK Download File >>> https://tlniurl.com/2tz5k8 How to Install and Activate Delphi 2014.R2 (Autocom) Diagnostics Software for Free Delphi 2014.R2 (Autocom) is a powerful diagnostics software that can work with various types of vehicles, such as cars, trucks, buses, motorcycles, etc. It can perform various functions, such as reading and clearing fault codes, displaying live data, performing actuator tests, coding and programming, etc. In this article, we will show you how to install and activate Delphi 2014.R2 (Autocom) diagnostics software for free on your computer. A compatible diagnostic interface, such as Delphi DS150E (New VCI), Autocom CDP+ or Multidiag Pro+. A Windows XP or Windows 7 computer with internet connection. A USB cable to connect the diagnostic interface to the computer. A link to download the Delphi 2014.R2 (Autocom) diagnostics software setup file and the activation tools. You can get it from here: https://www.totalcardiagnostics.com/pages/multidiagpro/mdp-new-vci-cars/ Turn off your internet connection and antivirus software. Run the Delphi 2014.R2 (Autocom) diagnostics software setup file and follow the instructions. Choose "DS150E (New VCI)" from the drop-down menu and leave the destination folder as default. Let the installation complete. Go to the folder where you installed the software, which is usually C:\\Program Files\\Delphi Diagnostics\\DS150E (New VCI) or C:\\Program Files (x86)\\Delphi Diagnostics\\DS150E (New VCI), and delete the entire folder called "data". Extract the activation tools zip file that you downloaded and copy or move all the files inside it to the same folder where you installed the software. Overwrite all the files when prompted. Launch the software from your desktop by clicking on the icon called "DS150E New VCI". Click on "Start" and then click on "Yes" to save a file called "FileActivation" on your desktop. 
The software will show an error message, but do not close it. Open the activation tool called "Activator - AutoCom 2014.2" and click on "Open and Activate". Browse to your desktop and select the file called "FileActivation". The tool will activate the file and close itself. Go back to the software that is still open in the background and click on "Start" again. This time, click on "No" when asked to save a file. Browse to your desktop and select the file called "FileActivation" again. Wait for the installation to complete. The software will now register and launch automatically. When it prompts you to update the software from internet, choose "No". Do not update the software from internet, otherwise you will lose the license and it will be very hard to reinstall it again. Connect your diagnostic interface to your computer with a USB cable and to your vehicle with an OBD cable. Turn on your vehicle's ignition. Inside the software, click on "Settings" and then on "Hardware setup". Select "USB/BT (Com-port)" and then click on "Test". The software should detect your diagnostic interface and show its serial number. Click on "Select" and then choose your vehicle's make, model and year from the list. The software will load the appropriate database for your vehicle. Click on "Diagnostics" and then choose a function that you want to perform, such as reading fault codes, displaying live data, performing actuator tests, coding and programming, etc. Follow the instructions on the screen. Congratulations! You have successfully installed and activated Delphi 2014.R2 (Autocom) diagnostics 061ffe29dd
Dispose should not be called on this object. // SPDisposeCheck comment

In the code below, within the using directive for rootweb, the Dispose() method will be called implicitly; is that correct? Or do I need a using directive for this: SPWeb rootweb = site.RootWeb, since I am not using the new keyword? I ask because SPDisposeCheck gives the following comment:

Dispose should not be called on this object. Initial Assignment: rootweb := site.{Microsoft.SharePoint.SPSite}get_RootWeb()

public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
    var site = properties.Feature.Parent as SPSite;
    if (site != null)
    {
        AddTaxonomyField(site);
    }
}

private void AddTaxonomyField(SPSite site)
{
    SPContextManager.Current.LogManager.LogInformationToDevelopers(String.Format("Adding {0} to {1}.", CorCatColumn, site.Url));
    try
    {
        using (SPWeb rootweb = site.RootWeb) // At this line
        {
            if (!rootweb.Fields.ContainsField(CorCatColumn))
            {
                TaxonomySession session = new TaxonomySession(site);
                TermStore ts = session.DefaultSiteCollectionTermStore;
                if (session.TermStores.Count != 0)
                {
                    string taxonomyField = "SomeField";
                    string noteField = "SomeField";
                    rootweb.Fields.AddFieldAsXml(noteField);
                    rootweb.Update();
                    rootweb.Fields.AddFieldAsXml(taxonomyField);
                    rootweb.Update();
                }
            }
        }
    }
    catch (Exception ex)
    {
        SPContextManager.Current.LogManager.LogErrorToDevelopers(ex);
    }
}

Where do you get the SPSite you pass into AddTaxonomyField? @Jussi, I have updated my question; the SPSite comes from the FeatureActivated() method. What are your comments now? Updated the end of my answer.

Do not explicitly (or via using) call Dispose() on the SPSite.RootWeb property. The dispose cleanup will be handled automatically by SharePoint and the .NET framework. For existing SharePoint customizations, removal of the explicit RootWeb Dispose is recommended to avoid an edge case condition where SPContext.Current.Web has equality to SPSite.RootWeb.
Problems can occur when disposing RootWeb obtained from any variation of SPContext (e.g., SPContext.Site.RootWeb, SPContext.Current.Site.RootWeb and GetContextSite(Context).RootWeb). Note the owning SPSite object must be properly disposed (or not disposed, in the case of SPContext). So in your case, just do this:

private void AddTaxonomyField(SPSite site)
{
    SPContextManager.Current.LogManager.LogInformationToDevelopers(String.Format("Adding {0} to {1}.", CorCatColumn, site.Url));
    try
    {
        SPWeb rootweb = site.RootWeb;
        //...

But make sure you dispose of the SPSite you pass into AddTaxonomyField in the function that calls AddTaxonomyField, UNLESS you take it from SPContext. As you get it from properties.Feature.Parent, you should not Dispose it, so I'd just make the change mentioned above. Source and Source

Thanks!!! That removes my confusion and helped me a lot.

The code is wrong. You need to use the using directive only with objects that you created yourself. The code would be correct in this case:

private void AddTaxonomyField(SPSite site)
{
    using (SPWeb web = site.OpenWeb()) // need "using" because you opened the SPWeb yourself
    {
    }
}

This code would dispose of the web object at the end of the using statement. This would be correct as well:

private void AddTaxonomyField(SPSite site)
{
    var rootWeb = site.RootWeb; // no need for "using" since the object is managed by SPSite
}

In your example, you haven't opened a new web; instead you used the SPSite.RootWeb object (and SPSite will take care of that for you). Since SPSite.RootWeb wasn't opened (or created, in simple terms) by you, there is no need to use using or call .Dispose(). I believe this is your third similar question in a row :) Just note the following guidelines and it might help: using = automatic dispose; you don't dispose of something you don't own (create); you can't be sure .OpenWeb() will actually open the root web; it depends on how the SPSite had been created first.
Also, why open it explicitly (which brings a performance cost), when the SPSite may already have opened (and kept) its .RootWeb property? The site.OpenWeb() is just an example, and I am not advising opening the webs :) I just wanted to make it clear that the OP needs the "using" keyword when creating objects himself. Sorry for the confusion. I renamed rootWeb to web, just so it's clearer.
Reference for a particular anecdote about the cultural basis of the ethics of homicide I originally posted this on psychology SE but received no response, so I am cross-posting here. It seems appropriate for philosophy SE because the cultural dependence of ethics is generally a subject of interest to philosophers of ethics. But let me know if there is a more appropriate SE site. I recently had a conversation with a friend about the psychology of ethics, and it reminded me of an anecdote that I read perhaps 10-15 years ago. The anecdote supposedly originates in the anthropology literature, but I couldn't find an anthropology SE, so I am hoping that someone here will have encountered it. I'm looking for either a reference or perhaps a similar anecdote that does have a reference. If the story can be identified, I'm also interested in how credible the story is (i.e., if there is any known criticism or dispute about it). I just want to point out in advance that I have no formal training in psychology or anthropology and that I have only read popular literature on these subjects. I recall picking up a book in a bookstore and reading most of a chapter that was devoted to the idea that most ethical concepts that modern westerners would consider "self-evident" are not universal and are certainly not considered self-evident in many other cultures. One big part of this is the in-group/out-group distinction, where many ethical rules are only considered applicable to the in-group. As an extreme example, the book related the following story, which comes from an anthropologist who studied the Inuit (or perhaps another Arctic people): There was a band of Inuit people who were out hunting in a remote area. Some tragedy occurred, and one member of the band died. (I think the tragedy was natural, not human-caused.) Everyone grieved for several hours. Then, in the night, one of the men stood up and shouted, "Shall we suffer, or shall others suffer!?" 
Four or five of the men got together and went off searching the wilderness for other people. They found a tiny hunting party of strangers, who were all asleep. Then they killed them all. This raised their spirits and they went back to their camp. When questioned, they did not consider this to be ethically problematic at all. (This retelling is based entirely on my faulty memory.) I was deeply skeptical of the story when I first read it, but I was also open-minded enough to take it seriously. It is trivial to find examples throughout history of atrocities that are committed against a dehumanized outside group, which lends credence to the argument that humans generally consider ethical prohibitions to only apply within one's own group. It's also conceivable that one's in-group could be small enough that strangers are automatically excluded. Please let me know if you have read this (or anything similar enough) before and can provide a reference and/or criticism. I finally had some time to look into this again, and I found the source. The most popular source of this seems to be Ruth Benedict's paper in J. General Psychology, 10, pp. 59-80 (1934). Long excerpts from it can be viewed without a paywall here. I think that the original source is Franz Boas's anthropological work on the Kwakiutl (one group of Kwakwaka'wakw people) from roughly 1895-1920. For example, he published The Kwakiutl of Vancouver Island in 1909. Here is Benedict's retelling of the story: Among the Kwakiutl it did not matter whether a relative had died in bed of disease, or by the hand of an enemy; in either case death was an affront to be wiped out by the death of another person. The fact that one had been caused to mourn was proof that one had been put upon. A chief's sister and her daughter had gone up to Victoria, and either because they drank bad whiskey or because their boat capsized they never came back. The chief called together his warriors. "Now, I ask you, tribes, who shall wail?
Shall I do it or shall another?” The spokesman answered, of course, “Not you, Chief. Let some other of the tribes.” Immediately they set up the war pole to announce their intention of wiping out the injury, and gathered a war party. They set out, and found seven men and two children asleep and killed them. “Then they felt good when they arrived at Sebaa in the evening.” (My memory was pretty good, except for mixing up the Kwakiutl with the Inuit.) For some context, there is a passage from Benedict's Patterns of Culture (available here) describing why this is true in Kwakiutl culture. (As in my question, I don't take this completely at face value, but it does seem to be accepted by the anthropological community.) The Kwakiutl recognized only one gamut of emotion, that which swings between victory and shame. ... The Northwest Coast carries out this same pattern of behavior also in relation to the external world and the forces of nature. All accidents were occasions upon which one was shamed. A man whose axe slipped so that his foot was injured had immediately to wipe out the shame which had been put upon him. A man whose canoe had capsized had similarly to ‘wipe his body’ of the insult. People must at all costs be prevented from laughing at the incident. The universal means to which they resorted was, of course, the distribution of property. It removed the shame; that is, it reestablished again the sentiment of superiority which their culture associated with potlatching. All minor accidents were dealt with in this way. The greater ones might involve giving a winter ceremonial, or head-hunting, or suicide. ... The great event which was dealt with in these terms was death. Mourning on the Northwest Coast cannot be understood except through the knowledge of the peculiar arc of behavior which this culture institutionalized. 
Death was the paramount affront they recognized, and it was met as they met any major accident, by distribution and destruction of property, by head-hunting, and by suicide. They took recognized means, that is, to wipe out the shame. (Here, "head-hunting" refers to what modern westerners would call "murdering random people," as in the story above.) It is trivial to find examples throughout history of atrocities that are committed against a dehumanized outside group, which lends credence to the argument that humans generally consider ethical prohibitions to only apply within one's own group. It is not so trivial to find philosophers arguing like this. A philosopher would need to avoid the word "atrocities" when referring to such actions, as that label would already indicate an ethical prohibition. Examples would require outside groups to be discriminated against on a formal level, such as by racism, sexism, or religion. However, I am not aware of a famous philosopher arguing in general that actions that would be immoral inside one group can be moral when done to someone from any other group. In general, symmetry or the "golden rule" is an important feature in ethics, meaning ethics philosophers seek principles that would also make it immoral for members of outside groups to act against the inside group. Symmetry, however, breaks down in very harsh conditions where each party "fights for their own", meaning one's own survival is impossible without someone else's sacrifice, for all parties involved. Possibly that can help explain the sample culture from the question. In general, though, humankind needs to train members of society in ethics, because there is no strong natural ethical sense. Humans without ethical socialisation from parents or schools will behave less ethically, including having low inhibitions for bloodshed. I am not sure whether any philosopher has developed specific ethics for very harsh situations, though the trolley problem has many parallels.
But overall, in such harsh conditions, humans are not likely to think much about philosophy. 1. Sure, I am not a philosopher. 2. The word "atrocities" is spoken from my personal perspective as a modern liberal westerner. 3. You have entirely missed the point. I'm not arguing that there are well-known systems of modern philosophical ethics that allow you to commit acts of violence against outsiders. I'm arguing that it is plausible that the behavior of most humans, using whatever ethical heuristics our brains and/or cultures endow us with, frequently applies ethics differently to in-groups and out-groups. The Stanford prison experiment is one famous example. If you want to talk about brains or culture, choose a psychology forum. I give an answer about philosophy because this is a philosophy forum, and ethics is the study of such problems, not simply following one's brain and culture. I did not miss your point; I intentionally only answered the part that may be on-topic here. I would assert that the overwhelming majority of philosophers who study ethics think that empirical human behavior is at least relevant to the study of ethics. I also think that it's relevant that most modern westerners have cultural or philosophical ideas about ethics that are wildly at odds with those of the Kwakiutl, according to the passages I cited. If you aren't interested in my question, just don't post an answer, rather than deliberately posting an answer to a different question. You probably should have stopped reading after the second and third sentences: "It seems appropriate for philosophy SE because the cultural dependence of ethics is generally a subject of interest to philosophers of ethics. But let me know if there is a more appropriate SE site." My answer is not for you personally, but for anyone visiting this site. Other people might prefer my answer to your question. The stackexchange guidance on answers says: "Read the question carefully.
What, specifically, is the question asking for? Make sure your answer provides that – or a viable alternative." I asked for a reference on this incident and/or any academic criticism of whether it is true or not. Your answer goes off on a tangent unrelated to the question. In any case, I'm going to stop responding after this point. This sounds like an issue of law rather than a question of ethics: when a group agrees on some antisocial action, it is a group dynamic but unlikely to be a group ethic, i.e. a carefully worked-out system of universal principles. Social norms are not really ethical norms, and if arguments are offered to justify this or that killing, they usually involve practical considerations rather than ethical ones. Ethics is usually an individual orientation rather than a collective possibility. Norms, rules and law are for the collective. Culture is a collective image. robin, I think that the overwhelming majority of philosophers would disagree with the assertion that the question of whether it is okay to kill strangers is purely legal rather than ethical. By this standard, most of the influential philosophers of western history (e.g. Plato, Spinoza, Kant, ...) would be classified as legal scholars rather than philosophers. More importantly, you didn't answer the questions, i.e., do you have a reference for this incident or any academic criticism of the story?
The following people constitute the Editorial Board of Academic Editors for PeerJ Computer Science. These active academics are the Editors who seek peer reviewers, evaluate their responses, and make editorial decisions on each submission to the journal. Learn more about becoming an Editor. Takayuki Kanda is a Group Leader at ATR Intelligent Robotics and Communication Laboratories, Kyoto, Japan. He is one of the starting members of the Communication Robots project at ATR. He has developed a communication robot, Robovie, and applied it in daily situations, such as a peer tutor at an elementary school and a museum exhibit guide. His research interests include human-robot interaction, interactive humanoid robots, and field trials. Ian Taylor is a Reader at Cardiff University. He has a strong track record specialising in the workflow and data distribution areas, with applications in audio, astrophysics and healthcare. Ian wrote a professional distributed computing book, now in its 2nd edition (sold 2000+ copies), and was lead editor for “Workflows for eScience”. Ian has guest edited for the Journal of Grid Computing and co-chaired the OGF Workflow Management Research Group. He has published 110 papers. Educator, Researcher, and Entrepreneur. Founding Director - AI Institute, NCR Professor, and Professor of Computer Science & Engineering, University of South Carolina. Earlier, LexisNexis Ohio Eminent Scholar and Executive Director, Ohio Center of Excellence in Knowledge-enabled Computing (Kno.e.sis) at Wright State University. Elected Fellow of IEEE, AAAS, AAAI, ACM, and AAIA. Working towards a vision of Computing for Human Experience. His recent work has focused on knowledge-infused learning and neuro-symbolic AI, semantic-cognitive-perceptual computing, and semantics-empowered Physical-Cyber-Social computing. He coined the terms Smart Data, Semantic Sensor Web, Semantic Perception, Citizen Sensing, etc.
He has (co-)founded four companies, including the first Semantic Search company in 1999, which pioneered technology similar to what is found today in Google Semantic Search and Knowledge Graph; ezDI, which developed knowledge-infused clinical NLP/NLU; and Cognovi Labs, at the intersection of emotion and AI. He is particularly proud of the success of his >45 Ph.D. advisees and postdocs. Kristina Lerman is a Project Leader at the Information Sciences Institute and holds a joint appointment as a Research Associate Professor in the USC Viterbi School of Engineering's Computer Science Department. Her research focuses on applying network-based and machine learning methods to problems in social data analysis and social computing. Rebecca Wright is a professor in the Computer Science Department and Director of DIMACS at Rutgers. Her research spans the area of information security, including cryptography, privacy, foundations of computer security, and fault-tolerant distributed computing, as well as foundations of networking. She is a member of the board of the Computing Research Association's Committee on the Status of Women in Computing Research (CRA-W). Silvia Bartolucci is a lecturer in the Department of Computer Science within the Financial Computing and Analytics Group. Prior to joining UCL, Silvia worked at Imperial College Business School as a Research Associate in the Department of Finance within the Centre for Financial Technology. Silvia has a background in Theoretical Physics from Sapienza University in Rome and holds a Ph.D. in Applied Mathematics from King's College London. Olga De Troyer has a Master's degree in Mathematics and a Ph.D. in Computer Sciences. She has held research positions in industry and at universities. Since 1998 she has been a professor in the Computer Science Department of the Vrije Universiteit Brussel (Belgium), where she is co-director of the WISE research lab. Her research focus is on conceptual modeling formalisms and design methodologies.
Over the years, the focus has moved from databases over Web systems towards Virtual Reality and Serious Games. Julio Rozas is Full Professor of Genetics at the Universitat de Barcelona (Spain), a member of the executive committee of the Institut de Recerca de la Biodiversitat (IRBio-UB), and an ICREA Academia Researcher. He is a past postdoctoral fellow at Harvard University. Juergen Gall obtained a Ph.D. in computer science from Saarland University and the Max Planck Institut für Informatik in 2009. He was a postdoctoral researcher at the Computer Vision Laboratory, ETH Zurich, from 2009 until 2012 and a senior research scientist at the Max Planck Institute for Intelligent Systems in Tübingen from 2012 until 2013. Since 2013, he has been a professor at the University of Bonn and head of the Computer Vision Group. Dr. Carolyn Talcott is a Program Director in CSL and an SRI Fellow. She has PhDs in Chemistry and Computer Science. She leads the Symbolic Systems Technology and Pathway Logic groups. She has over 25 years' experience in formal modeling and analysis. At SRI, Dr. Talcott is leading research in symbolic systems biology, security protocol analysis, and formal analysis applied to embedded systems and next-generation networks. Elad Michael Schiller received his M.Sc. and B.Sc. in Mathematics and Computer Science from Ben-Gurion University of the Negev, Israel, and a Ph.D. in Computer Science from the same university. His research excellence has been acknowledged by several highly competitive research fellowships from the Israeli and Swedish governments. He is now an associate professor in the Department of Computer Science and Engineering at Chalmers University of Technology. Elad has published in top-tier venues (including PODC, DISC, OPODIS, SPAA, SRDS, ICDCN, IEEE-TMC, IEEE-TPDS and Acta Inf.). He has co-authored more than 50 conference and journal papers.
He served on the program committees for several international conferences, including PODC, DISC, SSS, ICDCN and AlgoSensors. His research interests include distributed computing, with special emphasis on self-stabilizing algorithms, wireless communications and the application of game theory to distributed systems.
Gaze Over IP, 2011 - My first NAO application, allowing me to control him remotely. A web interface streams what he is looking at and hearing; he looks where we click, says what we type and performs gestures matching emoticons. 2011, DARPA Shredder Challenge - The contest goal was to reconstruct five shredded documents. Piece segmentation and feature extraction in a graphical interface allowed reconstructing the documents. Game development is traditionally reserved for a few studios; everything is designed to keep hobbyists away. I enjoyed the simple idea of hijacking such a protected system. In essence, the DS is an excellent human-computer interaction device, inexpensive and nearly indestructible. So I had an obvious application to take the plunge: using it to control a robot. |2010 – Flash carts are used almost exclusively to play pirated games, so manufacturers don’t bother to make homebrew software development easier. That’s why I wanted to make a developer-oriented flash cart, allowing the use of a debugger and facilitating the hardware interface with other systems. The unencrypted parts of the cartridge protocol work correctly on an FPGA (Basys board from Digilent). The encrypted parts only work in simulation, since I didn’t manage to fit the algorithm in the FPGA. It should be possible, but would need too much time now.| |2010 – Making a shield to connect a DS cartridge to the Arduino. It allows sending a cartridge’s programs and savegames to a computer. It can be used to play a game on an emulator, and even run your own program on the DS. However, this board was developed to test the homebrew cartridge shown above.| |2009 – First application: a simple simulator and a gamepad for our 2009 robot. In the first mode, the goal was to quickly test some situations for pathfinding and avoidance. The user plays the opponent with a stylus while the real strategy code simulates our robot.
The second mode allowed receiving and sending CAN messages via the DS SPI port, to control the real robot.| 2010, UdeS, Neurophysiology - Introduction to neuroscience, oriented towards perception and brain-computer interfaces. During this course, I reproduced a simulation of place cells. These cells are notably found in rats; they fire only when the animal is at a precise place. The program was developed with MATLAB. Reports and code. |2010, UdeS/Robotics - Simulating a Roomba with Player/Stage, then adding path planning for more efficient room coverage. Made with Arnaud A. and Renaud B.| |2009, UdeS, Robotics - A project preceding the masters. I had to integrate a global map correction into a SLAM algorithm developed at the lab (Kd-ICP). The TORO library has been used.| |2009, UdeS, Artificial Intelligence - The goal was to detect a person's fall with accelerometers. A dataset was provided for several people and many situations, not only falls. We had to compare different classification methods (Bayes, neural networks, fuzzy logic). Made with Arnaud A., Frédéric S., Thomas T.| |2009, ESEO, Programmable logic - Real-time beacon recognition, developed in VHDL. Image capture, display on a VGA screen, and everything in between.| |2008, ESEO, Power electronics - Making a class D amplifier with two MOSFETs working in switching mode. Made with Guillaume M.| |2008, ESEO, Java - Development of a vector graphics editor. Learned to use Swing and the best-known design patterns. Made with Antoine D.| |2008, ESEO, Electronics - Infrared audio transmission. Made with Antoine D.| |2008, ESEO, Microprocessor - Developing a snake game in ARM assembly, from cross compilation to terminal display, through the serial driver, etc. One of my favourite projects ever; I often dig it up. Made with Benjamin P.| |2008, ESEO, MATLAB - Implementing a “seam carving” algorithm with MATLAB. It is a method to resize images without shrinking the most important elements.
This video explains how it works.| |2008, ESEO, C - Developing a chess game in C, with an ncurses interface!| |2007, ESEO, TIPE - A correction-tape mouse turned into a curvimeter. Mixing electronics and DIY, it was surprisingly accurate! Made with Antoine D. and Samuel P.| |2007, ESEO, Maths (yes :p) - Comparing some pathfinding algorithms. Coded in C, with a GTK interface. Made with Frédéric S. and Marc P.| |2007, ESEO, TIPE - A programmable ball launcher! Made with Antoine D. and David L.|
Differences in Heroku behavior

I'm trying to deploy a new staging environment with the buildpack. However, it is not working: it throws the error "Selenium::WebDriver::Error::UnknownError (invalid argument: can't kill an exited process)", even though I have the same configuration on both Heroku machines. They have the same Firefox, Ubuntu and Geckodriver versions.

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.4 LTS
Release: 18.04
Codename: bionic

Looking at the build log, I noticed some differences in the libraries installed (see here highlighted differences). Why are they not installing the same libraries? How can we be sure to install all of them correctly? I was able to fix the problem by installing a buildpack (heroku buildpacks:add --index 1 heroku-community/apt) and creating an Aptfile that includes the following libraries: libcairo2 libcairo-gobject2 libxt6 libsm6 libice6 libgtk-3-0. Hi @arsandov, I'm running into the same error, and I was hoping to confirm whether it was the same issue you had. In your highlighted differences, is the left side your original production environment, and the right side your new staging environment? Appreciate it if you're able to confirm :) Hey @RishiHQ! I'm happy to help you. Actually, the left side is production. The solution used to install the buildpack was the same as above. However, at the moment I wrote this issue we didn't notice that, because it was one of dozens of things we tried on production until it worked, and even after removing the "heroku-community/apt" buildpack and the "Aptfile", new deployments with "heroku-integrated-firefox-geckodriver" (left side) kept installing these dependencies without our knowledge. @RishiHQ I will update my answer.
We set up a new environment and again faced some issues. Now I have found the source of the problem: this buildpack doesn't install Firefox's dependencies. I noticed this looking at https://github.com/thopd88/heroku-buildpack-firefox/blob/master/bin/compile#L59. So, adding the complete list to the Aptfile that is used by heroku buildpacks:add --index 1 heroku-community/apt fixed the issue. Here is the list of dependencies I added: libappindicator1 libasound2 libatk1.0-0 libatk-bridge2.0-0 libcairo-gobject2 libgconf-2-4 libgtk-3-0 libice6 libnspr4 libnss3 libsm6 libx11-xcb1 libxcomposite1 libxcursor1 libxdamage1 libxfixes3 libxi6 libxinerama1 libxrandr2 libxss1 libxt6 libxtst6 fonts-liberation. Hi @arsandov, I would like to thank you from the bottom of my heart. For some reason, I lost this thread, but on re-discovering it (with on-and-off debugging for 2 months), I have gotten this to work! Thank you! @arsandov thanks for posting the issue. The latest compile includes the relevant libraries needed. You can avoid installing multiple buildpacks and keep your Heroku slug size to a minimum.
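Concretely, the workaround described in this thread amounts to two pieces of configuration. This is only a sketch of the steps quoted above (the package list is the one posted by @arsandov; run the heroku command from your app's directory):

```shell
# 1. Put the apt buildpack in front of the existing buildpacks (run once per app):
heroku buildpacks:add --index 1 heroku-community/apt

# 2. Commit an Aptfile at the repository root, one package per line:
cat > Aptfile <<'EOF'
libappindicator1
libasound2
libatk1.0-0
libatk-bridge2.0-0
libcairo-gobject2
libgconf-2-4
libgtk-3-0
libice6
libnspr4
libnss3
libsm6
libx11-xcb1
libxcomposite1
libxcursor1
libxdamage1
libxfixes3
libxi6
libxinerama1
libxrandr2
libxss1
libxt6
libxtst6
fonts-liberation
EOF
```

On the next deploy, the apt buildpack installs everything listed in the Aptfile before the Firefox/geckodriver buildpack runs.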
/*global beforeAll */
import type from 'of-type';

beforeAll(function () {
  const types = [
    { type: 'String', name: 'string', value: 'hello world' },
    { type: 'Number', name: 'integer', value: 10 },
    { type: 'Number', name: '-integer', value: -10 },
    { type: 'Number', name: 'decimal', value: 5.5 },
    { type: 'Number', name: '-decimal', value: -5.5 },
    { type: 'Number', name: 'NaN', value: NaN },
    { type: 'Number', name: 'infinity', value: Infinity },
    { type: 'Number', name: '-infinity', value: -Infinity },
    { type: 'Boolean', name: 'boolean', value: true },
    { type: 'Date', name: 'date', value: new Date() },
    { type: 'Function', name: 'function', value: function () { } },
    { type: 'null', name: 'null', value: null },
    { type: 'undefined', name: 'undefined', value: undefined },
    { type: 'Object', name: 'object', value: {} },
    { type: 'Array', name: 'array', value: [] },
    { type: 'RegExp', name: 'regExp', value: /hello/g },
    { type: 'Names', name: 'instance', value: new (class Names { })() },
    { type: 'HTMLDivElement', name: 'div', value: document.createElement('DIV') }
  ];

  this.loop = ({ only, except, values, callback }) => {
    if (type(only, Array) && type(except, Array)) {
      throw new Error('Invalid loop jasmine helper. Either ["only"] or ["except"] property can be defined at once.');
    }
    const isValuesDefined = type(values, Array);
    const isOnlyDefined = type(only, Array);
    const isExceptDefined = type(except, Array);
    if (isValuesDefined) {
      values.forEach((value) => callback(value));
    } else {
      types.forEach(({ type, name, value }) => {
        if (isOnlyDefined) {
          if (only.some((onlyValue) => onlyValue === name)) callback(value, type);
        } else if (isExceptDefined) {
          if (!except.some((exceptValue) => exceptValue === name)) callback(value, type);
        } else {
          callback(value);
        }
      });
    }
  };
});
[Warning-Duplicati.Library.Main.Operation.BackupHandler-SnapshotFailed]: Failed to create a snapshot: Alphaleonis.Win32.Vss.VssObjectNotFoundException: The requested object does not exist

I have recently started getting this error. Snapshots have worked before, and I am not sure what caused it; maybe it was an automated update of Windows? I tried fixing it by installing an available update of Duplicati, installing the VC++ 2015 redistributable, and installing the latest version downloaded from the Duplicati website. The problem is still present. What can I do? This was discussed recently - look it up using forum search. So far we know that it seems intermittent and does not seem to cause any real problems. I am not sure if anyone has reported it as a bug yet. Are you seeing any VSS errors in your Windows Event Viewer? VSS has a habit of locking up. I have found this with other backup programs with intermittent VSS failures. From an admin cmd prompt, do a vssadmin list writers and check if any are in an error state or not responding. The link below explains better. You can also reboot, but do a full restart (hold the Shift key while clicking Shutdown), as the normal Win10 hybrid-sleep restart won't fix the writers. 36200: Acronis Backup: VSS Troubleshooting Guide has other ideas which might be adaptable… I’ve had some intermittent VSS errors (not this one) that began after the update to Windows 10 1909. The Duplicati version didn’t change. I never figured the error out. I haven’t pursued it heavily though… Thank you all for your input. I did find the other post prior to posting; that’s where I got the idea of installing the C++ redist. package. I don’t see any VSS errors in the Windows log at the time of the error message in Duplicati (not sure what they would look like though). vssadmin list writers returns all as state: stable, last error: no error. I did a full restart anyway, with no result. I installed the 2015-2019 C++ redist. package from Download Visual Studio 2019 for Windows & Mac other tools and frameworks; it did not resolve the problem.
Acronis VSS Doctor did find a misconfigured VSS shadow storage, but fixing that has not resolved the problem. It does cause files to be missed from the backup: Outlook PST files when Outlook was open during the backup, for example. Every run of the backup produces a warning, which is bad, as I am getting into the habit of ignoring the warnings. I believe they are in the “Application” section of Event Viewer. Look for events with “VSS” as the source.
Another key feature of Delphi is its support for exceptions. Exceptions make programs more robust by providing a standard way for handling errors. At run time, Delphi libraries raise exceptions when something goes wrong (in the run-time code, in a component, or in the operating system). From the point in the code at which it is raised, the exception is passed to its calling code, and so on. Ultimately, if no part of your code handles the exception, the VCL handles it, by displaying a standard error message and then trying to continue the program. The whole mechanism is based on four keywords:

try Delimits the beginning of a protected block of code.

except Delimits the end of a protected block of code and introduces the exception-handling statements.

finally Specifies blocks of code that must always be executed, even when exceptions occur. This block is generally used to perform cleanup operations that should always be executed, such as closing files or database tables, freeing objects, and releasing memory and other resources acquired in the same program block.

raise Generates an exception. Most exceptions you'll encounter in your Delphi programming will be generated by the system, but you can also raise exceptions in your own code when it discovers invalid or inconsistent data at run time. The raise keyword can also be used inside a handler to re-raise an exception; that is, to propagate it to the next handler.

Exception handling is no substitute for proper control flow within a program. Keep using if statements to test user input and other foreseeable error conditions. You should use exceptions only for abnormal or unexpected situations. The power of exceptions in Delphi. Consider this code:

Screen.Cursor := crHourglass;
// long algorithm...
Screen.Cursor := crDefault;

In case there is an error in the algorithm (as I've included on purpose in the TryFinally example's event handlers), the program will break, but it won't reset the default cursor.
This is what a try / finally block is for:

Screen.Cursor := crHourglass;
try
  // long algorithm...
finally
  Screen.Cursor := crDefault;
end;

When the program executes this function, it always resets the cursor, regardless of whether an exception (of any kind) is raised. This code doesn't handle the exception; it merely makes the program robust in case an exception is raised. A try block can be followed by either an except or a finally statement, but not both of them at the same time; so, if you want to also handle the exception, the typical solution is to use two nested try blocks. You associate the internal block with a finally statement and the external block with an except statement, or vice versa as the situation requires. Here is the skeleton of the code for the third button in the TryFinally example:

Screen.Cursor := crHourglass;
try
  try
    // long algorithm...
  finally
    Screen.Cursor := crDefault;
  end;
except
  on E: EDivByZero do ...
end;

Every time you have cleanup code that must run no matter what, you should protect it with a finally block. Handling the exception is generally much less important than using finally blocks, because Delphi can survive most exceptions. Too many exception-handling blocks in your code probably indicate a misuse of exceptions. In the exception-handling statements shown earlier, you caught the EDivByZero exception, which is defined by Delphi's RTL. Other such exceptions refer to run-time problems (such as a wrong dynamic cast), Windows resource problems (such as out-of-memory errors), or component errors (such as a wrong index).
Programmers can also define their own exceptions; you can create a new inherited class of the default exception class or one of its inherited classes:

type
  EArrayFull = class (Exception);

When you add a new element to an array that is already full (probably because of an error in the logic of the program), you can raise the corresponding exception by creating an object of this class:

if MyArray.Full then
  raise EArrayFull.Create ('Array full');

This Create constructor (inherited from the Exception class) has a string parameter to describe the exception to the user. You don't need to worry about destroying the object you have created for the exception, because it will be deleted automatically by the exception-handler mechanism. The code presented in the previous excerpts is part of a sample program called Exception1. Some of the routines have been slightly modified, as in the following DivideTwicePlusOne function:

function DivideTwicePlusOne (A, B: Integer): Integer;
begin
  try
    // error if B equals 0
    Result := A div B;
    // do something else... skip if exception is raised
    Result := Result div B;
    Result := Result + 1;
  except
    on EDivByZero do
    begin
      Result := 0;
      MessageDlg ('Divide by zero corrected.', mtError, [mbOK], 0);
    end;
    on E: Exception do
    begin
      Result := 0;
      MessageDlg (E.Message, mtError, [mbOK], 0);
    end;
  end; // end except
end;

When you start a program from the Delphi environment (for example, by pressing the F9 key), you'll generally run it within the debugger. When an exception is raised, the debugger stops the program execution by default. In the case of the Exception1 test program, however, this behavior will confuse a programmer not well aware of how Delphi's debugger works.
Even if the code is prepared to properly handle the exception, the debugger will stop the program execution at the source code line that raised it. If you just want to let the program run when the exception is properly handled, run the program from Windows Explorer, or temporarily disable the Stop on Delphi Exceptions option in the Language Exceptions page of the Debugger Options dialog box (activated by the Tools > Debugger Options command); as an alternative, you can also disable the integrated debugger. In the Exception1 code, there are two different exception handlers after the same try block. You can have any number of these handlers, which are evaluated in sequence. Using a hierarchy of exceptions, a handler is also called for the inherited classes of the type it refers to. For this reason, you need to place the broader handlers (the handlers of the base exception classes) after the more specific ones. Another important element of the previous code is the use of the exception object in the handler (see on E: Exception do). The reference E of class Exception refers to the exception object passed by the raise statement. When you work with exceptions, remember this rule: you raise an exception by creating an object and handle it by indicating its type. This has an important benefit, because as you have seen, when you handle a type of exception, you are really handling exceptions of the type you specify as well as any descendant type.
What is the memory utilization for the "import" and "from import" scenarios in Python? I am trying to understand the memory utilization in the following module import cases. Let there be a module called myfile.py containing:

# myfile.py
title = 'Hello World'

Case 1:
import myfile
myfile.title

Case 2:
from myfile import title
title

Thank you! Importing the module doesn't waste anything; the module is always fully imported (into the sys.modules mapping), so whether you use import myfile or from myfile import title makes no odds. The only difference between the two statements is what name is bound; import myfile binds the name myfile to the module (so myfile -> sys.modules['myfile']), while from myfile import title binds a different name, title, pointing straight at the attribute contained inside of the module (so title -> sys.modules['myfile'].title). The rest of the myfile module is still there, whether you use anything else from the module or not. There is also no performance difference between the two approaches. Yes, myfile.title has to look up two things; it has to look up myfile in your global namespace (finds the module), then look up the attribute title. And yes, by using from myfile import title you can skip the attribute lookup, since you already have a direct reference to the attribute. But the import statement still has to do that work; it looks up the same attribute when importing, and you'll only ever need to look up title once. If you had to use title thousands of times in a loop it could perhaps make a difference, but in this specific case it really does not. The choice between one or the other, then, should be based on coding style instead. In a large module, I'd certainly use import myfile; code documentation matters, and using myfile.title somewhere in a large module makes it much clearer what you are referring to than just title ever would.
If the only place you use title is in a 'main' block to call a main() function, by all means use from myfile import title if you feel happier about that:

if __name__ == '__main__':
    from myfile import title
    main(title)

I'd still use import myfile there myself. All things being equal (and they are, exactly, in terms of performance and number of characters used to write it), that is just easier on the eye for me. If you are importing something else altogether, then perhaps performance comes into play. But only if you use a specific name in a module many times over, in a critical loop for example. But then creating a local name (within a function) is going to be faster still:

import myfile

def title():
    localname = somemodule.somefunctionorother
    while test:  # huge, critical loop
        foo = localname(bar)

Thank you for such splendid elaboration & explanation! Could you please have a look at this: http://stackoverflow.com/questions/21965009/does-reloading-a-module-changes-the-names-in-the-module-previously-imported-relo Thank you! @user2961121 - I added an answer with more detail and examples. It's essentially identical. In both cases, the entire myfile module needs to be loaded, no matter what parts you use. Neither version will create unnecessary copies either, so if myfile.title is something huge, neither version will fill your memory with copies of myfile.title. So both the cases copy the whole module into the file that is importing? @user2961121: Neither case makes a copy. No matter how many modules import myfile within a single program, only one myfile module will be loaded for all modules to share. Ok, then what is the performance implication of qualifying the module name with the attribute name to fetch it (Case 1) versus directly fetching it (Case 2)? I am sorry if the questions are stupid, I am a novice :( @user2961121: Essentially none. It is very slightly faster to refer to title than myfile.title, but almost never by enough to make a meaningful difference.
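The sharing described in this thread is easy to check directly. The snippet below is a minimal sketch: instead of an actual myfile.py on disk, it registers a stand-in module in sys.modules (assuming myfile.py would just define title = 'Hello World'), so it is self-contained:

```python
import sys
import types

# Stand-in for myfile.py, assumed to contain: title = 'Hello World'
myfile_stub = types.ModuleType('myfile')
myfile_stub.title = 'Hello World'
sys.modules['myfile'] = myfile_stub

import myfile                # binds the name "myfile" to the cached module
from myfile import title     # binds the name "title" to the module attribute

# Both statements loaded (or reused) one and the same module object:
assert myfile is sys.modules['myfile']

# "title" is not a copy; it references the very same attribute object:
assert title is myfile.title
```

Because the import system checks sys.modules first, no matter how many times or in how many forms you import myfile, only that single module object exists.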
How does a server validate the Certificate Verify message in SSL/TLS? Client authentication may be used in an SSL/TLS negotiation. For this, the client will send a CertificateVerify after the server requested it. The CertificateVerify message contains the client certificate that will be verified by the server. How does the server verify that the client certificate (containing the client public key) is legitimate? In the same way as any other entity verifies any other certificate in PKI. It checks whether the certification path is signed with a trusted signing certificate. What exactly are you asking about? Actually the client sends three messages: Certificate contains its cert, with chain cert(s) if applicable, which it usually is; ClientKeyExchange; and CertificateVerify contains a signature of the transcript so far using the client's private key. The cert itself is verified in the standard X.509/PKIX way, and the CertificateVerify is verified using the key in the cert. See the RFCs and/or Wikipedia. The server has some roots of trust which it uses, or, depending on the application, it may have a CA's cert, or just that client's cert, pinned. Anyway, it either goes through its trust store and checks if the client cert is signed by something in its store, or, if it's pinned, it will just check against the one CA or cert it is configured to check with. The handshake part of the TLS 1.3 protocol has three goals: exchange certificates; let the server confirm that the client really has the secret key associated with the provided public certificate, without exchanging the secret key; exchange ephemeral keys. Part 1 - Trust of certificate. The client sends its certificate with the Certificate message. The server determines if the certificate is from a trusted source.
It verifies the signature of the client's certificate, then the signature of each intermediate certificate, until it finds a trusted certificate, either from a server-side list of trusted certificates, or from a trusted certificate authority (CA). Pseudo-code: Alice (client) sends her public certificate to Bob (server) as well as the certificate chain. Bob hashes the certificate. Bob decrypts the certificate's signature using the upper-level certificate in the chain. Bob compares the two results; if they match, Bob has the proof that the certificate was really signed using the upper-level certificate. Bob continues through the chain (steps 2, 3, 4) until he finds a trusted certificate. Part 2 - Trust of client. The client sends the Certificate Verify message:

struct {
  SignatureScheme algorithm;
  opaque signature<0..2^16-1>;
} CertificateVerify;

The signature scheme tells the hash function used and the signature algorithm. The signature is produced by the client and verified by the server. The data actually signed is known by both client and server and thus not re-sent (it's spaces, a context string, a zero byte and the previous messages). Pseudo-code: Alice (client) generates an asymmetric key pair. A trusted authority signs her public key, producing a public certificate. Alice hashes the data. Alice encrypts the hash using her private key, producing the signature. Bob (server) knows, from a previous message, Alice's public certificate and the certificate chain. Alice sends to Bob: the signature, the hash function and the signature algorithm. Bob hashes the data. Bob decrypts the signature using Alice's public certificate. Bob compares the two results; if they match, Bob has the proof that the signature is associated with the data and that Alice's private key generated the signature. Now, Alice must keep her key secret, and the data must vary between requests to prevent Eve from replaying the request with the same data and same signature. I hope it helps you to better understand.
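The sign/verify round trip in Part 2 can be sketched with a toy RSA key pair. This is purely illustrative: the textbook primes 61 and 53 are hopelessly insecure, reducing the hash modulo n stands in for real padding, and actual TLS signs the handshake transcript with schemes like RSA-PSS or ECDSA. Only the shape of the computation (private-key operation by Alice, public-key check by Bob) matches the pseudo-code above:

```python
import hashlib

# Toy RSA parameters (textbook example -- never use in practice).
p, q = 61, 53
n = p * q          # modulus, 3233
e = 17             # Alice's public exponent
d = 2753           # Alice's private exponent (e*d == 1 mod lcm(p-1, q-1))

def transcript_hash(data: bytes) -> int:
    # Hash the handshake transcript, reduced into the modulus range
    # (a real scheme pads the hash instead of reducing it).
    return int.from_bytes(hashlib.sha256(data).digest(), 'big') % n

def sign(data: bytes) -> int:
    # Alice: apply her private key to the hash (the CertificateVerify signature).
    return pow(transcript_hash(data), d, n)

def verify(data: bytes, signature: int) -> bool:
    # Bob: apply Alice's public key and compare against his own hash of the data.
    return pow(signature, e, n) == transcript_hash(data)

transcript = b'client hello...server hello...certificate...'
sig = sign(transcript)
assert verify(transcript, sig)                 # a genuine signature checks out
assert not verify(transcript, (sig + 1) % n)   # a forged signature fails
```

Since only Alice knows d, a valid signature over fresh transcript data is Bob's proof that the peer holds the private key matching the certificate.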
References: http://www.garykessler.net/library/crypto.html#why3 https://tlswg.github.io/tls13-spec/draft-ietf-tls-tls13.html https://nodejs.org/api/crypto.html#crypto_class_sign https://www.tutorialspoint.com/cryptography/cryptography_digital_signatures.htm Great explanation of signature verification and as it applies to SSL/TLS, thank you. :) I realize the question could've been worded better, but the OP's question title is about "validation" (not verification); also in the description: "how does the server verify that the client certificate is legitimate?". Thank you @Sas3. I updated my answer to better explain validation and verification. It helped me to solve the general question: how can Alice give a proof of possession without giving the secret to Bob?
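To complement the pseudo-code for Part 1, here is a minimal structural sketch of the chain walk. The certificate records and the trust store contents are made up for illustration, and the per-hop signature check that real code performs is reduced to a comment.

```python
# Hypothetical trust store: the server trusts anything issued by "RootCA".
TRUSTED_ISSUERS = {"RootCA"}

def chain_is_trusted(chain):
    """Walk leaf -> intermediates until an issuer is found in the trust store.

    Each element stands in for an X.509 certificate; real code would also
    verify each certificate's signature against its issuer's public key here.
    """
    for cert in chain:
        if cert["issuer"] in TRUSTED_ISSUERS:
            return True
    return False

# Leaf certificate first, then the intermediate that issued it.
chain = [
    {"subject": "alice", "issuer": "IntermediateCA"},
    {"subject": "IntermediateCA", "issuer": "RootCA"},
]
print(chain_is_trusted(chain))                                         # True
print(chain_is_trusted([{"subject": "mallory", "issuer": "EvilCA"}]))  # False
```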
STACK_EXCHANGE
My Vampire System – Chapter 1156: A special gift. For MVS fan art and updates follow jksmanga on Instagram and Facebook. Lastly the audience was out. Naturally Quinn was only speculating, but other than that he didn’t really know what they were aiming to do. Potentially Logan could have been capable of drop some lighting after he acquired acquired additional information. Once on the inside, Logan quickly found the primary tier beast which was becoming experimented on, showing him that he was indeed during the right bedroom. During his time listed here, he didn’t desire to abandon any natural stone unturned, but he thought if his time was limited it may be greatest to reach it is important primary. Sooner or later, he achieved the vicinity where the said monster was intended to be. Using his spiders they could squeeze via the smaller gap in the bottom. Not surprisingly Quinn was just guessing, but besides that he didn’t really realize what these were trying to do. Possibly Logan could have been in the position to drop some light the moment he acquired secured more details. ‘Still if you can, it could be wonderful to seize this monster all at once, and hopefully you can use its system straight away. You will have the issue to discuss using the Earthborn group, but when at that time a new Superior commander has become preferred, we will maybe use Sach to acquire them to endure decrease.’ ‘Are all the entrances bolstered on account of what’s in?’ Logan been curious about, remembering what Quinn acquired informed him about his nightly escapade. He went around seeking the specific door that Quinn had moved into from before. ‘Are they attempting to make another Demi-G.o.d level beast? 
Or will they decide to go even beyond that?” Outside the most important bottom, Quinn was hanging around with Sil, s.h.i.+ro, and Layla. Working with some his Qi ability, Quinn surely could work out the crazy Qi that had been impacting on her human body, enabling Layla’s natural Qi to fuse together with her hurt microscopic cells making it possible for her to begin healing. ‘Are they aiming to utilize this info for their gain? If they send it down the same tubing then perhaps it will probably be too evident. Though mailing it to 1 adjacent to it, possibly once the monster is performed doing damage to the mechs, it would move to look for beasts just as before.’ He checked for just about any indication of others in, scientist or others, but apparently there have been only some beasts. Placing his fretting hand about the easy access policy, it got a few moments just before he was allowed in. Logan eventually left some of his spiders during the hallway, operating as sentries to tell him if someone wished to enter into. The statistics that came with it, weren’t even near his gauntlets. Simultaneously he was no become an expert in swordsman, but maybe at some point it would be useful to coach inside the sword, which would probably be appropriate in these kinds of situations. Right before leaving behind, Sil acquired handled Colonel Longblade, having an element of his ability. He looked over the youngster, and through now most of the people ended up mindful of who he was depending on the outline, therefore, the Colonel didn’t say nearly anything and enabled for doing it to happen. 
Finally it checked like Longblade was done together with his meeting, when he arrived using a formidable group of five gents, each one armed towards the optimum in substantial tier monster equipment. Nathan, was another 6th individual who experienced have them, who got listened in about the conference for their consultant. My Vampire System Lastly the group was out of. Just before causing, Sil acquired handled Colonel Longblade, taking a component of his energy. He checked out the kid, by now plenty of people were actually concious of who he was in line with the explanation, therefore, the Colonel didn’t say a single thing and enabled for doing it to take place. In the mean time, Logan was trying to complete his personal objective that was a.s.agreed upon to him. Standing upright outside of the research laboratory, he could see that they were inside of a speed to obtain the monster moved because of one of many exclusive pipes. ‘I a.s.sume it’s because Environmentally friendly isn’t really much of a fighter. ‘Bucky’ along with the V lady are right here. Can it be which not all are as formidable as him? Or managed they simply make additional just one at the rear of as a defense?’ Longblade wondered, but after having knowledgeable the effectiveness of Leo’s pupil he was happy which he came along. When looking through every piece of information, Logan had obtained a roadmap to everyone the labs where they were jogging related tests around the beasts. But one that experienced trapped probably the most recognition was one which was labeled Humanoid – Popular level monster. The stats that came with it, weren’t even close to his gauntlets. While doing so he was no excel at swordsman, but maybe some day it may well come in handy to exercise within the sword, which would most likely be useful in these kind of situations. 
‘It might appear to be camouflaging who I honestly am was the proper relocate, also it establishes that does not everybody in the Cursed faction is a grouping of poor people.’ “It appears as though there are way more laboratories similar to this an individual with some other beasts, and they only store the details for each monster in a certain clinical. It doesn’t seem like they have been keeping track of this for too long. But there is a very important factor that hobbies me, at one of the data.’ At last it appeared like Longblade was done along with his assembly, as he came out with a formidable staff of five men, each one armed towards the max in large tier monster equipment. Nathan, was a further 6th person who obtained come with them, who got listened in on the getting together with since their associate. ‘Are they attempting to make another Demi-G.o.d level monster? Or would they want to go even beyond that?” He then looked at who was from the Cursed faction, and noticed that the Natural green son and among the V were left out. During his time here, he didn’t need to make any material unturned, but he considered if his time was minimal it may be most effective to go to it is essential first. Sooner or later, he gotten to the region the location where the mentioned monster was intended to be. Working with his spiders they had the ability to squeeze with the modest gap in the bottom. In the meantime, Logan was planning to full his personal intention which had been a.s.signed to him. Status outside the clinical, he could see they were inside of a dash to obtain the beast transferred right down to among the list of special tubes. With the Inspect proficiency, Quinn was anxious that maybe there was clearly some form of curse place on the tool, but he was just even more amazed at what he could see. He got also overlooked for a second, that this one finding the gift item wasn’t him, but ‘Bucky’.
OPCFW_CODE
using Hevadea.GameObjects.Tiles;
using Hevadea.Utils;
using Microsoft.Xna.Framework;

namespace Hevadea.GameObjects.Entities
{
    public partial class Entity
    {
        public Vector2 Position => new Vector2(X, Y);

        public Tile GetTileOnMyPosition()
        {
            return Level.GetTile(GetTilePosition());
        }

        public Coordinates GetTilePosition()
        {
            return new Coordinates((int)(X / Game.Unit), (int)(Y / Game.Unit));
        }

        public Coordinates GetFacingTile()
        {
            var dir = Facing.ToPoint();
            var pos = GetTilePosition();
            return new Coordinates(dir.X + pos.X, dir.Y + pos.Y);
        }
    }
}
STACK_EDU
So, I work as a programmer. Until pretty recently I was working on machine learning, which is really fun and interesting. One thing I like about machine learning is – it’s important (and fun!) to actually spend time with your data manually and understand it and look at individual things. But, ultimately, they did not hire me to do manual work! One week I remember thinking “right, my job is to build systems that accurately classify millions of things, not to look at those things manually.” So the reason programmers sometimes get paid a lot of money, I think, is because we can build systems that leverage computers to do an unreasonable amount of work. If you build Gmail’s spam system, you can remove spam from the inboxes of millions of people! This is kind of magical and amazing and it’s worth all of the bugs and dealing with computers. But it takes a long time! Basically anything interesting that I work on takes, let’s say, 2-6 months. And it’s not too weird to work on projects that take even longer! One of my friends worked on the same thing for more than a year. And at the end he’d built a system for drawing transit maps that’s better than Google’s. This was really cool. So this means you can really only do a few things. And if one of those things doesn’t work out then that means that like a quarter of your work for the year is gone. This is okay, but it means it’s worth being thoughtful. And the more time I spend programming, the more time I see that it’s actually super hard to figure out what would be important to work on. Like, sure, I can make a computer do a billion things (literally! That’s pretty easy!), but which billion things exactly? What will have a lot of impact? What will help my company do better? Once, a little while after I started at my current job, I told my manager “hey, I’m thinking of doing $thing”. 
He said “ok, what if you do $other_thing instead?” So I built the first version of the thing he suggested (a small system for making it easier to keep track of your machine learning experiments), and two years later it’s something that the team still uses and that a bunch of other people have built on top of. It turns out that it was a good idea! When I started programming, I thought that people would tell me what code to write, and then I would write that code, and then that would be all. That is not how it’s been, even though certainly I get guidance along the way. I work for a place that gives its engineers a lot of autonomy. So instead, for me, it’s been more like: - well we have this one long-term goal, or three, or six - also a bunch of minor problems of varying urgency - now it’s up to you to figure out which ones would be good to solve right now - also you have to figure out how to solve them - also the problems might be impossible to solve - and there are all these other external factors - you get to talk to a bunch of people who have thought about these problems for a while to do it though! - here’s 40 hours a week. go. know what your goals are So, how do you decide what to do? I have a coworker Cory Watson who gave this cool talk at Monitorama called Creating a Culture of Observability. He describes what he’s doing as follows on that page: In other words, if our sensors — think about metrics, logs and traces — are good, then we can learn about how effectively our systems are working! My job at Stripe is to make this fucking awesome. It is kind of obvious when working with Cory that he is relentlessly focused on making it easier to know what our software systems are doing. And it helps! The company’s dashboards and metrics have gotten way better as a result. It’s easier to make performance improvements and detect and understand errors. 
My friend Anton, who made that transit maps app, cares SO MUCH about how to represent public transit information and he thinks about it all the time so it’s not that surprising to me that he’s built an awesome way to do it. I think this kind of focus is incredibly helpful – when I don’t have a clear goal, I find it really really hard to get things done or decide what to do. I think of this as kind of the “can I explain my job to someone at a party?” test. When I can’t pass this test (especially if the person at the party is a software engineer) I feel uncomfortable. Obviously you don’t need to always focus on the same thing (jeff dean is like a legend at Google or something and I think he’s done a ton of different things), but having a focus seems really important. coming up with a focus is not that easy At work there are a lot of possible things to think about! And as a single person (not a manager), there’s only so much you can focus on at a time. Some things I see people working on: - Our storage systems are super-reliable and easy to use - It’s easy to tell what your code is doing, in real time - Make the development experience really good and easy - Make the dashboard an awesome place for our users to understand their business So somehow I need to find a thing that is big enough and important enough to focus on (can i explain to my colleagues why i’m doing what i’m doing?), but also small enough that a single person (or small group) can make progress on it. And then it is way easier to write code towards that vision! there’s no one “right thing” I originally called this post “how do you work on the right thing?” I retitled it because I think that that’s a wrong (and kind of dangerous) wording – there is no one right thing to work on. I work with many many excellent people who are working on many many important things. 
Not all things are equally impactful (which is what this post is all about!), but it’s about reliably finding useful things to work on that are within your capabilities, not finding a global optimum. If I only wrote globally optimal blog posts I would literally never publish anything. believe it’s possible One thing about working on long-term or ambitious projects is – you have to believe that you can do the project. If you start a cool year-long project, approximately 50 million things will go wrong along the way. Things you didn’t expect to break will break. And if you give up when you have a bad week or three weeks or somebody doesn’t believe that what you’re doing is right, you will never finish. I think this is a really important thing a mentor / more senior person can do for someone more junior. A lot of the time you can’t tell what’s possible and what’s impossible and what obstacles are fine and what obstacles are insurmountable. But this can be bootstrapped! If someone tells you “don’t worry, it’ll all work out!”, then you can start, and hit the problems, and ask for advice, and keep going, and emerge victorious. And once you have emerged victorious enough times (and failed enough times!), you can start to get a sense for which things will work and which things will not work, and decide where to persevere. People talk a lot about ‘agile’ and MVPs but I don’t think that’s a complete answer here – sometimes you need to build a big thing, and you can write design docs and prototypes, but ultimately you need to decide that damnit, it’s going to work, and commit to spending a long time building it and showing intermediate progress when you can. Also your organization needs to support you in your work – it’s very hard to get anything done if the people around you don’t believe that you can get it done. I’m not in undergrad anymore I loved being a math/CS undergrad. 
My professors would give me a series of challenging assignments which were hard but always within my abilities. I improved gradually over time! It was so fun! I was awesome at it! But it is over. Being employed is more like – I have a series of tasks which range from totally trivial to I-don’t-even-know-where-to-start and I need to figure out how to interrogate people and build up my skills so that I can do the hard things. And I need to decide what “good enough” means for the things I do decide to do, and nobody will do it for me, not really. There’s an interesting comment by Rebecca Frankel that Dan Luu pointed me to, on this post I agree with Steve Yegge’s assertion that there are an enormously important (small) group of people who are just on another level, and ordinary smart hardworking people just aren’t the same. Here’s another way to explain why there should be a quantum jump – perhaps I’ve been using this discussion to build up this idea: it’s the difference between people who are still trying to do well on a test administered by someone else, and the people who have found in themselves the ability to grade their own test, more carefully, with more obsessive perfectionism, than anyone else could possibly impose on them. So somehow working on an important thing and doing it well means you have to decide what your goals are and also build your own internal standards for whether or not you’ve met them. And other people can help you get started with that, but ultimately it’s up to you. some disconnected thoughts that feel useful - Maggie talked about “postmortem-driven development” – look at things that have broken several times! see if you can help them not break again! - It’s normal (and important!!) to do experiments that fail. Maybe the trick is to timebox those experiments and recognize when you’re doing something risky / new. I don’t know! I feel weird admitting that I really struggle with this, but I really struggle with this. 
I do not always have good ideas about what to build. Sometimes I have ideas that I think are good and I do them and they’re great, and sometimes I have ideas and I do them and they’re… really not great. Sometimes I have standards for my work that I cannot figure out how to meet and that’s really frustrating. Sometimes other people have ideas and I think they’re great and help build those ideas and it’s amazing. That’s a really good feeling. So far the best things I’ve worked on have been other people’s ideas that I got excited about. Sometimes other people have ideas and I don’t understand what they’re talking about for months until they build it and I’m like OH THAT IS REALLY COOL WOW WOW WOW. Even reliably recognizing good ideas is hard! - Data-Driven Products Now! is a talk by Dan McKinley about how to think about building consumer-facing web products. - The Secret to Growing Your Engineering Career If You Don’t Want to Manage (thanks to Emil Sit) - The Highest-Leverage Activities Aren’t Always Deep Work Thanks to Emil Sit, Camille Fournier, Kyle Kingsbury, Laura Lindzey, Lindsey Kuper, Stephen Tu, Dan Luu, Maggie Zhou, Sunah Suh, Julia Hansbrough, and others for their comments on this.
OPCFW_CODE
Demonstration on how to replace the self-signed certificate on VMware vCenter. Valid certificates are not only crucial today and going forward; they have been crucial for the last few years as well. Having valid certificates not only ensures that a certain security posture is maintained, it also removes the unsightly certificate warnings that make various products unfriendly to use for the administrators/engineers/architects. I recently made a transition from Nutanix Community Edition (CE) to VMware vSphere in my home lab due to upgrade issues with the most recent release of CE. VMware vSphere 7.x and above resolved an issue where the NIC in an Intel NUC 10 was not detected during installation and the driver needed to be sideloaded before CE could be installed. This is a continuation of my blog series where I focus on security from a virtualization standpoint. Here is a similarly themed blog about how to replace the self-signed certificate in Nutanix Prism Element and Prism Central. Today we will talk about how to replace the certificate on vCenter and how significantly easier it has become to do so. Before I start, I am going to preface this by noting that the process only applies to VMware vCenter 7.0 and above at the time of this writing. If folks are still running vCenter 6.5 or 6.7 this will not work, as the process is completely different. Also, this not only affects Citrix; it affects VMware Horizon and any other solutions that integrate with vCenter. How many of us have in the past, or even today, checked the box on this message to acknowledge and trust the self-signed certificate in an on-prem or cloud-based Citrix Studio? Most of us probably click through it without thinking twice about why the warning appears, or just wave it off as “that is not my problem, it is the vSphere team’s problem”. 
While it may be the vSphere team’s problem, security should be a concern for all IT folks, as many system compromises could easily be prevented with a security-first mentality. In addition, replacing the certificate will remove the warning from vCenter when folks use the vCenter web console. In vCenter 7.0 and above it is very easy to replace the certificate so that the warning never even pops up when establishing the Hosting connection from Studio. First we will need to create a certificate; in my case I will be using a domain certificate authority (CA). A certificate from a well-trusted 3rd party CA can also be configured in this manner. I find it easier to generate the CSR on the vCenter; later we will see some interesting issues that come from generating the CSR elsewhere. Go to vCenter and login as firstname.lastname@example.org (this is the only account that has permissions to change the certificate management) At the top, go to Menu -> Administration On the left pane -> Click Certificate Management Under Actions -> Click Generate Certificate Signing Request (CSR) Fill out the information appropriately -> Click Next Copy or Download the CSR -> Click Finish Open a browser and go to https://domainca.fqdn.com/certsrv replacing with your domain CA FQDN. In my case it is domain1.domain.lab. -> Click Request a Certificate Click Advanced Certificate Request Click Submit a certificate request by using a base-64-encoded CMC or PKCS #10 file, or submit a renewal request by using a base-64-encoded PKCS #7 file Copy and paste the contents of the CSR file generated earlier into the large field -> Select the appropriate certificate template -> Click Submit After submitting, the certificate may be pending if the CA is configured for approval (as it is in my lab). 
Get the proper approval to issue the certificate After approval go back to https://domainca.fqdn.com/certsrv -> Click View the Status of a Pending Certificate Request Click on the Request from earlier -> Click on the Request Select Base64 encoded -> Download the Certificate Save it to a location where it can be accessed, with an appropriate name -> Click Save The domain CA’s root and intermediate certificates are required to be exported as .cer as well. In my case, these can be found on the domain controller under Certificate Manager for the Local Machine -> Trusted Root Certificate Authorities Certificates. Back on vCenter -> Administration -> Certificate Management we need to import the root and intermediate certificates so that the cert is trusted. -> Click Add Browse to the root cert -> Click Add After adding, there are now multiple Trusted Root Certificates For the Machine Cert section, click Action -> Import and Replace Certificate Select Replace with external CA certificate where CSR is generated from vCenter Server (private key embedded), as the CSR was generated on the vCenter -> Click Next On the first field -> Click Browse File and select the certificate that the domain CA issued. On the second field -> Click Browse File and select the domain CA root certificate that was exported. If there are both root and intermediate certificates they may need to be combined in notepad -> Click Next vCenter services will automatically restart, which will take a few minutes. It is common to get this message as services are restarted. When vCenter is back and ready, log back in and go to the Certificate Management section. The Machine cert should have an updated expiration date. Track that date and make sure to repeat the process again before the certificate expires to ensure everything continues to run smoothly for any services that integrate with vCenter. 
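The "combined in notepad" step is plain concatenation of the Base64 (PEM) blocks, intermediate first, then root. A sketch with dummy stand-in files (the real inputs are the .cer files exported earlier; the file names here are made up):

```shell
# Dummy stand-ins so the example is self-contained; in practice these are
# the Base64-encoded .cer files exported from the CA.
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'MIIB...intermediate...' '-----END CERTIFICATE-----' > intermediate-ca.cer
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'MIIB...root...' '-----END CERTIFICATE-----' > root-ca.cer

# Intermediate first, then root, combined into the single file the wizard expects
cat intermediate-ca.cer root-ca.cer > ca-chain.cer

# Sanity check: the combined file should contain two certificate blocks
grep -c 'BEGIN CERTIFICATE' ca-chain.cer
```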
There are also no longer certificate warnings when going to the vSphere web client, and when the certificate is viewed, it is the appropriate certificate. The Hosting section in Studio connects to vCenter without a warning now as well. If you tried to generate the CSR outside of vCenter and went through the process of generating the certificate, you could get this error like I did. There really isn’t a reason why the character was invalid, but this is why I recommend generating the CSR on vCenter. VMware has made it significantly easier to replace the certificate in vSphere 7.x than it was in 6.x. It makes it almost a no-brainer to do this in my opinion. We didn’t need to incur any additional costs as the certificate was generated from a domain CA, but this process would also work if you need to get a signed certificate from a third party CA. If we take an overall approach of focusing on security in each layer of the infrastructure, we significantly improve the security posture of the entire environment and eliminate as many security flaws as possible. We would like to hear from you, so feel free to drop us a note if you have any questions.
OPCFW_CODE
from datetime import datetime

from thunderdome import connection
from thunderdome.tests.base import BaseCassEngTestCase
from thunderdome.models import Vertex, Edge
from thunderdome import columns


# Vertices
class Person(Vertex):
    name = columns.Text()
    age = columns.Integer()


class Course(Vertex):
    name = columns.Text()
    credits = columns.Decimal()


# Edges
class EnrolledIn(Edge):
    date_enrolled = columns.DateTime()


class TaughtBy(Edge):
    overall_mood = columns.Text(default='Grumpy')


class TestVertexTraversals(BaseCassEngTestCase):

    @classmethod
    def setUpClass(cls):
        cls.jon = Person.create(name='Jon', age=143)
        cls.eric = Person.create(name='Eric', age=25)
        cls.blake = Person.create(name='Blake', age=14)

        cls.physics = Course.create(name='Physics 264', credits=1.0)
        cls.beekeeping = Course.create(name='Beekeeping', credits=15.0)
        cls.theoretics = Course.create(name='Theoretical Theoretics', credits=-3.5)

        cls.eric_in_physics = EnrolledIn.create(cls.eric, cls.physics, date_enrolled=datetime.now())
        cls.jon_in_beekeeping = EnrolledIn.create(cls.jon, cls.beekeeping, date_enrolled=datetime.now())
        cls.blake_in_theoretics = EnrolledIn.create(cls.blake, cls.theoretics, date_enrolled=datetime.now())

        cls.blake_beekeeping = TaughtBy.create(cls.beekeeping, cls.blake, overall_mood='Pedantic')
        cls.jon_physics = TaughtBy.create(cls.physics, cls.jon, overall_mood='Creepy')
        cls.eric_theoretics = TaughtBy.create(cls.theoretics, cls.eric, overall_mood='Obtuse')

    def test_inV_works(self):
        """Test that inV traversals work as expected"""
        results = self.jon.inV()
        assert len(results) == 1
        assert self.physics in results

        results = self.physics.inV()
        assert len(results) == 1
        assert self.eric in results

        results = self.eric.inV()
        assert len(results) == 1
        assert self.theoretics in results

        results = self.theoretics.inV()
        assert len(results) == 1
        assert self.blake in results

        results = self.beekeeping.inV()
        assert len(results) == 1
        assert self.jon in results

        results = self.blake.inV()
        assert len(results) == 1
        assert self.beekeeping in results

    def test_inE_traversals(self):
        """Test that inE traversals work as expected"""
        results = self.jon.inE()
        assert len(results) == 1
        assert self.jon_physics in results

    def test_outV_traversals(self):
        """Test that outV traversals work as expected"""
        results = self.eric.outV()
        assert len(results) == 1
        assert self.physics in results

    def test_outE_traversals(self):
        """Test that outE traversals work as expected"""
        results = self.blake.outE()
        assert len(results) == 1
        assert self.blake_in_theoretics in results

    def test_bothE_traversals(self):
        """Test that bothE traversals work"""
        results = self.jon.bothE()
        assert len(results) == 2
        assert self.jon_physics in results
        assert self.jon_in_beekeeping in results

    def test_bothV_traversals(self):
        """Test that bothV traversals work"""
        results = self.blake.bothV()
        assert len(results) == 2
        assert self.beekeeping in results
STACK_EDU
[COLLECTION] Scripts&Explanation [06/07] This is a collection of echo scripts with optimized values for our janice (i9070&i9070P). I will add an explanation for every tweak to make things clear. deepest_state: this allows you to set a sleep level. As far as I know, states 3-4-5 are available; the higher the state, the deeper your phone will sleep and so the less battery drain (obviously while the screen is off). A tip: I think that state 4 is the best; it doesn't cause trouble, unlike level 5 which may cause problems, and it is stable when using ONDEMAND as governor. echo "4" > /d/cpuidle/deepest_state; dirty_expire_centisecs: In hundredths of a second, how old data must be to be written out next time a thread wakes to perform periodic writeback. As far as I know, permitted values are 100 to 5000, but if I'm wrong, don't hesitate to correct me! echo "5000" > /proc/sys/vm/dirty_expire_centisecs; dirty_writeback_centisecs: In hundredths of a second, how often threads should wake up to write data back out to disk. echo "5000" > /proc/sys/vm/dirty_writeback_centisecs; laptop_mode: is a special page writeback strategy intended to optimize battery life by minimizing memory activity. My tip is to turn it off. echo "0" > /proc/sys/vm/laptop_mode; max_ac_c: is a CoCore kernel feature that allows you to edit the charging values. Safe values are from 100 to 700; a higher value may damage your battery, while a lower value (400 for example) will preserve battery life. echo "400" > /sys/kernel/abb-charger/max_ac_c; mali_l2_max_reads: Sets the max reads that the mali l2 can do. echo "48" > /sys/module/mali/parameters/mali_l2_max_reads; mali_debug_level: By disabling it, you can gain some performance improvements. echo "0" > /sys/module/mali/parameters/mali_debug_level; fsync: Fsync is an android capability that always writes data out to storage, preventing data loss. If you have a stable system, disable it and you'll have more battery life and a faster system. 
But be careful: you may lose all of your data if your phone shuts down suddenly. echo "0" > /sys/kernel/fsync/mode; // To enable (default) echo "1" > /sys/kernel/fsync/mode; // To disable echo "2" > /sys/kernel/fsync/mode; // Dynamic (writes data only when screen is off) vfs_cache_pressure: The file system cache is really more important than the block cache covered above by the dirty ratio and dirty background ratio, so we really want the kernel to use up much more of the RAM for file system cache; this will increase the performance of the system without sacrificing performance at the application level. A lower value is better (100 is the default), so for example: echo "10" > /proc/sys/vm/vfs_cache_pressure; page-cluster: This controls the number of pages which are written to swap in a single attempt. It is a logarithmic value - setting it to zero means "1 page", setting it to 1 means "2 pages", setting it to 2 means "4 pages", etc. echo "3" > /proc/sys/vm/page-cluster; The default value of the sd card cache is 128, but various tests have established that if you increase this value to 2048, you'll have improvements in sd card read and write operations. echo "2048" > /sys/devices/virtual/bdi/179:0/read_ahead_kb; echo "2048" > /sys/devices/virtual/bdi/7:0/read_ahead_kb; echo "2048" > /sys/devices/virtual/bdi/7:1/read_ahead_kb; echo "2048" > /sys/devices/virtual/bdi/7:2/read_ahead_kb; echo "2048" > /sys/devices/virtual/bdi/7:3/read_ahead_kb; echo "2048" > /sys/devices/virtual/bdi/7:4/read_ahead_kb; echo "2048" > /sys/devices/virtual/bdi/7:5/read_ahead_kb; echo "2048" > /sys/devices/virtual/bdi/7:6/read_ahead_kb; echo "2048" > /sys/devices/virtual/bdi/7:7/read_ahead_kb; echo "2048" > /sys/devices/virtual/bdi/default/read_ahead_kb; This should block charging when the phone reaches 100% and recharge the phone when it is at 50% (Thanks to @bobfrantic). 
THIS IS ONLY WORKING WITH COCORE KERNEL echo on > /sys/kernel/abb-fg/fg_cyc echo dischar=100 > /sys/kernel/abb-fg/fg_cyc echo rechar=50 > /sys/kernel/abb-fg/fg_cyc I will add more tweaks, stay tuned guys
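Rather than typing each echo by hand, one common approach is to collect the tweaks into a single boot script. The sketch below assumes a ROM with init.d support; the paths are the ones from this post and may differ on other kernels, so each write is guarded and a missing node is simply skipped instead of causing an error:

```shell
#!/system/bin/sh
# Hypothetical init.d script collecting the tweaks from this post.
# write VALUE NODE: applies VALUE only if NODE exists and is writable.
write() {
    if [ -w "$2" ]; then
        echo "$1" > "$2"
    fi
}

write 4    /d/cpuidle/deepest_state
write 5000 /proc/sys/vm/dirty_expire_centisecs
write 5000 /proc/sys/vm/dirty_writeback_centisecs
write 0    /proc/sys/vm/laptop_mode
write 400  /sys/kernel/abb-charger/max_ac_c
write 48   /sys/module/mali/parameters/mali_l2_max_reads
write 0    /sys/module/mali/parameters/mali_debug_level
write 10   /proc/sys/vm/vfs_cache_pressure
write 3    /proc/sys/vm/page-cluster

# Raise read-ahead on every block-device entry listed in the post.
for bdi in 179:0 7:0 7:1 7:2 7:3 7:4 7:5 7:6 7:7 default; do
    write 2048 "/sys/devices/virtual/bdi/$bdi/read_ahead_kb"
done
```

On most init.d-enabled ROMs the script would go in /system/etc/init.d with execute permission; because of the guard, it is also safe to run once from a root terminal to test which nodes your kernel actually exposes.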
Understanding and Applying Basic Statistical Methods Using R

Welcome to data analysis and statistical methods using the powerful R programming language. In this comprehensive guide, we will delve into the significance of basic statistical methods and how R can be your trusted companion in unlocking the secrets hidden in data.

The Significance of Basic Statistical Methods
Understanding and applying basic statistical methods is the foundation of robust data analysis. Whether you are a seasoned data scientist or just stepping into the world of statistics, grasping the essentials is crucial for extracting meaningful insights from data.

Understanding the Basics of Statistics
Statistics is not just about numbers; it's a language that helps us make informed decisions. This section will walk you through the fundamental statistics concepts, laying the groundwork for your statistical journey with R.

Importance of Statistical Methods in Data Analysis
Explore why statistical methods are indispensable in data analysis. From hypothesis testing to regression analysis, statistical methods empower you to confidently make data-driven decisions.

Overview of the R Programming Language
Before diving into statistical methods, let's acquaint ourselves with R. Discover why R is a preferred choice for statisticians and data scientists, and how its user-friendly interface makes it accessible to both beginners and experts.

Benefits of Using R for Statistical Analysis
Uncover the advantages of using R for statistical analysis. From its extensive library of statistical functions to its vibrant user community, R stands out as a versatile tool for data exploration.

Getting Started with R
Embark on your journey with R by learning the basics. This section will guide you through the installation process, setting up your workspace, and executing your first lines of code.

Exploring Descriptive Statistics in R
Delve into descriptive statistics, a crucial aspect of understanding your data. Learn how to calculate measures like the mean, median, and standard deviation using R.

Conducting Inferential Statistics Using R
Move beyond descriptive statistics and enter the realm of inferential statistics. This section will teach you how to draw meaningful conclusions about a population based on a sample using R.

Visualizing Data with R
A picture is worth a thousand words. Master the art of data visualization in R, turning complex datasets into clear and insightful graphs for better interpretation.

Key Statistical Functions in R
Navigate through essential statistical functions in R, from basic t-tests to advanced analyses. Gain the confidence to apply these functions to diverse datasets.

Advanced Statistical Techniques in R
Elevate your statistical prowess by exploring advanced techniques in R. Uncover the secrets behind regression analysis, ANOVA, and other sophisticated statistical methods.

Common Challenges and Solutions
Every data analyst encounters challenges. Learn about common issues in statistical analysis with R and discover effective solutions to overcome them.

Tips for Effective Data Interpretation
Interpreting data is an art. Acquire practical tips on how to unravel the story behind your data and present your findings compellingly.

Real-world Applications of Statistical Methods in R
Explore the real-world applications of statistical methods in R across various industries. From healthcare to finance, R is making waves in diverse fields.

Success Stories with R in Statistical Analysis
Be inspired by success stories of individuals and organizations harnessing the power of R for groundbreaking statistical analyses.

Troubleshooting and Debugging in R
Navigate the sometimes tricky terrain of troubleshooting and debugging in R. Learn how to identify and resolve common issues to ensure a smooth analytical workflow.

R Packages for Statistical Analysis
Discover the plethora of R packages designed for specific statistical analyses. Find out how these packages can enhance your capabilities and streamline your work.

Conclusion: Understanding and Applying Basic Statistical Methods Using R
In conclusion, mastering basic statistical methods using R opens doors to a world of data exploration and analysis. Whether you're a novice or an experienced analyst, the power of R can elevate your statistical prowess and bring your data to life.

Download: Using R for Introductory Statistics
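As a first taste of the descriptive-statistics measures mentioned above, here is what a minimal base-R session might look like (an illustrative sketch; the sample vector is invented for this example, not taken from the book):

```r
# A made-up sample: six measurements of some quantity.
x <- c(4.1, 5.6, 3.8, 7.2, 5.0, 6.3)

mean(x)    # arithmetic mean, approx. 5.33
median(x)  # middle value of the sorted data, 5.3
sd(x)      # sample standard deviation
summary(x) # min, quartiles, median, and mean in one call
```

Running each line in the R console prints the result directly, which makes the console a convenient calculator while you learn the functions.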