You've seen zombies on TV and the Internet before, but I bet you also saw people hitting them with baseball bats or using shotguns. I'm here to tell you, the only real way to kill zombies is with websites. Crazy, you wonder? Yes. Crazy true. Like my grandmother used to say, "The only way to stop a horde of people shuffling through life, moaning about inconsequential problems, and trying to infect you with their apathy is a well-placed cane to the kneecaps. Well… that and ideas." Websites are a bundle of code, images, and interactivity. What better way is there to communicate ideas?

HTML: A Zombified Definition

Without a skeleton even the best human-resistance fighter would be a puddle of color and wasted life, and so would your web page. Hypertext Markup Language (HTML) is that skeleton. It provides the structure for your web page. It builds the relationships between the parts of your page and allows you to identify and categorize the pieces of content on your page. HTML provides this structure through tags. Dog tags identify a particular soldier in the resistance. Toe tags designate a dead body or, more likely, a future zombie. HTML tags identify a type or format of content such as a paragraph, heading, bolded text, list, and so on.

Note: There are two words used in this book and throughout web development for a unit of HTML: "tag" and "element". "Tag" specifically means the opening and/or closing piece of an HTML element, whereas an element includes the opening tag, any attributes set on the opening tag, the content between the opening and closing tags, and the closing tag itself. While these terms are not interchangeable, being so closely related, they are often used interchangeably (or seemingly so).
For instance, when speaking about what an HTML paragraph does, the difference between the tag and element is unimportant and could even be called splitting hairs; however, when looking at them in a document they designate different things and shouldn't be conflated.

Let's look deeper at the four pieces of an HTML element:
- Opening Tag
- Attributes
- Content
- Closing Tag

The opening tag uses angle brackets (the "less than" and "greater than" signs) to designate the beginning and end of the tag. A paragraph opening tag looks like: <p>

Attributes appear between the name of the opening tag and its closing angle bracket. They are most often used to provide additional information, identification, or options (and to beat the snot out of zombies). The class attribute helps define a "class," or group, of elements that have something in common. (They need not be the same type of element.) An opening p tag with a class attribute looks like: <p class="learning">

The content is whatever appears between the opening and closing tags. Generally this is text and/or other elements. Some elements have limitations on what elements can be inside them, depending on the element, its job, and whether it's been tainted by the zombie contagion.

Closing tags look like opening tags, except they have a forward slash (/) in front of their names. For example, the closing paragraph tag looks like: </p>

Some tags, notably img (and a few more), are called void tags and do not have closing tags. This is usually because the content of the tag resides in the opening tag itself or, as with img, in attributes.

Paragraph tag example:

<p class="learning">This paragraph tastes like braaains.</p>

and would look like (the background color has been added to more clearly show how the browser would render the HTML):

This paragraph tastes like braaains.

Night of the Living Tip: HTML5 was a major release for the HTML language. It was initially released in October 2014. It brought a wide variety of new functionality and smoother syntax to HTML.
HTML4, the previous major standard, was released in April of 1998 and had only minor revisions during those 16 years. The current release is the HTML Living Standard.

Sometimes you might want to emphasize some text within a paragraph. You can use the <em> tag to do this. It will show as italics in a browser.

<p>Don't forget: Zombies are <em>not</em> cuddly-wuddly</p>

Don't forget: Zombies are not cuddly-wuddly

The <i> tag renders the same way as the <em> tag. (Strictly speaking, HTML5 defines <em> as stress emphasis and <i> as text in an alternate voice or mood, but both display as italics.)

<p>Nor are they good dance partners. Rigor mortis <i>really</i> affects your flexibility.</p>

Nor are they good dance partners. Rigor mortis really affects your flexibility.

Beyond italics, you may also want to bold some content to make it stand out strongly. You can use the <strong> tag for this.

<p>"Look out! There's <strong>a zombie on your head!</strong>"</p>

"Look out! There's a zombie on your head!"

The <b> tag also works for this.

<p>"Oh, never mind, man. That was just your haircut…<b>Don't hurt me!</b>"</p>

"Oh, never mind, man. That was just your haircut…Don't hurt me!"

Night of the Living Tip: A comment in coding is a statement that's not processed by the interpreter/compiler (i.e. the program that processes the code and follows its instructions; in the case of HTML this is the browser). Comments are often used to explain the code or otherwise label it. HTML has comments just like other languages. An HTML comment starts with <!-- and ends with -->. Anything between and including those two "tags" will not be shown in the browser. Throughout this book's Try it Out sections, you'll see comments used to give you experiments to try and other directions to help you play with and understand the code.

Try it Out: Take a moment to play with these tags to make sure you understand how they work. I've provided two options throughout the book. Both the CodePen and HTML demos are the same; they just give you two ways to access and play with the code.
CodePen allows you to edit/play with the tags in your browser while the HTML demo is a file you will need to open in a text editor to edit and then load into your browser. Download an HTML file to play with it on your own (once you’ve opened the link in your browser use “Save as” to save the HTML file to your computer): https://undead.institute/files/html/001-paragraph-italics-bold-oh-my.html (Please note: This HTML page includes some additional code to make it more compatible across browsers. For now, just worry about the paragraphs, italics, and bold. I’ll explain the other tags later.) Want to keep reading? Pick up A Beginner’s Guide to Learning HTML (and Smacking Zombies Upside the Web Development)
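To tie the chapter together, here's a small snippet combining a comment, a paragraph with a class attribute, emphasis, and bold. (The class name and wording here are just examples, not from the book's demo files.)

```html
<!-- This comment will not show up in the browser -->
<p class="survival-tips">
  Zombies are <em>not</em> cuddly-wuddly, and
  <strong>never</strong> let one chew on your keyboard.
</p>
```

Paste it into the body of a demo HTML file and reload the browser: the comment disappears, the emphasized word renders in italics, and the bold phrase stands out.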
|The Three Little Pigs is actually a story about the importance of government regulations and shoddy contractors|

How do you determine what kind of traffic loading the bridge will see? How do you decide what stresses your concrete or steel will experience? How do you handle cracking in the concrete? How do you know the section you're designing is strong enough? These may seem like questions that should be answered by just having an education in structural engineering, but they're actually variable depending on all sorts of assumptions. You asked for concrete that can handle a compressive load of 4,000 pounds per square inch (that's called "4ksi concrete" and is pretty standard stuff, if a bit low-end), but what will actually show up? You need to protect the reinforcing steel in the concrete from the elements so it doesn't corrode. You decide to embed it far enough to be protected, but how far is that? How long does it need to last, and how fast do invasive chemicals penetrate the concrete?

|Knowing and following 1,600 pages of this is my job. Now you know why engineering has such a high attrition rate.|

When we design a bridge, we have to "stamp the plans," which means literally stamping plan-sets and calculations with a "PE" stamp. That's a professional engineering stamp that you get when you're licensed. Because each state has a different code, you have to be licensed separately in every state you want to work in. Most states, if you are licensed in another one, just require some paperwork. Some states, in particular those with a lot of earthquakes (like California or Washington), will require a lot more than that. By stamping the plans, you're taking personal responsibility for the design of that bridge. Because the code is law: should something happen to the bridge, you don't necessarily have to prove that your design would work in the real world, but rather that it conforms to the local code. It's pretty rare that something designed to code falls down, but it happens.
In extreme events (mostly earthquakes) and in some other, rare, instances like the Tacoma Narrows bridge. The code is created after a lot of research and discussion. A full discussion of code creation is a massive topic, which I will not be covering. Instead I'll say that there are right ways and wrong ways to put together a code. Bridge code isn't necessarily done the wrong way, but it's not the right way either. Code for building design is, to me, a lot more elegant and efficient and produces a more consistent and usable code. This of course changes from country to country: I'm only knowledgeable about US code. There are various considerations when designing a code about how to lay it out, what to include, what to leave to the designer, etc... One of the important descriptions of a building code is if it's "prescriptive" or "performance based". A totally prescriptive code will tell you exactly what to do in all cases. "For spans between 45 and 50 feet, use a prestressed, concrete girder of exactly these dimensions with exactly this reinforcing pattern and exactly this..." A performance based code will tell you what is expected of the structure: "After a magnitude 7.5 earthquake, the girder can have cracks no larger than 1 inch". The problem with prescriptive codes is that they can't actually cover everything, and so are both really expensive to build to (since they have to design for worst case uses which are probably rare) and leave little guidance or direction for uncovered cases. Or just result in the under-design of situations the code-creators didn't consider. Their advantages are that they're hard to screw-up a design with and are easy to use. If boring. 
|If this bridge had been designed to a rigid, prescriptive code, it likely would've fallen down.|

The problem with performance based code is that it can be complicated to design with such little guidance, some aspects of design may not be considered when not spelled out by the code, and they open the designer up for more legal issues. The advantages are that they're much more flexible and allow for more accurate and efficient designs, as well as giving a lot more guidance when working with unusual cases. (When prescriptive code tells you that there must be rebar every foot of concrete no matter what, that's what you put. Performance based code would tell you that cracks can't develop that are larger than "x". Thus, when looking at an odd scenario, you have guidance from the performance based code about what you're trying to accomplish, and just a bar spacing from the prescriptive code which may or may not work.) No code is entirely one or the other once you get past assembling Lego models of Star Wars vehicles: they're all on a spectrum. AASHTO is more prescriptive than some of the more sophisticated codes (I understand the Japanese have a very advanced, performance based code) but it has performance based elements as well. And it's the law every bridge in the US is built to, or a modified version of it anyway.
cannot load such file -- rack/handler/puma

My setup and the error

I get an error when I start my Sinatra application with rackup and puma. My config.ru file looks like this:

#\ -s puma
require './controller/main.rb'
run Sinatra::Application

So when I now use rackup I get this error:

/home/username/.rvm/gems/ruby-1.9.3-p392/gems/rack-1.5.2/lib/rack/handler.rb:76:in `require': cannot load such file -- rack/handler/puma (LoadError)

I use ruby 1.9.3p392 (2013-02-22 revision 39386) [i686-linux]

What I have tried so far

My first thought was that I forgot to install puma, or puma is broken in some way. So I tried:

puma -v
puma version 2.0.1

And I start it directly with ruby:

ruby controller/main.rb
Puma 2.0.1 starting...
* Min threads: 0, max threads: 16
* Environment: development
* Listening on tcp://localhost:4567

And I found this puma issue, but I didn't find a real solution.

Finally, my questions

Why is this happening? How can I fix this?

Are you using Bundler?

Yes, Bundler version 1.3.4

There are two things I'd try first. 1) I'd sandbox the gems so they don't get mixed up with those installed by Rubygems. Remove current bundler stuff with rm -rf .bundle Gemfile.lock bin vendor and run bundle install --binstubs --path vendor. Now all the exes are in the local bin dir and all the gems in the local vendor dir. 2) Run using bundle exec, but since the binstubs option was used you can instead run bin/rackup config.ru. See if that improves things / brings back a different error.

Okay, this works. Can you explain why, and add it as an answer so I can accept it? Funny enough, now I can also use the global rackup.

Sandbox the gems so they don't get mixed up with those installed by Rubygems. Remove current bundler stuff with rm -rf .bundle Gemfile.lock bin vendor and then run bundle install --binstubs --path vendor. This installs all gems into vendor/RUBY-ENGINE/VERSION/ and all executables into the bin dir.
These are separate from the ones installed via the gem command, which will be system wide.

Run using bundle exec, but since the --binstubs option was used you can instead run bin/rackup config.ru. By using bundle exec or one of the executables from bin/, you're telling Bundler to only use the gems that it installed. If you installed Puma with Bundler, then it will install the Puma handler with the Rack that Bundler installed. But you'll probably have another version of Rack installed by Rubygems (via gem install rack -r) that doesn't have the handler. To get the right one, sandbox your project's gems and always run stuff from the bin/ directory. If you need the ruby command, then use bundle exec ruby… and Bundler will load the correct gems for the project.

I do this with every project now and only install gems via gem install… if I need them system wide. It also makes sure you don't miss any gems out of the Gemfile because you had them already available on your system - no nasty surprises on deployment!

Try to be sure you have require "rack/handler/puma" in your code; this is the file that Rack::Handler::Puma.run needs. Play with this: http://gabebw.com/blog/2015/08/10/advanced-rack
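The load-path mechanics behind that LoadError can be illustrated with a self-contained sketch (the file and constant names are made up for the demo; Rack resolves a handler the same way, by require-ing "rack/handler/<name>" off whatever load path is active):

```ruby
require "tmpdir"

Dir.mktmpdir do |dir|
  # Requiring a handler file that isn't on the load path fails,
  # just like `rack/handler/puma` did in the question.
  begin
    require "rack/handler/fake_puma_demo"
  rescue LoadError
    puts "LoadError raised"
  end

  # Put a file on the load path, and the same kind of require succeeds.
  # This is effectively what `bundle exec` / binstubs arrange: the
  # Bundler-installed gems (including Puma's Rack handler) end up on
  # the load path of the process that runs rackup.
  File.write(File.join(dir, "fake_handler_demo.rb"), "DEMO_LOADED = true")
  $LOAD_PATH.unshift(dir)
  require "fake_handler_demo"
  puts DEMO_LOADED  # => true
end
```

The takeaway: the error isn't about Puma being missing from the system, but about which Rack (and which load path) the rackup process ends up using.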
Indy SMTP hangs or freezes up on Connect method

I have a client attempting to connect to an SMTP server. I have the OnStatus event linked to the SMTP client and see the Resolving / Connecting / Connected states. But sometimes there is a hangup and the application freezes when trying to connect, even though I see the Connected state being raised from OnStatus. What could the issue be? I ruled out DNS resolution and set both ConnectTimeout and ReadTimeout on the SMTP client, as shown here:

smtp.OnStatus := SMTPStatus;
smtp.ConnectTimeout := 10000;
smtp.ReadTimeout := 10000;
smtp.Connect; // SOMETIMES MY LOG DOES NOT GET HERE
Log('AfterConnect');
if smtp.Connected then
begin
  smtp.Send(Mess);
  smtp.Disconnect;
end;

With an attachment of about 600KB it seems to be getting stuck on encoding the attachment part and never completes; currently the encoding type is the default one.

10/3/2012 10:21:43 AM Status: Resolving hostname XXXXXXXXXX.com.
10/3/2012 10:21:43 AM Status: Connecting to <IP_ADDRESS>.
10/3/2012 10:21:44 AM Status: Connected.
10/3/2012 10:21:45 AM Status: Encoding text
10/3/2012 10:21:45 AM Status: Encoding attachment

If the OnStatus event is reporting hsConnected then you are physically connected to the server. If Connect() is not exiting afterwards, then it is likely blocked waiting for data from the server that is not arriving, such as the server's initial greeting. The ReadTimeout should be handling that possibility, though (unless you have an OnConnected event handler assigned that is becoming deadlocked, that is). Use a packet sniffer, such as Wireshark, to make sure that you are actually connecting to the server you are expecting and that it is sending the right kind of greeting data that TIdSMTP is expecting. Yes, whenever you have network communication problems, you should always sniff packets to rule out comm errors before then tracking down coding errors.

That has nothing to do with Connect() freezing up. That is a completely separate issue.
Indy encodes emails dynamically as they are being transmitted. So either Indy has become deadlocked while reading the attachment data, Indy has a bug that crashed/deadlocked the encoding process, or the SMTP server has stopped receiving data on its end, so Indy's socket gets blocked when the outgoing buffer fills up.

So what would be the workaround for this case? Would it make sense to have a timer inside a thread that frees the SMTP client, causing a forceful termination?

No, you need to track down what is actually freezing up during the sending/encoding and fix it (or, if it is a bug inside of Indy, let me know so I can fix it). You can compile Indy for debugging and then step through its source code in the debugger.

I can probably put logging in the Indy code. Do you know what areas of the code could be the potential source of such a freeze? We use D2007 with Indy 10.

This is too broad an area to pinpoint spots of interest for you. That is why I suggest you actually step through the code. There is a lot of code involved when encoding an email. Do you have the same freezing if you use one of the TIdMessage.SaveTo...() methods instead of the TIdSMTP.Send() method? Or is this strictly an SMTP-only freeze?

Thanks for the replies. I was under the impression that the IdSmtp control creates a thread, but after debugging I don't see a thread being created. Did I miss something, or is it the fact that there is currently no additional thread being created when using TIdSMTP?

No, TIdSMTP does not create an additional thread. Like most other Indy clients, it runs in the context of the thread that is using it. The only Indy clients that create new threads for their internal work are TIdTelnet, TIdCmdTCPClient, and TIdIPMCastClient.
Many SMTP servers are configured to delay the initial greeting message by 30 or so seconds to try to deter spam. Also, most servers can be configured to reject connections from the same IP address if it has tried to connect multiple times within a specified time period (typically 1 minute). It could be this that is causing your issues.

If that were the case, TIdSMTP would not be reaching the encoding stage, as shown in Dmitry's log. TIdSMTP has to validate the server's greeting before it can do anything else.

@Remy I added this answer a long time ago, before the question was edited to include the log and more information. It was my best guess based on the available information.

Sorry, I didn't notice the timestamp.
The new processing algorithms that have been developed by the NASA IUE Project allow several significant improvements in the processed data. The new approach exploits the presence of fixed pattern noise (pixel-to-pixel sensitivity variations in the cameras) as a reliable fiducial to register the raw science image with the raw Intensity Transfer Function (ITF) image. Proper registration of IUE images is crucial to accurate photometric correction because the variability of the geometrical distortions introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the ITF. While reseau marks etched on the faceplates of the cameras were intended to be used to rectify geometrically the science images, they cannot be detected at the low exposure levels usually found in the background of IUE images. Therefore, the IUESIPS method of processing IUE images uses predicted reseau positions to align the science images with the ITF images. Unfortunately, these mean positions are poorly known and the application of a mis-registered ITF (by more than about 0.2 pixel) manifests itself as systematic noise in the photometrically corrected image, and ultimately in the spectrum. To achieve proper alignment of the ITF images with each science image for the Final Archive reprocessing, the fixed pattern inherent in IUE images is used as a fiducial. Small patches of the science image are cross-correlated against corresponding areas on the appropriate ITF image to determine the spatial displacement between these two images. The displacement of each pixel in the science image from its corresponding pixel in the ITF can thus be determined to sub-pixel accuracy. Such an approach has several advantages: (1) a large number of fiducials can be found anywhere on the image, (2) fixed pattern can be detected even at the lowest exposure levels, and (3) fiducials are available near the edge of the image, where distortion is greatest. 
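The registration idea, recovering the science-to-ITF displacement from the shared fixed pattern, can be illustrated with a toy sketch. (This is not the Project's code; the image, its size, and the shift are invented, and only an integer displacement is recovered here, whereas the actual pipeline interpolates the correlation peak to sub-pixel accuracy.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "fixed pattern": pixel-to-pixel sensitivity variations that are
# present in both the raw science image and the raw ITF image.
pattern = rng.normal(size=(64, 64))
itf = pattern
science = np.roll(pattern, shift=(3, -2), axis=(0, 1))  # displaced copy

# Cross-correlate the two images via FFT and locate the peak;
# the peak position gives the displacement between them.
xc = np.fft.ifft2(np.fft.fft2(science) * np.conj(np.fft.fft2(itf))).real
dy, dx = np.unravel_index(np.argmax(xc), xc.shape)

# Map the circularly wrapped indices back to signed shifts.
n = pattern.shape[0]
dy = dy - n if dy > n // 2 else dy
dx = dx - n if dx > n // 2 else dx
print(dy, dx)  # recovers the (3, -2) displacement
```

In the real pipeline this correlation is done on many small patches across the image, so the displacement field can vary spatially, which is what makes fiducials near the distorted edges so valuable.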
In the IUESIPS processing of IUE data, the ITF images have been resampled to geometrically correct space, significantly smoothing these calibration data. In the new processing system, the ITF images are retained in raw space, increasing the accuracy of the pixel-to-pixel photometric correction. Only one resampling of the data is performed in the new processing system, minimizing the smoothing inherent in such an operation. The linearized pixel values are resampled into a geometrically rectified and rotated image, such that the spectral orders are horizontal in the image and the dispersion function of the spectral data within an order is linearized. The resampling algorithm used is a modified Shepard method which preserves not only the flux (to 1-3% in the image) but also the spectral line shapes. The low-dispersion spectral data are extracted by a weighted slit extraction method developed by Kinney et al. (1991). The advantages of this method over the IUESIPS boxcar extraction are: (1) the signal-to-noise ratio (S/N) of the spectrum is usually improved while flux is conserved, (2) most of the cosmic rays are automatically removed, and (3) the output includes an error estimate for each point in the flux spectrum. The high-dispersion spectral data are extracted using an IUESIPS style boxcar extraction method. As a result the S/N improvements may not be as good as those seen in low-dispersion data. An entirely new data product for the IUE Final Archive is a geometrically rectified and rotated high-dispersion image, with horizontal spectral orders. This new data product will allow future investigators to perform customized extractions and background determinations on the high-dispersion data. One of the most significant problems with the analysis of high-dispersion IUE data has been the proper determination of the background in the region where the echelle orders are most closely spaced and begin to overlap.
The new processing system includes a background removal algorithm that determines the background level of each high-dispersion image by fitting, in succession, one-dimensional Chebyshev polynomials, first in the spatial and then the wavelength direction. The extracted high-dispersion spectral data are available order-by-order with wavelengths uniformly sampled within an order. In addition to the new algorithms for processing the IUE data for the Final Archive, all absolute flux calibrations have been rederived. The new calibrations use white dwarf models to determine the relative shapes of the instrumental sensitivity functions, while previous UV satellite and rocket observations of UMa and other standard stars are used to set the overall flux scale. The IUE Final Archive extracted spectral data are also corrected for sensitivity degradation of the detectors over time and temperature, a calibration not previously available with IUESIPS processing. These new processing algorithms for the creation of the Final Archive allow a significant improvement in the signal-to-noise ratio of the processed data, resulting largely from a more accurate photometric correction of the fluxes and weighted slit extraction, and greater spectral resolution due to a more accurate resampling of the data. Improvement in the signal-to-noise ratio of the extracted low-dispersion spectral data has been shown to range from 10-50% for most images, with factors of 2-4 improvement in cases of high-background and underexposed data (Nichols-Bohlin 1990).
Search results related to "discord id lookup ip":

Discord IP Finder | Find Someone's IP Address from Discord (Nov 21, 2023): Choose a shortener to generate a new shortened tracking link. Send the link to a Discord user. When someone clicks the link, you'll see the logged client IP address on Grabify. You can then use an IP lookup tool like WhatIsMyIPAddress to get the IP's approximate geolocation.

How to Find Your Discord ID, and What It's Used for - Business Insider: 1. Click the gear icon in the bottom-left corner (next to your name), then select Advanced from the left sidebar. 2. At the top of the page that appears, toggle on Developer Mode.

How To Find Discord ID - PC Guide (Feb 17, 2023): Do you need to find a Discord ID? Discord IDs are hidden by default but possible to find by following a few simple steps. Discord IDs are unique numbers that are assigned to every Discord user account, message, and server on Discord.

Where can I find my User/Server/Message ID? - Discord: For a user ID, right-click their username. For a server ID, right-click the server name above the text channel list. For a message ID, right-click anywhere within the text message. Important note: to grab a channel ID, shift+click the Copy ID button for a message.

User Lookup | DiscordLookup: Get detailed information about Discord users with creation date, profile picture, banner and badges. You can use the Snowflake ID to search for users and guilds or just show the creation date. To find out an ID from a guild/user/message, make sure you have enabled Discord Developer Mode: on desktop, go to User Settings => Advanced and enable Developer Mode.

GitHub - discordlookup/discordlookup: Get more out of Discord with Discord Lookup! Snowflake Decoder, Guild List with Stats, Invite Info and more.
adding generated files to assets directory?

What is the correct way to add generated files to the assets directory, and have HIM pick up these changes and copy them over into the appropriate build-time assets location (e.g. inside the Resources folder of an app bundle, or the disk image for emscripten)? There are several use-cases, but the one I'm struggling with now is precompiling Metal shaders into metallib files, which I'd like to store in the bundle's assets. I can precompile them using a custom command like:

add_custom_command(
    OUTPUT ${shader_lib}
    DEPENDS ${shader_source}
    COMMAND xcrun -sdk macosx metal -std=osx-metal2.0 -O3 "${shader_source}" -o "${shader_lib}"
    WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}"
    COMMENT "Compiling Metal shader \"${shader}\" to MLTB library \"${shader}lib\"."
    VERBATIM
)

but even if the OUTPUT ${shader_lib} is placed into the assets directory that HIM is informed about, HIM doesn't know that new files have been placed there upon building. Thanks

Hello,

Actually, there are 3 distinct moments when assets can be deployed:
- During CMake configure
- During the build
- During the install (make install)

Let's see how it is done in HelloImGui:

Main assets logic

The main part of the assets logic is inside hello_imgui_cmake/assets/. The main file, hello_imgui_assets.cmake, will include platform dependent variations of hello_imgui_bundle_assets_from_folder():

# Platform dependent definition of hello_imgui_bundle_assets_from_folder
if (EMSCRIPTEN)
    include(${CMAKE_CURRENT_LIST_DIR}/him_assets_emscripten.cmake)
elseif(IOS OR (MACOSX AND (NOT HELLOIMGUI_MACOS_NO_BUNDLE)))
    include(${CMAKE_CURRENT_LIST_DIR}/him_assets_apple_bundle.cmake)
elseif(ANDROID)
    include(${CMAKE_CURRENT_LIST_DIR}/him_assets_android.cmake)
else()
    include(${CMAKE_CURRENT_LIST_DIR}/him_assets_desktop.cmake)
endif()

The emscripten variation will do this:

# Bundle assets / emscripten version
function(hello_imgui_bundle_assets_from_folder app_name assets_folder)
    if (IS_DIRECTORY ${assets_folder})
        target_link_options(
            ${app_name}
            PRIVATE
            "SHELL:--preload-file ${assets_folder}@/"
        )
    else()
        message(WARNING "hello_imgui_bundle_assets_from_folder: ignoring missing folder ${assets_folder}")
    endif()
    if (HELLOIMGUI_ADD_APP_WITH_INSTALL)
        hello_imgui_get_real_output_directory(${app_name} real_output_directory)
        install(
            FILES
            ${real_output_directory}/${app_name}.html
            ${real_output_directory}/${app_name}.data
            ${real_output_directory}/${app_name}.js
            ${real_output_directory}/${app_name}.wasm
            DESTINATION ${CMAKE_INSTALL_PREFIX}
        )
    endif()
endfunction()

target_link_options populates the disk image for emscripten, but since this is provided by emscripten, there is no way to change when it is done. For emscripten, this is done during CMake configure. The install part will only happen during the install step; I guess it is not important here.

Copying custom resources

hello_imgui_cmake/emscripten/hello_imgui_emscripten.cmake will copy js and css resources from assets/app_settings/emscripten, but those are outside of the disk image.

As a conclusion, you should replace add_custom_command by something that is done at configure time, and before the call to hello_imgui_add_app, so that assets can be populated correctly. Two solutions are possible, I guess:
- run a script that precompiles the shaders and places them in the assets directory before running CMake (and run cmake whenever you want to get newly built shaders)
- call execute_process instead of add_custom_command, because execute_process is called during configure

PS (unrelated, and absolutely not urgent): I saw that you once used a PImpl. If you have time someday, I would appreciate your inputs if you could study a PImpl generator I wrote (I had this idea when developing the automatic python bindings generator for ImGui Bundle).
It is available here: https://pthom.github.io/litgen/litgen_book/20_10_00_pimpl_my_class.html And you can run it online here: https://mybinder.org/v2/gh/pthom/litgen/main?urlpath=lab/tree/litgen-book/20_15_00_pimpl_online.ipynb

I've read the documentation of your pimpl generator, though haven't run it. Here are some thoughts:

Minor tidbit: I typically don't pollute my global namespace with the MyClassPImpl class declaration. This declaration can be put into the private block within MyClass.

I wouldn't want to introduce an extra stage in my development process to make this work, but I assume your python script can be packaged up in some cmake commands to re-generate these things on rebuild if the MyClassPImpl is modified.

I think even with those wrappers, I'm not sure I would have a need to use this, since my use of PImpl has typically required relatively little boilerplate compared to how you have structured things. I treat the MyClassPImpl as pure data, with all members public (I actually use a struct) and with no methods (beyond perhaps a constructor and destructor). That means any public methods of MyClass just need to use m_pimpl->member_variable in place of a typical m_member_variable. Any required private functions can still be left out of the MyClass header, and can be implemented as free (static) functions in myclass.cpp which take a pointer to the MyClassPImpl as the first parameter.

If you write MyClass from the beginning knowing it will use PImpl, then the only boilerplate is forwarding function calls, but I find this to be a relatively minor inconvenience to write once. Now, if you don't write MyClass with PImpl in mind from the beginning, then it can be quite some work to refactor it once you decide you want PImpl. A one-time script that would take a non-pimpled header and implementation, and convert it to pimpl (with all private members being wrapped in a pimpl struct, and private methods moved to static free functions) would be something I may use.
But that is probably harder to get right.

FYI: I first learned about PImpl when working on OpenEXR 20 years ago, and this is how that codebase did things. You can take a look at its codebase if you are interested.

Thanks for the detailed response.

> In conclusion, you should replace add_custom_command with something that runs at configure time, before the call to hello_imgui_add_app, so that the assets can be populated correctly. Two solutions are possible, I guess: run a script that precompiles the shaders and places them in the assets directory before running CMake, or call execute_process instead of add_custom_command, because execute_process runs during configure.

I would prefer to have everything happen via CMake so I don't need to run another script. However, I'd like to be able to edit the Metal shader source files and just perform a standard rebuild, without remembering to manually clean and reconfigure. Do you know if there is a way to accomplish this (if shader sources are modified, or the generated metallib files change, to inform CMake to rerun the add_custom_command or execute_process before HIM grabs things from the assets directory)?

I had a somewhat simpler use case before (you can check SamplinSafari) that I managed to get to work: there was one dependency, which I grab via cpm.cmake, that contained a data file I wanted to use as an asset. Since the data is from the dependency, I didn't want to copy it into my source asset directory, so I instead copied all asset files into an assets directory within the build tree and then passed ASSETS_LOCATION ${CMAKE_CURRENT_BINARY_DIR}/assets to hello_imgui_add_app. The problem was that copying using the file command only happened once, and if I changed other things in the source asset directory (like a shader), it was not re-copied to inform HIM before it does its asset processing.
I was able to resolve this by using configure_file with COPYONLY instead of the file command:

```cmake
# copy the asset directory to the build tree
file(
    GLOB_RECURSE MY_RESOURCE_FILES
    RELATIVE ${CMAKE_CURRENT_SOURCE_DIR}
    # CONFIGURE_DEPENDS
    ${CMAKE_CURRENT_SOURCE_DIR}/assets/**.*
)
foreach(MY_RESOURCE_FILE ${MY_RESOURCE_FILES})
    configure_file(
        ${CMAKE_CURRENT_SOURCE_DIR}/${MY_RESOURCE_FILE}
        ${CMAKE_CURRENT_BINARY_DIR}/${MY_RESOURCE_FILE}
        COPYONLY
    )
endforeach()
```

since this introduces a dependency, so CMake knew to re-copy my assets directory before running HIM's asset processing. The situation I have now is only slightly different: some of the files are generated by compiling Metal shaders, but I'd still like CMake to know to automatically rerun that part of configure (before HIM processes assets) if anything changes. Suggestions?

Thanks for your inputs on PImpl; I will take them into account, and actually I do prefer your way of using a data-only impl.

I just pushed a commit that updates assets at build time for Linux and Windows. On macOS and emscripten, IMHO everything was working fine and still works fine: you just need to add a line similar to this, so that your app detects the newly compiled asset file:

```cmake
set_property(SOURCE your_app_main_file.cpp APPEND PROPERTY OBJECT_DEPENDS your_compiled_shader_inside_assets)
```

Demo (50" video)

Unrelated: I just added dedicated pages for the Hello ImGui documentation, which was a bit raw up until now.
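For completeness, the configure-time variant discussed earlier in the thread might look roughly like this. This is a sketch, not HelloImGui API: "compile_my_shader" and the paths stand in for whatever tool and layout you actually use.

```cmake
# Hypothetical sketch: compile a shader at *configure* time, so the output
# already sits in the assets folder when hello_imgui_add_app() bundles it.
execute_process(
    COMMAND compile_my_shader
            ${CMAKE_CURRENT_SOURCE_DIR}/shaders/my_shader.metal
            -o ${CMAKE_CURRENT_SOURCE_DIR}/assets/my_shader.metallib
    RESULT_VARIABLE shader_result
)
if(NOT shader_result EQUAL 0)
    message(FATAL_ERROR "shader compilation failed")
endif()
```

Because execute_process runs every time CMake configures, rerunning cmake is what refreshes the compiled shader.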
THE BROTHERS GRIMM A film review by David N. Butterworth Copyright 2005 David N. Butterworth ** (out of ****) It's hard to believe that seven years have passed since former "Monty Python" animator-turned-feature director Terry Gilliam last made a movie, his 1998 adaptation of Hunter S. Thompson's "Fear and Loathing in Las Vegas" (with Johnny Depp). Actually, seven years have passed since Gilliam last *completed* a movie, one that subsequently landed a distributor and found its way into theaters. If it *feels* like he's made one in the interim that's because of Keith Fulton and Louis Pepe's fascinating 2002 film "Lost in La Mancha," a dead-on documentary that deliciously details Gilliam's passionate--but ultimately failed--attempt to make "The Man Who Killed Don Quixote" (also with Johnny Depp), a disastrous project from start to un-finish in which everything that could go wrong did. Time has not been kind to Gilliam. Watching his latest (completed) picture "The Brothers Grimm," a grim, grimy, and altogether pointless exercise when you come right down to it, you're not exactly convinced he's lost it as a filmmaker but you can't help but feel there's something missing here. A *lot* missing, in fact. Heart, for one thing. And humor, which it valiantly attempts on occasion yet invariably misses. And chemistry, of which there's none. As for elegance or magical enchantment... well, there's none of those either. No, "The Brothers Grimm" is a gloomy affair, a mostly unfunny attempt to dramatize the lives of the great sibling storytellers, Wilhelm and Jacob, or at least stage the spooky circumstances that inspired them to spin such classic fairytales as "Little Red Riding Hood," "Hansel and Gretel," and "Cinderella" among others. Positioned as con men in 19th Century Germany, Will and Jake effect complicated charades to earn their keep as witch hunters and demon slayers, much like what Scooby-Doo's creepy janitor did with all those holograms, wires, and wind machines. 
Matt Damon and Heath Ledger play the Grimms with a period-costumed uneasiness (Heath, in his serious specs, notebook in hand, fares better than Matt, but not much). Jonathan Pryce is embarrassingly French--and therefore Pryce-less?--as Gallic Governor Delatombe, and Monica Bellucci ("The Passion of the Christ") acts playfully disinterested as the Mirror Queen. In fact, the only person who seems to be having any fun at all is Peter Stormare ("Constantine"'s Satan), who relishes his role of Cavaldi, an irrepressible Italian combatant employed as Delatombe's manic henchman. "Grimm"'s tone is scattershot at best, with its special effects likewise all over the map. Given the film's budget, some $80 million, one would have expected more than what's on display here (the wolfman is particularly bad). But murky marks the spot, from murky peasant villages (a la "Jabberwocky" and "Monty Python and the Holy Grail") to even murkier dialogue to (mostly) murkier intents. All told, in the case of Gilliam's seven-year itch, it would appear to have taken his extraordinary talent to have turned this "Brothers'" grim. -- David N. Butterworth Got beef? Visit "La Movie Boeuf" online at http://members.dca.net/dnb
The BIOS Settings (or Setup) Utility is a configuration program that sets the options in the computer's CMOS memory. As explained earlier (in the "What the BIOS Does" section), the BIOS uses the information in the CMOS memory to instruct the chipset how to work with system hardware before the operating system loads its own set of device drivers. To open the BIOS Settings Utility, start or restart your computer and immediately press the key that starts the utility program. On most systems, it's either the Delete key or the F1 key. You can probably see an instruction on the screen that tells you which key to use on either the splash screen or the text that appears when you turn on the computer. If it goes past before you can press the right key, use Ctrl+Alt+Delete to restart the computer and immediately press the key. If neither F1 nor Delete opens the Setup program, and the startup screen doesn't tell you which key to use, look in the computer manual or the motherboard manual. Each BIOS maker organizes the settings and options in the Settings Utility differently, but they all include similar items. The manual supplied with your computer or motherboard usually includes a detailed explanation of every item in the Settings Utility, but unfortunately, many of those manuals seem to be bad translations of originals in other languages. Almost every Settings Utility includes on-screen instructions for moving around the screen and for changing screens. Look for these instructions at the top, bottom, or right side of the screen. In most cases, you can use the left and right arrow keys to choose a screen, and the up and down arrows to move within a screen. The Enter key usually opens a list of options for the current item, and the F10 key closes the program. Don't change any BIOS setting unless you understand exactly what you are doing. Some of the more obscure settings might include options that could cause your computer to completely stop working. 
The rest of this section explains the setup options that most users might want to change. For explanations of items not included here, consult your computer or motherboard manual. The names of many setup options are slightly different in utilities created by different companies. Don't be alarmed if the menu items on your screen aren't exactly the same as the ones listed here. The date and time settings control the calendar and clock in the CMOS memory. You can use the BIOS Settings program to change these settings, but it's easier to use the Date and Time Properties window in Windows. If you're connected to the Internet, use the Internet Time tab to synchronize your clock and calendar with an online time server that is tied to an international time standard. For each disk drive installed in the computer, the BIOS must identify several technical details, including the capacity, the number of heads, and the number of sectors. All of these values are printed on a label attached to every drive, but most drives manufactured in the last ten years automatically report the necessary details to the BIOS. Follow the instructions in your BIOS Utility to run the auto-detect routine. If you're installing an older hard drive that doesn't supply auto-detect information to the BIOS, copy the values on the drive's label before you mount it in a drive bay. After you reassemble the computer, open the BIOS Setup program and enter those values into the section that applies to that drive, one at a time. The CMOS treats CD and DVD drives and other storage devices connected to the motherboard's IDE or SATA sockets the same way it handles hard drives. When you choose the auto-detect function for a drive channel, the Setup program should identify the type of drive on that channel. If your computer contains a floppy disk drive, the BIOS must instruct the CMOS which type of diskette that drive uses. 
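The geometry values on an older drive's label relate to its capacity by simple arithmetic: cylinders times heads times sectors per track times bytes per sector (512 bytes per sector is the classic default). A quick illustration:

```python
# Cylinders x heads x sectors-per-track x bytes-per-sector = raw capacity.
def chs_capacity_bytes(cylinders, heads, sectors_per_track, bytes_per_sector=512):
    return cylinders * heads * sectors_per_track * bytes_per_sector

# The largest geometry the combined BIOS/ATA limits could describe,
# the source of the classic 504 MiB barrier on old machines:
print(chs_capacity_bytes(1024, 16, 63))  # 528482304 bytes = 504 MiB
```

Auto-detection spares you from ever typing these numbers in, but the formula explains what the BIOS does with them.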
The Drive A and Drive B settings include several obsolete types, along with the common 1.44MB, 3.5-inch and 1.2MB, 5.25-inch varieties. The Boot Sequence is the order in which the BIOS examines disk drives and other storage devices during startup, when it's looking for the boot loader program. The usual sequence is:

- Floppy disk (if there's a diskette drive in this computer)
- CD-ROM or DVD drive
- Hard disk drive

This sequence allows the computer to load an operating system or a startup program from a floppy disk or a CD when a disk is in one of those drives. However, if those drives are empty, the BIOS goes on to use the boot loader on the hard disk. You must change this sequence if you want to use a USB device, such as an external disk drive or a portable flash drive, to start the computer. Hard Disk Priority specifies the order in which the BIOS searches for the boot loader when more than one hard disk drive is installed in your computer. The drive with the highest priority should be the one that contains the operating system software. This is normally the drive configured as the Primary Master or the Channel 1 Master. The NumLock key at the top of the keypad on the right side of your desktop keyboard (or a special key on a laptop keyboard) controls the functions of the numeric keypad. When NumLock is off, pressing each key enters the instruction printed on the bottom half of the key (up and down, left and right, Home, End, and so on); when NumLock is on, the NumLock LED indicator lights and each key sends the number printed on the top half of the key to the computer. The NumLock Status option in the BIOS Settings program instructs the CMOS to turn NumLock on or off whenever the computer starts. Some BIOS utilities offer one or more power management options that allow the computer to turn itself on automatically in response to an external signal from a network connection or a modem, or when a user presses a key on the keyboard.
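The boot-order search amounts to a simple first-match loop. This is an illustration only, not real firmware code; the device names are hypothetical:

```python
# Walk the configured boot sequence and return the first device that
# actually holds bootable media; otherwise fall through to the next one.
def find_boot_device(boot_sequence, has_bootable_media):
    for device in boot_sequence:
        if has_bootable_media.get(device, False):
            return device
    return None  # nothing bootable; a real BIOS would display an error

order = ["floppy", "cdrom", "hard_disk"]
media = {"floppy": False, "cdrom": False, "hard_disk": True}
print(find_boot_device(order, media))  # hard_disk
```

With empty floppy and CD drives, the loop falls through to the hard disk, exactly as described above.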
Consult your manual for specific details about the options available on your own system. Unless you have a reason to want the computer to start up when it receives a telephone call (through the modem) or an attempt at a network connection, or in response to some other input, it's generally best to disable all of these automatic startup options. Normally, the BIOS automatically detects the latency values of the memory modules installed in your computer, but some BIOS Utilities allow a user to change those settings. Don't mess with these settings unless you are instructed to do so by a qualified technician, such as a technical support representative from the company that produced the computer, the motherboard, or the memory modules. Many motherboards have built-in temperature sensors that constantly monitor the amount of heat at the surface of the CPU and other locations inside the computer. If the temperature exceeds a preset level, the temperature monitor produces an alarm or shuts down the system. Another sensor reports the speed of each fan inside the system. Many BIOS Setup Utilities include one or more options that display the current temperature and fan speeds, and allow a user to change the trigger value for an alarm. This can be useful when Windows produces fatal Stop errors (Blue Screen errors) caused by an overheated CPU. If there's no other obvious cause for the system to overheat, confirm that the fans are all operating properly and blow out any accumulated dust. Another set of sensors measures the voltages produced by the computer's power supply. The BIOS Utility often includes a display that includes the actual value of each power supply output. If the BIOS settings become hopelessly muddled, the computer might not start at all, or if it does, it might not recognize one or more important components. 
If a well-meaning friend or relative tries to adjust the BIOS settings without knowing what he or she is doing (you would never make a mess of the BIOS settings), or if the CMOS settings become corrupted because of a power surge or some other disaster, the easiest way to restore the system to a usable condition is to load the default settings. The default might not set every option exactly the way you want it, but it loads a configuration that allows the computer to start. After you have undone the damage to the BIOS settings, you can set the correct date and time and make the other changes necessary to restore your own preferred configuration. If your computer's BIOS settings become corrupt, it saves time and reduces confusion if you have a copy of the settings that you can use to restore the system. Either copy each item in every screen of the BIOS Settings Utility with pen and paper, or take a picture of each screen, like the one shown in Figure 8.1. Figure 8.1: Photos of BIOS Utility screens are helpful when you want to return to your CMOS settings. Don't save the only copies of the digital photos on the same computer; you might not be able to open them when you need to restore the system. Print them out and keep them with the manuals and other papers related to your computer. On some computers, you can print a copy of the current BIOS screen by pressing the Print Screen key twice. It doesn't always work, but it's worth a try.
A Little Background

I'm usually a stickler for best practices, but automated testing is something that has eluded me in my professional career for a long time. I typically work on legacy, line-of-business applications, so automated testing was never a priority. I've tried and failed to introduce various development teams to automated testing. Legacy applications are inherently difficult to test, and most everything I've read about automated testing points to trivial examples, like testing add(1, 2) and making sure the output is 3. As developers, we sometimes tend to want to throw our latest new-found tech toy at every problem. After recently converting this blog from a WordPress site to a static HTML site using Hugo, of course I've decided to try this elsewhere. Here's what I learned.

Benefits of Static

There are a lot of examples (1, 2, 3) showing the difference between Static Site Generators (SSG) and a CMS like WordPress. As you can see from the last post, this blog hasn't been updated in a while. For all intents and purposes, it's a static site. So why not take this opportunity to test out static site generators? For this, I'm using Hugo. There's been a lot of buzz around these, but I haven't had a chance to test them out. A dead blog is a perfect place for this.

In The Series

Part 1 Part 2 Part 3 Part 4

Define The Problem

In the past, I've done some freelance web development and web design for different clients. One question I've always had to ask myself is: How will the user be updating this website? That question is usually preempted by a question to the client: Do you have any HTML experience? I can count (on one hand) the number of times that I've heard a yes to this question.

Now, it's no secret that I can be a moron sometimes, but I'd like to put it on record that it was all me and not WordPress 2.7 that had the issue.
So I'm sitting here last night minding my own business, and Chris Coyier sends out a tweet talking about how it took him 10 minutes to upgrade. It's been about a week since the release came out, so I figure, why not? I already had the WordPress Automatic Upgrade plugin ready to go, so it should be a breeze. This plugin makes upgrading WordPress ridiculously simple. It handles file backups, database backups, deactivating and reactivating all plugins, etc. So I go through the process, and I'm not totally disappointed. There were the normal problems we have with all upgrades, and some new ones:

New Domain

It has finally occurred to me that I should have gotten my own domain name a long time ago. Really, I don't know what I was waiting for, but it was about time. Since my focus is Web 2.0, WebDevelopment2.com was an obvious choice. I've already written about moving WordPress to a different domain, so moving to this domain was a walk in the park. I loaded up phpMyAdmin and exported my database.

Here's a quick tip for today: Interlink Your Posts: aLinks Plugin

After reading this post, I have come to the conclusion that web developers can learn a lot from it. I cannot overstress the importance of numbers seven (7) and eight (8): Everyone suggests researching web hosting companies, but for your first year, just use a web host that can get the job done. I use Dreamhost, and it's fine except for the 20,000+ visit days ... If you're serious about your site, get your own domain name that somewhat relates to your topic (obviously cleverdude.

Now, I'm going to be deliberately vague because I don't want to give this blogger any traffic. This is what happened. I developed a new interest lately. As a result, I was looking for a blog that I could use as a reference. All my searches returned this one blog. The name of the blog was specific to the actual topic.
I went on the blog and to my dismay, I saw post after post which looked like emails from a mailing list.
If you must use dynamically-created query strings or commands in spite of the risk, properly quote arguments and escape any special characters within those arguments. The most conservative approach is to escape or filter all characters that do not pass an extremely strict whitelist (such as everything that is not alphanumeric or white space).

Printed versions -- I have made this book available for purchase in printed versions from the print-on-demand publisher lulu.com. This is for convenience only, for those who would like to have a bound printout in a nice form. (Please do not feel obliged to buy the printed version; I do not make any money from it!)

A TCP/IP port used by cache hosts to transmit data to and from the cache clients. The port number used for the cache port can be different on each cache host. These settings are maintained in the cluster configuration settings.

A field defined in a query that displays the result of an expression rather than displaying stored data. The value is recalculated each time a value in the expression changes.

A type of pull subscription for which detailed information about the subscription and the Subscriber is not stored.

Our experts will gladly share their knowledge and help you with programming homework. Keep up with the world's latest programming trends.

Programming

At first it was a little inconvenient when I sent him money, but Mr. Sarfraj is really a magnificent person, who helped me out in the successful completion of my project.

In Java file handling assignment problems, input and output of the data are stored in a file. Generally, in this Java assignment, the student has to use the file for reading and writing the data. Sometimes this problem can be complicated, and sometimes easy.

-- A zip archive containing source code for many of the end-of-chapter exercises.
These have been extracted from the web pages that contain the solutions, as a convenience. They are not included in the website download. See the README file. Size: 322 Kilobytes.

Learn the distinction between declaring a variable, class, or function--and defining it--and why it matters when you have trouble compiling your code.

I got a semester project, whose percentage is 25%, and without scoring well in this project, I could not pass this subject. My professor allotted a very unique project for us. I tried a lot online but could not find anything; while searching I found lovelycoding.org.

Another way that functional languages can simulate state is by passing around a data structure that represents the current state as a parameter to function calls.

A method for identifying dependencies in observations taken sequentially in time, that also supports multiplicative seasonality.

A registration model that removes all certificate subscriber participation from the management policy. For that workflow, a user designated as the originator will initiate the request, and an enrollment agent will execute the request.
Hello fellas, was glad when I stumbled upon your forum, as it was just what I needed to get some help. I was looking to treat myself to an awesome new gaming computer.

1) FPS > nearly all. I am tired of having low FPS in games that I like, or "lagging" of games. I want the utmost performance.

2) Extendability/Adjustability. If there is a choice between something that will become better later (i.e. by having a new version of socket or w/e) and something that won't, I believe the first is better.

3) Cooling. I don't want my PC to melt through my floor and fall down to my neighbours below, so I want some efficient cooling, but (if possible) not so loud as to make me unable to hear the game's sounds.

Money: I have a vast budget, but if something provides a minuscule upgrade for a huge cash increase, I'd rather skip it.

Here is what I am currently looking at, but I'm pretty sure knowledgeable people can help me "modify" it a bit to become better.

- 3x 120mm case fans
- case fan controller - do I need one? If yes, what about NZXT Sentry 2?
- power supply - Antec 1000W TruePower Quattro
- processor - i7 980X, overclocked to 4GHz per core
- water cooling - basic water cooling device
- Motherboard - Asus Rampage III Extreme
- GFX - dual CrossFire ATI HD 5970
- RAM - 12GB PC15000 1866MHz DDR3

Is there anything you would say I should add/remove/change before I start looking around e-tailers for a quote? I will not be building it myself; will probably use Arbico or similar.

EDIT: Unfortunately this config at Arbico is way, way overpriced.
I believe I will be looking at the following configuration from MESH:

- Intel® Core™ i7 960 Quad Core Processor (3.20GHz, 8MB Cache) - LGA 1366
- Microsoft® Windows® 7 Home Premium
- Thermaltake Element 'V' Gaming Case
- 1200W Cougar Desktop Power Supply
- Akasa Freedom Tower heat pipe quiet cooling
- Asus P6X58D-E-USB3 - Intel Core™ i7 & i7 Extreme Edition - LGA1366 Socket (ATX)
- G.Skill 12GB DDR3 1600MHz Memory (6x 2GB kit)
- 128GB MLC SSD SATA Solid State Drive
- 2TB (2x 1000GB) Serial ATA 2 Hard Drive with 32MB Buffer
- Blu-Ray Combo Optical Drive (Blu-ray ROM, DVD/CD RW)
- 2x 2GB ATI Radeon HD5970 - CrossFireX Configuration
- Multi-Format Memory Card Reader (52-in-1 Internal)
- 7.1 High Definition onboard sound card - for 8 Channel Cinema sound
- Supports up to 8 USB 2.0 ports (4x mid-board, 4x back panel) + 2x USB 3.0 ports - P6X58D-E
- 2x 1394a port(s) (1 at mid-board; 1 at back panel) - P6X58D-E
- Gigabit LAN, featuring AI NET2 - ASUS P6X58D-E
- ASUS PCE-N13 Internal 802.11n WiFi Adapter - 300Mbps

So if you have any thoughts, refer them to this config rather than the first one.
Hello, today I propose a quiz to test your knowledge of the acronyms of web languages. Some will seem easier than others, of course, but do you really know all the acronyms of the web languages? For all questions, a list of answers is proposed; only one answer is correct. For each question, feel free to note your answer on a piece of paper in order to count your final score after the correction. You are ready, so let's go.

I - API means:
- Abbreviation Program Internet
- Application Programming Interface
- Accessible Page Internet

II - HTML means:
- HyperTheme Markup Language
- HyperText Markup Language
- HyperTheme Model Language

III - CSS means:
- Cascading Style Software
- Cascading Style Sheets
- Cascading Structured Style

IV - DOM means:
- Document Object Model
- Document Object Markup
- Document Optimization Markup

V - IDE means:
- Integrated Development Experience
- Integrated Document Explorer
- Integrated Development Environment

VI - HTTP means:
- HyperText Transfer Page
- HyperText Transfer Protocol
- HyperText Technology Power

VII - ARIA means:
- Accessible Rich Internet Applications
- Asynchronous Rich Internet Applications
- Array Rich Internet Applications

VIII - AJAX means:

IX - JS means:
- Justify String
- Java String

X - JSON means:

XI - AMP means:
- Applications Markup Pages
- Attribute Markup Pages
- Accelerated Mobile Pages

XII - REGEX means:
- Regular Expression
- Regrown Experience
- Regenerative Expression

XIII - SQL means:
- Switch Query Language
- Simple Query Language
- Structured Query Language

XIV - CDN means:
- Classical Document Number
- Classical Delivery Network
- Content Delivery Network

XV - SEO means:
- String Expression Object
- Search Engine Optimization
- Suffix Expression Object

XVI - PHP means:
- Hypertext Page Preparator
- Hypertext Preprocessor
- Page Hypertext Preparator

XVII - UX means:
- User Experience
- Universal Explications
- Universal Experience
- User Explications

I hope you have written
down your answers, as it is time for the correction.

API means Application Programming Interface (2). It allows applications to communicate with each other and exchange services or data.

HTML means HyperText Markup Language (2). It is the code used to structure a web page and its content.

CSS means Cascading Style Sheets (2). It is the language we use to style an HTML document.

DOM means Document Object Model (1). It is a programming interface standardized by the W3C, which allows scripts to examine and modify the content of the web browser.

IDE means Integrated Development Environment (3). It is a software application that provides comprehensive facilities to computer programmers for software development.

HTTP means HyperText Transfer Protocol (2). It is a client-server communication protocol developed for the World Wide Web.

ARIA means Accessible Rich Internet Applications (1). ARIA complements HTML so that interactive elements and widgets can be used by assistive tools when standard functionality does not allow it.

AMP means Accelerated Mobile Pages (3). It is an open source technology developed by the AMP Open Source Project and supported by Google. It allows, as its name suggests, mobile pages to load faster.

REGEX means Regular Expression (1). It is a character string which describes, according to a precise syntax, a set of possible character strings. Regular expressions are also simply called regexes (a portmanteau of the English "regular expression").

SQL means Structured Query Language (3). It is a standardized programming language used to operate relational databases.

CDN means Content Delivery Network (3). It is a geographically distributed network of proxy servers and their data centers. The goal is to provide high availability and performance by distributing the service spatially relative to end users.

SEO means Search Engine Optimization (2).
It is the set of techniques that aim to improve the positioning of a page, a site, or a web application in the results page of a search engine.

PHP means Hypertext Preprocessor (2). It is a recursive acronym. PHP is a free programming language, mainly used to produce dynamic web pages via an HTTP server, but it can also work as any locally interpreted language. PHP is an object-oriented imperative language.

UX means User Experience (1). This refers to the quality of the user's experience in any interaction situation.

Feel free to share your score in the comments or suggest other web acronyms you think we should know. 👍

Top comments (13)

You haven't included the correct answer for PHP. PHP is a recursive acronym for PHP Hypertext Preprocessor.

This question is a trap, since the letters are not in the right order.

What do you mean? The letters are in the correct order.

Where does the first letter come from?

I wanted to say that compared to the other propositions of the quiz, this one was not obvious, since the answer does not start with a P.

The P comes from PHP... as I said, it's a recursive acronym. Other examples include:

For PHP, historically, this recursive acronym was the abbreviation of Personal Home Page; in 2008, the recursive acronym became the official meaning of PHP. So it is a recursive acronym, but for me it is a special case.

Yes, it was originally Personal Home Page. How is it a special case? It's no different from the other recursive acronyms.

It later became a recursive acronym, but it was not initially recursive, unlike many others. But I have now specified in the article that PHP is a recursive acronym. Thanks.

I was sure PHP was "Piled Hot Poop", but it seems I was wrong all along! 🤣

i know them all

btw you haven't included php ?

The question XVI is about PHP.
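If you'd rather check your score with a few lines of code, here is a small sketch that collects the corrected expansions from the article into a lookup table (the score function and its shape are my own illustration):

```python
# The corrected expansions from the quiz, as a simple lookup table.
EXPANSIONS = {
    "API": "Application Programming Interface",
    "HTML": "HyperText Markup Language",
    "CSS": "Cascading Style Sheets",
    "DOM": "Document Object Model",
    "IDE": "Integrated Development Environment",
    "HTTP": "HyperText Transfer Protocol",
    "ARIA": "Accessible Rich Internet Applications",
    "AMP": "Accelerated Mobile Pages",
    "REGEX": "Regular Expression",
    "SQL": "Structured Query Language",
    "CDN": "Content Delivery Network",
    "SEO": "Search Engine Optimization",
    "PHP": "Hypertext Preprocessor",
    "UX": "User Experience",
}

def score(answers):
    """Count how many proposed expansions match the corrected ones."""
    return sum(1 for k, v in answers.items() if EXPANSIONS.get(k) == v)

print(score({"CSS": "Cascading Style Sheets",
             "SQL": "Simple Query Language"}))  # 1
```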
As a newcomer to crypto, you may be excited to make transactions using blockchains like Ethereum. But you may be wondering, how can I check the transactions I make? In this blog post, we will guide you through the process of checking Ethereum transactions, and give you some helpful tools to monitor them. Introduction to Ethereum Transactions Ethereum is a decentralised blockchain platform that enables users to send and receive digital assets. Each transaction on the Ethereum network is recorded on the blockchain, making it transparent and publicly accessible. These transactions are verified and added to the blockchain through a consensus process. Validation is done by users of the network, making the network highly decentralised. How do Ethereum Transactions Work? Before we dive into the steps of checking an Ethereum transaction, let’s briefly understand how Ethereum transactions work. When you send Ether from one address to another, you initiate a transaction. This transaction is then broadcast to the Ethereum network, where it is verified by the network’s validators. Once the transaction is confirmed, it is added to a block and permanently recorded on the blockchain. Read more: What Is Ethereum 2.0 Steps to Track an Ethereum Transaction Now, let’s explore the steps you can follow to track an Ethereum transaction: Step 1: Obtain the Transaction Hash To begin tracking your Ethereum transaction, you will need to obtain the transaction hash. A transaction hash is a unique identifier assigned to each transaction on the Ethereum network. You can usually find this hash in the crypto wallet or exchange platform you used to make the transaction. Look for a history or activity section to find the hash. Step 2: Visit an Ethereum Blockchain Explorer Once you have the transaction hash, visit an Ethereum blockchain explorer. Blockchain explorers are online tools that allow you to search and view information about transactions, blocks, and addresses on the Ethereum blockchain.
Some popular Ethereum blockchain explorers include Etherscan and Etherchain. Choose any of these explorers and proceed to the next step. Step 3: Enter the Transaction Hash On the blockchain explorer website, you will find a search bar or a specific section to enter the hash. Paste the hash you obtained in Step 1 into the search bar and start the search. The explorer will then retrieve and display the transaction details. Step 4: Review the Transaction Details After entering the hash, you will be able to see the transaction details. This page will provide information such as the sender and recipient addresses, the transaction amount, the gas fee paid, and the current status of the transaction. You can use this information to verify if the transaction was successful. Tools for Ethereum Transaction Monitoring Apart from blockchain explorers, there are also other tools available for monitoring Ethereum transactions. These tools provide additional features and real-time updates on your transactions. Some popular options include: - Metamask: Metamask is a popular Ethereum wallet and browser extension that allows you to manage your Ether and interact with decentralized applications. It also provides transaction history and notifications for your transactions. - MyEtherWallet (MEW): MyEtherWallet is a web-based wallet that provides a user-friendly interface for managing your Ethereum transactions. It offers transaction tracking features and the ability to customize gas fees for faster transactions. - Etherscan Mobile App: Etherscan has a mobile app available for both iOS and Android devices. The app enables you to check your Ethereum transactions on the go, receive push notifications for transaction updates, and explore the blockchain. Read more: What Is Ethereum Sharding How Long Does ETH Transaction Take? The time it takes for ETH transactions to be confirmed can vary depending on various factors. 
These factors include network congestion, the ETH gas price, and the complexity of the transaction. In general, Ethereum transactions can take anywhere from a few seconds to several minutes to be confirmed. The Ethereum network typically handles up to about 30 transactions per second. However, during periods of high network activity, it may take longer for your transaction to be processed. Checking an Ethereum transaction involves using a hash and a blockchain explorer to review transaction details. Blockchain explorers like Etherscan provide transparency and enable users to track their transactions on the Ethereum blockchain. Additionally, tools like Metamask, MyEtherWallet, and the Etherscan Mobile App offer convenient ways to manage your Ethereum transactions. Read more: Ethereum Price Predictions
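The steps above need no special tooling: a transaction hash has a fixed shape, and the explorer lookup is just a URL. A minimal Python sketch (the sample hash is made up, and the URL pattern assumes Etherscan's usual /tx/ page layout):

```python
import re

def is_valid_tx_hash(tx_hash: str) -> bool:
    """An Ethereum transaction hash is '0x' followed by 64 hex characters."""
    return re.fullmatch(r"0x[0-9a-fA-F]{64}", tx_hash) is not None

def etherscan_tx_url(tx_hash: str) -> str:
    """Build the explorer page URL to paste into a browser (Steps 2 and 3)."""
    return f"https://etherscan.io/tx/{tx_hash}"

sample = "0x" + "ab" * 32  # a made-up hash, for illustration only
print(is_valid_tx_hash(sample))   # True
print(is_valid_tx_hash("0x123"))  # False: too short
print(etherscan_tx_url(sample))
```

Checking the shape locally first saves a round trip to the explorer when a hash was copied incompletely.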
how to show SE along with dots in a dot plot in Stata

I want to make a dot plot showing the mean and SE in Stata, but I am not able to get the SEs into the graph. Does anyone have example code to do that? I have a continuous value on the y-axis and 6 variable types on my x-axis. I have only used the code below:

dotplot val1 val2 val3 val4 val5 val6, mean

Reproducible data can be created as follows:

clear
set obs 100
set seed 12345
forval j = 1/6 {
    gen val`j' = rnormal(`j' * 10, `j' * 2)
}
rename val1 baseline1
rename val2 vrs
rename val3 sgbd
rename val4 baseline2
rename val5 fdst
rename val6 sgvf

but I expect to get the SE in the graph and also the mean.

You could collapse to get a dataset of the mean and SE of the mean. Then fire up twoway scatter and twoway rcap or twoway rspike. dotplot won't take you where you want to go. Here is some technique -- in the absence of a reproducible example from you.

clear
set obs 100
set seed 314159
forval j = 1/6 {
    gen val`j' = rnormal(`j', `j')
}
* you start here
preserve
forval j = 1/6 {
    local call `call' (mean) mean`j'=val`j' (semean) semean`j'=val`j'
}
collapse `call'
gen id = 1
reshape long mean semean, i(id) j(which)
gen upper = mean + semean
gen lower = mean - semean
twoway scatter mean which || rcap upper lower which, xla(1/6) ytitle(val) xtitle(which)
restore

EDIT Making use of the reproducible example, here is some sample code.

clear
set obs 100
set seed 12345
forval j = 1/6 {
    gen val`j' = rnormal(`j' * 10, `j' * 2)
}
rename (val1-val6) (baseline1 vrs sgbd baseline2 fdst sgvf)
* ssc install stripplot
stripplot baseline1-sgvf, bar(level(68)) stack ///
    width(2) vertical msize(vsmall) ms(Sh)

Thank you, but I would also like to get the observations (dots) in the graph. Could you please modify it so that I can have that?

See stripplot from SSC.

Could you please change your code so that it works with stripplot? I could not make it work. I appreciate it.
https://stackoverflow.com/help/minimal-reproducible-example explains the standard here, namely self-contained questions with a reproducible example and a serious attempt at code.

Dear Nick Cox, I have edited my question with the reproducible data to generate the data example. I hope it is fine; otherwise, please let me know. Thanks a lot for your kind help.

Thanks. Probably my example is not a good one. However, is it possible to explain what this does? I should get the mean and SE with dots in the graph. "stripplot baseline1-sgvf, bar(level(68)) stack /// width(2) vertical msize(vsmall) ms(Sh)"

What is plotted is a 68% confidence interval for the mean, which should be close to mean +/- SE of the mean. For that sample size, the intervals are pretty short.
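The closing point, that a 68% confidence interval is close to mean +/- one SE, follows from the normal distribution: 68% central coverage leaves 16% in each tail, and the 84th-percentile z-value is almost exactly 1. A quick check (in Python here purely for the arithmetic; the graphs themselves are Stata):

```python
from statistics import NormalDist

# 68% central coverage leaves 16% in each tail, so the half-width of the
# interval is the 84th percentile of the standard normal distribution.
z = NormalDist().inv_cdf(0.84)
print(round(z, 3))  # roughly 0.994, i.e. close to 1 SE
```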
Welcome to the clevrML Demo. In this demo, you will experience what it is like to make an AI model on the clevrML platform. The way AI learns to do a task is by being shown examples of whatever task you want to solve, with answers next to the examples. The examples are called our dataset and the answers are called labels. Typically, you will call a group of data with the same label a class. Let's take a look at our dataset with the corresponding labels:

import os
from clevrml import Image_Model

key = os.environ["API-KEY"]
model = Image_Model()

Your model has successfully been built! This is our dataset that will help our AI model learn how to recognize various traffic objects. The images are the actual data and the names below them are the labels for the images. This may seem very obvious for humans, but for computers, this isn't as easy. We need to help our model by providing the labels with the corresponding image so that it can generalize for later. One advantage of using clevrML is the amount of data needed to build AI models. clevrML has built a world-class technology called "Active Memory Learning" which allows you to supply as little as three examples per "class" (a group of labels) to generalize well in a task. Contrast this with neural networks (a popular AI method), which would require ~10,000 images each of stop signs, traffic lights, and bicycles (a total of 30,000 images in a dataset). Let's start building our model. We need to first make classes (groups) for Traffic Lights, Stop Signs, and Bicycles. When using the clevrML Image Model API, classes need to be in a folder with the corresponding data. For this demo, however, this has all been done for you (i.e., the folder on the right side of the screen, "/Home/Traffic-Light-Data/"). Let's start with Traffic Lights. Click on all the Traffic Light images to make our "Traffic Light" class. Hint: Click on the images with a yellow border. Now we need to build the Stop Sign class.
Like the previous step, click on all the images of Stop Signs to add the images to our Stop Sign folder. Hint: Click on all the images with a yellow border. Finally, let's add the remaining data to our Bicycle class. Click all the images of Bicycles to add the images to the folder. Hint: Click all of the images with a yellow border. Typically, when using the clevrML SDK, you need to provide the model with the names of the classes so the model knows what to tell you is being predicted. For this demo, this step has been done for you automatically. We need to give our model a name. For this step, pick any name you would like. Our code is built and ready to use. Click the "Run Code" button below to send the request to the Image Model API. Our model has been built using the Image Model API. What you see is the console output from clevrML's servers telling us all the information we need. Typically, the building process can take hours, days, or even weeks of runtime with neural networks. On clevrML, however, building a model takes seconds thanks to Active Memory Learning. Before starting, what is this code? This is a snippet of the official Python Software Development Kit (SDK) for clevrML. The purpose of an SDK is to make APIs easily accessible for the developer by providing ready-made code that can easily communicate with our servers in the cloud (i.e., where all AI models on clevrML are built and deployed). For the purposes of this demo, however, you do not need to know any code. In this demo, you will use an easy UI to build this code snippet to use the clevrML API.
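To make the "three examples per class" idea concrete, here is a toy sketch in Python. To be clear, this nearest-neighbor classifier is not clevrML's Active Memory Learning (that technology is proprietary and not described here); it only illustrates how a model can generalize from a handful of labeled examples. The feature vectors are invented stand-ins for image features:

```python
# Toy stand-in dataset: three labeled examples per class, as in the demo.
# The (x, y) pairs are made-up "image features", not real image data.
examples = {
    "Traffic Light": [(1.0, 0.1), (0.9, 0.2), (1.1, 0.0)],
    "Stop Sign":     [(0.0, 1.0), (0.1, 0.9), (0.2, 1.1)],
    "Bicycle":       [(1.0, 1.0), (0.9, 1.1), (1.1, 0.9)],
}

def classify(point):
    """Return the label of the stored example nearest to `point`."""
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(
        ((label, dist(point, ex)) for label, exs in examples.items() for ex in exs),
        key=lambda t: t[1],
    )[0]

print(classify((0.05, 0.95)))  # Stop Sign
```

The point of the sketch: with only three examples per class, a new point is labeled by whichever class it most resembles, which is the same intuition behind supplying a small labeled folder per class in the demo.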
Azure for the AWS User (Part 1): Identity

If you've been an AWS fan, you might find yourself taking a peek at Azure's offerings. This handy guide will translate AWS' IAM roles and security into Azurese.

I've seen a few forum questions lately from AWS users who want to (or have to) use Azure, and while there are a lot of similar services in either platform, the new user experience and terminology can be very confusing if you're used to AWS. This article is the first in a series of posts that I'm hoping will help users coming from AWS get to grips with Azure. To be very clear, I'm not looking to argue about which platform is best or why you should use one or the other; I'm simply providing the information an AWS user needs in order to quickly get a grasp of Azure and relate it to what they already know. I'll be keeping things pretty high level, and I'll also be focussing on the newer Azure Resource Manager stack for the most part, as this is what I would advise anyone coming new to Azure to use, except where there isn't an ARM version. So, for the first part in this series, we'll take a look at identity, which is usually one of the first areas you'll come up against when trying to gain access to do work in Azure.

AWS IAM and Azure Active Directory

AWS users will be familiar with IAM (Identity and Access Management) as the means to provide user access to AWS, permissions to resources, groups, and roles. The Azure equivalent of this is Azure Active Directory (AAD). Don't be fooled by the name, however; it's not a full-blown cloud version of Microsoft's on-premises Active Directory. While it does act as an identity store and authentication provider, it doesn't have the LDAP functionality of AD or many of the other services (machine join, GPOs, etc.). There is an extension to this that we will talk about later that adds some of this, but for now, think of it as an identity store for Azure.
As an Azure administrator, you can create multiple different AAD identity stores (usually referred to as tenants) which operate independently. When you create Azure resources, they will usually be tied to a specific tenant, and you can grant users in that tenant access to manage the resources. It is possible to grant users from other tenants access to resources; we will cover how later. AAD obviously allows you to store users. Each user in a tenant can be one of three types:

- An AAD user created and "homed" in this tenant.
- An AAD user "homed" in another tenant, who has been added to this tenant to access resources.
- A Microsoft account (formerly Live account) which is granted access to this tenant.

The first type of user is the most commonly used and is directly equivalent to a user created in IAM. The latter two are only really used when you need to grant a user that already exists elsewhere (in another AAD tenant, or an MS account) access to resources rather than create a new user. There are a few more complex concepts that can create users, such as AD sync and federation, that will be discussed later. Further reading: Managing users in Azure Active Directory.

Groups exist in a very similar manner to IAM: you can add users to groups and then assign rights to groups as required. Further reading: Managing access to resources with Azure Active Directory groups.

Roles in the sense of IAM roles that can be assumed by a VM or similar don't really exist in Azure. What Azure does have is the concept of applications and service principals. Applications are, as the name suggests, a way to register an application to get access to your identities. These can be both applications you have developed yourself and off-the-shelf applications which are built to work with AAD (Office 365, Salesforce, etc.).
A service principal is an identity assigned to these applications that will be used by a specific application (or set of applications) to gain access to Azure resources. An application would use the service principal by supplying either a set of keys or a certificate. Further reading: Application and service principal objects in Azure Active Directory.

Azure provides a role-based access control (RBAC) system to allow granting of permissions to resources. Permissions can be granted at the subscription, resource group, or resource level and can be very granular. It is also possible to create your own RBAC roles if the built-in ones are not suitable. Roles are assigned to users, groups, or service principals either through the Azure portal, PowerShell, or the various APIs. It should be noted that RBAC is applied through the new portal (portal.azure.com) and requires a resource to have implemented it, but most have now. In the days of the old portal (manage.windowsazure.com), RBAC did not exist and users could only be granted full administrator rights. If you need a user to manage a service that does not support RBAC, you will need to assign them rights through the old portal; see the managing resources section. Further reading: Get started with access management in the Azure portal.

User, group, and permission management can be undertaken from the Azure portal. As mentioned before, there are two portals you can access: the new one (portal.azure.com), which you want to use whenever possible, and the old one (manage.windowsazure.com), which unfortunately you may still have to use for some services. Azure AD is one of the last services to move out of the old portal, and some of its services are still there. Fortunately, user and group creation and permission management can all be done through the new portal; simply go to "More services" on the left menu bar and search for "Azure Active Directory."
This will open the AAD blade, and from here you can manage users, groups, etc. Should you need to grant users access to manage resources in the old portal, you will need to connect to manage.windowsazure.com, go to the Settings section, click on Administrators, and then click the Add button. This user will have full admin rights on all resources in that subscription. By default, when you create an AAD tenant you will get a domain name of something.onmicrosoft.com, which will be used as the suffix for all user login accounts. If you would prefer to use a custom domain name, you can set up Azure AD to use it. At the time of publishing this article, this needs to be done through the old (manage.windowsazure.com) portal, but I imagine it will be available in the new portal soon. Like IAM, AAD has a programmatic API that can be used to query AAD using REST, including using it as an authentication provider for your own apps. This is referred to as the Graph API. Further reading: Operations overview | Graph API concepts.

AWS AD Connector and Azure AD Connect

Both AWS and Azure provide a way to bring your on-premises identity into the platform rather than manually creating users and groups. Azure does this through Azure AD Connect. AD Connect provides a few services:

- User sync: The simplest approach is to run this tool on a server and have it regularly sync users and groups from on-premises to Azure; this can include password hashes if you wish. Users can then use their on-premises credentials when presented with an AAD login.
- Federation: This is a second, optional step that can be applied to federate your on-premises domain to AAD. This then allows for true single sign-on to AAD resources.
- Pass-through: This is a very new preview feature that allows you to pass user authentication requests from AAD straight through to your on-premises AD, so no syncing is required and passwords always remain on premises.
Further reading: Integrating your on-premises identities with Azure Active Directory; What is Azure AD pass-through authentication.

AWS Directory Service and Azure AD Domain Services

I mentioned earlier that AAD does not provide all the services of on-premises AD, including LDAP, etc. Azure AD Domain Services (AAD DS) is the Azure equivalent of AWS Directory Service. It provides an extension to AAD that adds basic domain controller functionality, including:

- LDAP support
- Machine join
- Simple Group Policy
- Organisational units

AAD DS does have a number of limitations, which I discuss in more detail in this article, so don't assume that it can just be a replacement for your on-premises domain controllers. Further reading: Active Directory Domain Services documentation.

That's a very high-level overview of the identity services in Azure AD. Hopefully, this gave you enough information to get started and an idea of the right places to look for more information. In part 2 of this series, we will look at IaaS and virtual machine services and how they compare.

Published at DZone with permission of Sam Cogan, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
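One detail of the RBAC model covered in this article is worth a concrete illustration: scopes form a hierarchy (subscription, then resource group, then resource), and an assignment made at a scope applies to everything beneath it. A toy Python sketch of that inheritance rule (the IDs are made up, and this illustrates the concept only, not Azure's actual authorization engine):

```python
# Toy illustration of Azure RBAC scope inheritance (not the real engine):
# an assignment at a scope covers that scope and everything beneath it,
# so a role granted on a resource group covers every resource inside it.
def assignment_covers(assignment_scope: str, resource_id: str) -> bool:
    a = assignment_scope.rstrip("/").lower()
    r = resource_id.rstrip("/").lower()
    return r == a or r.startswith(a + "/")

sub = "/subscriptions/0000"  # made-up IDs for illustration
rg = sub + "/resourceGroups/demo-rg"
vm = rg + "/providers/Microsoft.Compute/virtualMachines/demo-vm"

print(assignment_covers(rg, vm))  # True: granted on the group, inherited by the VM
print(assignment_covers(vm, rg))  # False: inheritance only flows downward
```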
Do You Recommend Adding Back Links To Social Bookmarking Sites?

Posted 13 June 2009 - 05:31 PM I have noticed that Google displays higher PageRanked sites with some sort of relevancy and does not show links from forums or guest books as much, if at all. This is from using the "LINK:www.mysite.com" entry in Google search. I noticed that using other SEO tools, or simply using "mysite.com" enclosed in quotes, will show up those other links. What I'm getting at is this. Do you feel adding links to social bookmarking sites is... 1) Time WELL spent or... 2) Time, well... spent! I'm happy to do the work as long as I know the results will be there. I mean, could you imagine someone working for weeks and finding out they submitted to all links made with "nofollow" tags? OUCH! Anyway, I'm considering adding my links to some bookmarking sites while I wait for the more relevant link trades, etc., but I wanted to find out what others here feel is the effectiveness of them before I get too involved.

Posted 13 June 2009 - 07:02 PM First, don't trust those link: searches on Google. It's notoriously inaccurate. Even Google themselves admit they don't show every link they know about. Second, I hope I'm misunderstanding your question. If you're proposing going around and creating accounts in all of these different places just to drop your link in the profile or whatever, that's just spam. Not to mention that, as you've already mentioned, lots of places nofollow those as a default action. In other words, if you're not planning on actually contributing something to the SM site, forum, blog, or whatever, you shouldn't be creating an account there. Only create one in those places where you're going to be an active participant.

Posted 13 June 2009 - 09:09 PM

Posted 14 June 2009 - 04:54 AM I have accounts on some of them, and when I create an article landing page for a term I generally add it to the social bookmarking / article sites too...
it definitely doesn't do any harm, and it has been known to generate indexing of a new page within hours, sometimes minutes... although of all the times I've checked, I've only seen this twice, so I probably just got lucky. Forums are a great way of getting good relevant links, as the subject is hopefully exactly that of the receiving page. Not only that, but you'll find you get a lot of clickthroughs from the thread. Keep it on topic and relevant and you may even find yourself helping people out, rather than just blatantly spamming the forum for a link. This is why most forums have a minimum post count. Guestbook links... no... too spammy for me. Don't degrade yourself. This is the last resort as far as I'm concerned. Nothing wrong with links on these types of sites in general. Nothing right about them either... they work, they're not spammy, they just aren't very professional.

Posted 14 June 2009 - 09:03 AM Please see the pinned thread about Google's link: command at the top of our link building forum.

Posted 15 June 2009 - 11:59 AM Edited by bobmeetin, 15 June 2009 - 01:44 PM.

Posted 15 June 2009 - 01:37 PM Anyway, I appreciated the comments from adibranch. I think what I will do is expand the use of the forums I use that relate to my site, but find ones that support linking. That is a win-win situation. I'll also take the same approach for bookmarks. Guest books, no. I want to state that I really appreciate the mods at High Rankings. I agree with the overall theme here, which is to build good site content and good site structure, and ask yourself: is what I'm doing good for my visitors? After all, that is the goal of all search engine algorithms. As those algorithms improve over time, sites that adhere to the High Rankings principles should get better and better rankings. "Computers are incredibly fast, accurate, and stupid. Human beings are incredibly slow, inaccurate, and brilliant. Together they are powerful beyond imagination."
Anyway, I'm no Einstein when it comes to SEO, but I'm glad to know there are a few of them around here. I especially appreciated reading the advice in the Google pinned thread not to put bad outgoing links on a site. That made a lot of sense. The discussion on "nofollow" was great, too. I have one additional question about in-site links to your own pages vs. outbound links to other sites. I get that search engines start thinking "link farm" when too many outbound links are present, but please tell me (providing it is the truth, of course) that search engines ignore the number of in-site links to your other pages. I mean, I need several of those links to cities, well over 100, and they are all on one page. I'll keep it that way, because it is best for the visitors, but I would like to know if it is accepted well or not by search engines, just so I have the knowledge. What say you?

Posted 16 June 2009 - 01:01 PM Actually, they think "link farm" when they come across a network of domains, all of which link to every single one of the other domains in the network. That's pretty much the definition of a link farm. The number of "off domain" links on a single page doesn't have anything to do with link farming (unless all those pages are linking back to you and also linking to each other as well). What the spiders think when they come across a page with a lot of outbound links on it is: "gotta make a note of those links for somebody to follow up on." (Which is pretty much the same thing they think when they come across any page with links on it, internal or external, no matter how many links there are.) The SEs expect you to have internal navigation. Sometimes, especially on large sites, that internal navigation gets pretty "linky." But it wouldn't make any sense for them to penalize a site for having robust internal navigation. So they don't. SEs don't ignore that internal navigation, not at all.
It counts perfectly well -- and can be an excellent source of good keyword-rich link anchor text pointing at your interior pages. In fact, for most sites, your own "on site" links are the primary means by which those internal pages get "link juice." Without those "on site" links, for many sites, there would be almost no way any interior page could rank for, well, anything, because they just don't have enough "off domain" inbound links otherwise. Use as many links as make sense for your human visitors. Use them in a way that makes sense for your human visitors. The SEs will sort it out.

Posted 18 June 2009 - 12:47 PM It wouldn't make sense to me that they would penalize you for your own internal links, but that was just my assumption. I try to avoid the "A" word whenever possible; so, thanks so much for clearing that up for me.
Most networks use password-based authentication to secure network communications such as VPN and wireless connections. When used with secure authentication methods such as MS-CHAP v2 (for PPTP VPN connections) or WPA (for wireless connections), password-based security can be quite secure. However, password-based authentication can be inconvenient (you must remember a password or network key), and doesn’t ensure the integrity of transmitted data—an industrious hacker could intercept, replay, and tamper with data. One way of addressing these issues is to sign communications with a digital certificate. Doing so also enables clients to verify the identity of the server (reducing the risk of rogue servers), and to digitally sign and encrypt e-mails. Digital certificates are required by L2TP VPN connections and 802.1X authentication of wireless networks. The first step in setting up 802.1X authentication or L2TP VPNs is to install Certificate Services and create an enterprise root Certificate Authority (CA), which can then be used to deploy certificates to users and computers on the network. To do so, complete the following steps: Open Add Or Remove Programs in Control Panel and then click Add/Remove Windows Components. The Windows Components Wizard appears. On the Windows Components page, select Certificate Services in the component list. The installer warns you that after the CA software is installed, you can’t change the name of the server or move it into or out of an Active Directory domain. Click Yes, and then click Next. On the CA Type page (Figure 15-9), select Enterprise Root CA and then click Next. On the CA Identifying Information page (Figure 15-10), type a descriptive name for the CA (most likely including the company name) and then click Next. Figure 15-9: The CA Type page of the Windows Components Wizard. Figure 15-10: The CA Identifying Information page of the Windows Components Wizard.
On the Certificate Database Settings page, accept the default storage location for the certificate database and log files and configuration information. Note that the location you specify isn’t where issued certificates are stored; it’s where the CA’s own certificates are stored. Click Next. If the computer acting as the Enterprise Root CA crashes and you lose the CA database, you must reissue every certificate. Consider this extra motivation to regularly back up your entire Windows Small Business Server installation. Click Yes when prompted to stop Microsoft Internet Information Services. When prompted, insert the appropriate Windows Small Business Server 2003 CD or DVD and then click OK. The Windows Components Wizard completes the installation of Certificate Services. Click Finish when it’s done. To request computer and user certificates for client computers, first create a console on the client computer that displays the Certificates (Local Computer) and Certificates (Current User) snap-ins. To do so, complete the following steps: On a client computer, click Start, choose Run, type mmc in the Open box and then click OK. This opens a blank Microsoft Management Console (MMC). Choose Add/Remove Snap-In from the File menu. The Add/Remove Snap-In dialog box appears. Click Add, and select Certificates in the Add Standalone Snap-In dialog box, and then click Add again. In the Certificates Snap-In dialog box (Figure 15-11), select Computer Account, click Next, select Local Computer, and then click Finish. Figure 15-11: The Certificates Snap-In dialog box. In the Add Standalone Snap-In dialog box, select Certificates again and click Add. The Certificates Snap-In dialog box appears. Select My User Account and then click Finish. Click Close and then OK. This displays the MMC console with the two Certificates snap-ins. Choose Save As from the File menu and then save this to a network share so that you can use the console from any computer on the network. 
After creating a console that displays the Certificates (Local Computer) and Certificates (Current User) snap-ins, use the following steps to request and install computer and user certificates on a client computer. (But first join the computer to the domain, as described in Chapter 12.) While connected to the network using a wired network connection, a wireless connection using 802.1X authentication with PEAP-MS CHAP v2, or an existing (PPTP) VPN connection, expand the Certificates (Local Computer) container, right-click Personal, choose All Tasks from the shortcut menu, and then choose Request New Certificate. Click Next on the first page of the Certificate Request Wizard. Select Computer on the Certificate Types page (Figure 15-12), and then click Next. Figure 15-12: Requesting a new certificate for the local computer. On the Certificate Friendly Name And Description page, type a friendly name and description for the certificate. Click Next and then Finish. Click OK in the dialog box that appears if the request was successful. A new certificate is then created in the Certificates (Local Computer)\Personal\Certificates folder. Expand the Certificates (Current User) container, right-click Personal, choose All Tasks from the shortcut menu, and then choose Request New Certificate. Click Next on the first page of the Certificate Request Wizard, select User on the Certificate Types page (Figure 15-13), and then click Next. Figure 15-13: Requesting a new certificate for a user. On the Certificate Friendly Name And Description page, type a friendly name and description for the certificate, click Next, and then click Finish. Click OK in the dialog box that appears if the request was successful. A new certificate is then created in the Certificates (Current User)\Personal\Certificates folder. 
Just to be safe, expand the Trusted Root Certification Authorities container in either snap-in, select Certificates, and verify that the enterprise root CA that you created on the Windows Small Business Server computer appears in the list. (In our case, the enterprise root CA is Example Company Internal Certificate Authority.)

You can configure clients to automatically request computer certificates, install the trusted root certificate from the Windows Small Business Server, and receive the proper 802.11 settings by using Group Policy. This is the best way to deploy 802.1X authentication settings to clients once you’ve tested the system. For information on how to do this, see the “Using Group Policy to Automatically Configure 802.11 and Certificate Settings” section later in this chapter.

The Windows Small Business Server computer should obtain a domain controller certificate so that it can validate its identity to clients for L2TP VPN connections and 802.1X authentication. To do so, first install Certificate Services as an enterprise root CA (as discussed earlier in this chapter), and then use the following procedure to request a certificate from the CA.

1. Open the Certificates (Local Computer) console. See the “Creating a Local Computer and Current User Certificates Console” section earlier in this chapter if you have yet to create this console.
2. Right-click the Personal container, choose All Tasks from the shortcut menu, and then choose Request New Certificate. The Certificate Request Wizard opens.
3. Click Next on the first page of the Certificate Request Wizard, and on the Certificate Types page, select Domain Controller. Click Next to continue.
4. On the Certificate Friendly Name And Description page, type SBS Server Certificate in the Friendly Name box, optionally type a description, and then click Next.
5. Review the settings and then click Finish. Click OK in the dialog box that appears, which states that the certificate request was successful.
(If this doesn’t appear, there’s a problem with Certificate Services.)
If you want to actually BE IN some of the pictures you are taking, then you will find life much easier if you have either a wireless remote or an interval timer. Both of these allow you to press the shutter remotely, so you can be in front of the camera but still in control. They work a little bit differently: a wireless remote is a small transmitter that you simply point at the camera. They are small and inexpensive, and allow you to take a picture exactly when you would have pressed the shutter if you were behind the camera. An interval timer (also called an intervalometer) is usually wired to your camera, and you manually set it to take a set number of pictures over a set period of time - for example, to take 50 photos, with one taken every 5 seconds. Once you have set it, you just leave it to "press the shutter" at the intervals you have set. I have both a wireless remote and an interval timer, and I much prefer the interval timer because I feel a lot less "forced" and I can interact naturally with my child without having to point the remote in the direction of the camera and keep it out of view at the same time. That said, there are pros and cons for both:

Why Choose an Interval Timer:
- You don't have to quickly try to hide the remote in your hand whilst taking the picture.
- They allow you to be more relaxed, so you can carry on with whatever you are doing without feeling "posed".
- Great for candid shots of you interacting with children etc.
- You can set it to take as many shots as you wish, so you can set it up and then just leave it.

Why Choose a Wireless Remote:
- You won't end up with as many wasted shots as you will with an interval timer.
- It allows you to take the image exactly when you want it, instead of waiting for the interval timer to go off.
- They are a bit cheaper.
It's also worth noting that some cameras have an interval timer feature built in (mine does not, unfortunately), so you might not even need to purchase one! I'm afraid I don't know exactly which models have it and which don't, so I suggest checking your camera manual before you buy anything. (This seems to be more common on Nikon than on Canon.) I have the cheap and cheerful versions of each of these, but you can get official Canon versions (and indeed Nikon, if that is your preference), though they can be more costly. Below are the two I have, and I have had no problems with either (you can get these in different versions that are compatible with different camera models - just search on Amazon for your camera model, as mine may not be compatible with your camera). Whichever one you choose, I hope you manage to actually get in some images this year! P.S. You can see some images and a pull-back of the set-up when using my timer remote here.
👋 A Worldwide Community and Resource Hub for OSPO Practitioners

TODO is an open community of practitioners who aim to create and share knowledge, collaborate on practices, tools, and other ways to run successful and effective Open Source Program Offices or similar Open Source initiatives. TODO Group is formed by its Community participants and General Members.

🚀 Learn from a diverse community of professionals with years of experience building OSPOs worldwide

“There is no broad template for building an open source program that applies across all industries — or even across all organizations in a single industry. That can make its creation a challenge, but you can learn lessons from other organizations (companies, academic institutions, governments, and more) and bring them together to fit your own organization’s requirements."

Founded in 2012, the TODO Group is a place to share experiences, develop best practices, and work on common tooling to improve OSPO adoption and education.

📝 Discover the TODO resources and initiatives

TODO resources are open to everyone and available at the TODO Group GitHub repo under the CC-BY 4.0 License. We encourage people to share their knowledge and help grow this community by adding their contributions to the different TODO initiatives.

Connect with experienced OSPO professionals and industry supporters across sectors. Choose the format that best fits your needs!

- OSPOlogy Community Meetings: monthly community meetings with a defined agenda, featured speakers, and informal face-to-face discussions.
- OSPO Discussions: general OSPO threads proposed by the community.
- Slack Conversations: real-time conversations to connect with peers in the OSPO community.

Course materials to train folks in OSPO management and implementation.
- OSPO Training Modules: OSPO 101 is a course on everything you need to know about open source program office management.

Resources created by experienced professionals to keep learning about OSPOs:

- OSPO 5-stages Model & Archetypes: This whitepaper provides a set of patterns and directions – and even a checklist! – to help implement an OSPO or an open source initiative within corporate environments. This includes an OSPO maturity model, practical implementation from noted OSPO programs across regions and sectors, and a handful of broad OSPO archetypes (or personas), which drive differentiation in OSPO behavior.
- The OSPO Guide: An ongoing set of documents that provides a holistic view and alignment of Open Source Program Office terminology, tasks, and responsibilities, as well as public use cases and learning resources in a cohesive application.
- OSPO Newsletter
- TODO OSPO guides
- OSPO articles and use cases
- OSPO Policies

Join the OSPO movement! How many OSPOs are out there? Which tools make up the OSPO tool infrastructure? Which communities are OSPO supporters? Explore the OSPO Ecosystem and help us build this Landscape together to give visibility to OSPOs worldwide!

A list of research around open source and OSPOs, as well as relevant quotes.

OSPO tooling started by TODO Group community members:

- RepoLinter: Lint open source repositories for common issues.

We use a TODO Kanban board to manage the different issues and PRs from the community.
Sentiment analysis is used to determine whether a given text contains negative, positive, or neutral emotions. It’s a form of text analytics that uses natural language processing and machine learning. Sentiment analysis is also known as “opinion mining” or “emotion artificial intelligence”. AutoNLP is a tool to train state-of-the-art machine learning models without code. It provides a friendly and easy-to-use user interface, where you can train custom models by simply uploading your data. The data was originally hosted by SNAP, a collection of more than 50 large network datasets. b. Training a sentiment model with AutoNLP Natural Language Processing allows researchers to gather such data and analyze it to glean the underlying meaning of such writings. The field of sentiment analysis—applied to many other domains—depends heavily on techniques utilized by NLP. This work will look into various prevalent theories underlying the NLP field and how they can be leveraged to gather users’ sentiments on social media. These insights could be critical for a company to increase its reach and influence across a range of sectors. As this example demonstrates, document-level sentiment scoring paints a broad picture that can obscure important details. In this case, the culinary team loses a chance to pat themselves on the back. But more importantly, the general manager misses the crucial insight that she may be losing repeat business because customers don’t like her dining room ambience. You can see that this review actually tells a different story: even though the writer liked their food, something about their experience turned them off.
Getting started with sentiment analysis in NLP

It includes social networks, web graphs, road networks, internet networks, citation networks, collaboration networks, and communication networks. Most languages follow some basic rules and patterns that can be written into a computer program to power a basic part-of-speech tagger. In English, for example, a number followed by a proper noun and the word “Street” most often denotes a street address. A series of characters interrupted by an @ sign and ending with “.com”, “.net”, or “.org” usually represents an email address. His research interests include social computing, machine learning, and natural language processing. Pang and Lee suggest that all objective content should be removed for sentiment analysis. Instead of removing objective content, in our study, all subjective content was extracted for future analysis. A sentiment sentence is one that contains at least one positive or negative word. All of the sentences were first tokenized into separate English words. Whether you’re exploring a new market, anticipating future trends, or seeking an edge on the competition, sentiment analysis can make all the difference. Discover how we analyzed customer support interactions on Twitter. Around Christmas time, Expedia Canada ran a classic “escape winter” marketing campaign.

- Language detection can detect the language of written text and report a single language code for documents submitted within a wide range of languages, variants, dialects, and some regional/cultural languages.
- If the numbers are even, the system will return a neutral sentiment.
- As detailed in the steps above, they are trained using pre-labelled training data.
- But more importantly, the general manager misses the crucial insight that she may be losing repeat business because customers don’t like her dining room ambience.
- Relying on these traits leaves a lot to gut instinct and luck.
- As a result, sentiment analysis is becoming more accurate and delivers more specific insights.

We will find the probability of each class using the predict_proba() method of the Random Forest classifier and then plot the ROC curve. We can experiment with the value of the ngram_range parameter and select the option that gives better results. Scikit-Learn provides a neat way of performing the bag-of-words technique using CountVectorizer. I reveal the challenges with semi-supervised learning, best practices, 9 techniques, 16 essential models, and how 3… Learn more about how sentiment analysis works, its challenges, and how you can use sentiment analysis to improve processes, decision-making, customer satisfaction, and more. No matter how you prepare your feature vectors, the second step is choosing a model to make predictions. Instead of counting only words selected by domain experts, we can count the occurrences of every word that we have in our language. Just tried out chatGPT’s sentiment analysis and was pleasantly surprised by its accuracy. If you’re in need of a simple, reliable tool for determining the sentiment of text, give it a try. #chatGPT #AI #NLP #GPT3 #artificialintelligence #DataAnalytics #ML — TREE Industries (@TREE_Industries) December 5, 2022 As the data is in text format, separated by semicolons and without column names, we will create the data frame with read_csv() and the “delimiter” and “names” parameters. We humans communicate with each other in what we call natural language, which is easy for us to interpret but much more complicated and messy if we really look into it. In this article, we will focus on sentiment analysis of text data. In this post, we’ll look more closely at how sentiment analysis works, current models, use cases, the best APIs to use when performing sentiment analysis, and current limitations.
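As a toy illustration of the CountVectorizer-plus-Random-Forest approach described above (the four reviews and labels below are made-up placeholders, not the dataset used in this article):

```python
# Bag-of-words features via CountVectorizer, then a Random Forest whose
# predict_proba() output could be fed into an ROC curve.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

texts = [
    "great food and friendly staff",
    "terrible service and cold food",
    "loved the desserts, will return",
    "awful experience, never again",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# ngram_range=(1, 2) counts both single words and two-word phrases;
# experimenting with this value is the tuning step mentioned above.
vec = CountVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(texts)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, labels)

# One probability per class for each input row.
probs = clf.predict_proba(vec.transform(["the food was great"]))
print(probs.shape)  # (1, 2)
```

With a real labelled dataset, the positive-class column of predict_proba() would be passed to sklearn.metrics.roc_curve to plot the ROC curve.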
XF performed the primary literature review, data collection, and experiments, and also drafted the manuscript. JZ worked with XF to develop the article's framework and focus. It supports tokenization, part-of-speech tagging, named entity extraction, parsing, and much more. Luckily there are many online resources to help you, as well as automated SaaS sentiment analysis solutions. Or you might choose to build your own solution using open source tools. With irony and sarcasm, people use positive words to describe negative experiences. It can be tough for machines to understand the sentiment here without knowledge of what people expect from airlines. In the example above, words like “considerate” and “magnificent” would be classified as positive in sentiment. Human analysts have limited time to process and analyze these data manually, hence sentiment analysis is most often used by businesses to gauge audience perception of their brand. This method employs a more elaborate polarity range and can be used if businesses want a more precise understanding of customer sentiment/feedback. The response gathered is categorized into a sentiment that ranges from 5 stars to 1 star. The challenge is to analyze and perform sentiment analysis on the tweets using the US Airline Sentiment dataset. This dataset helps gauge people’s sentiments about each of the major U.S. airlines. Driverless AI performs feature engineering on the training dataset to determine the optimal representation of the data.

- A good deal of preprocessing or postprocessing will be needed if we are to take into account at least part of the context in which texts were produced.
- One fundamental problem in sentiment analysis is categorization of sentiment polarity [6,22-25].
- In this article, I will demonstrate how to do sentiment analysis on Twitter data using the Scikit-Learn library.
- Sentiment analysis aims to gauge the attitudes, sentiments, and emotions of a speaker/writer based on the computational treatment of subjectivity in a text.
- This is where machine learning can step in to shoulder the load of complex natural language processing tasks, such as understanding double-meanings.
- You’ll notice that these results are very different from TrustPilot’s overview (82% excellent, etc.).

A great VOC program includes listening to customer feedback across all channels. You can imagine how it can quickly explode to hundreds and thousands of pieces of feedback, even for a mid-size B2B company. A drawback of NPS surveys is that they don’t give you much information about why your customers really feel a certain way. They capture why customers are likely or unlikely to recommend products and services. Since tagging data requires that tagging criteria be consistent, a good definition of the problem is a must. Currently, transformers and other deep learning models seem to dominate the world of natural language processing. NLTK, or Natural Language Toolkit, is one of the main NLP libraries for Python. It includes useful features like tokenizing, stemming, and part-of-speech tagging. VADER works better for shorter sentences like social media posts. Are NLP and sentiment analysis the same? Sentiment analysis (or opinion mining) is a natural language processing (NLP) technique used to determine whether data is positive, negative or neutral. Sentiment analysis is often performed on textual data to help businesses monitor brand and product sentiment in customer feedback, and understand customer needs. You’ll need to consider the programming language to use as well. Consider the example, “I wish I had discovered this sooner.” However, you’ll need to be careful with this one, as it can also be used to express a deficiency or problem.
For example, a customer might say, “I wish the platform would update faster!”
French Long Stay Visa Tracking in India I had applied for my French student/long stay Visa in India earlier this month (biometrics on 4th). I did pay for the SMS and e-mail updates, though I have received none to date. Upon using the website link for tracking applications, I was informed that it was forwarded to the appropriate consulate on the 6th of July. I have not heard anything from the VFS or had any change in the update. What should I do to find out the status, or is it enough to just wait and have the courier come home one fine day? When I was in this situation there wasn't much I could do besides wait and check my email all the time. My visa was late and I started work 3 weeks late as a consequence. I was finally able to call them and get a human on the phone after a lot of effort, but this was in the UK so YMMV. @lafemmecosmique: even though this could vary by country of application, that's comforting, to be honest. I applied well ahead of time so that I could account for an unforeseen delay. I need to get going only by the end of next month, but it's still a bit unsettling to know that there's no update via e-mail or SMS even after signing up for it. The email/SMS thing probably means that they will send you an email and SMS to let you know that your passport is ready. They actually do not process long stay visas in-country; they are sent to France, which takes additional time, and so the processing centre won't have any information for you until they get it back from France. If you already got to the point where your stuff was sent, I don't think you'll be denied. There are just sometimes delays with these things. There was another post here where an OP had good results by getting their university/lab to contact the French govt. @lafemmecosmique: I didn't know the long stay visas weren't processed in-country! They usually give a 20-25 day timeframe on their website, so I was expecting it to be done pronto.
@lafemmecosmique if you would add an answer using your comments, I'll upvote. @Dorothy Answer added. Opening disclaimer: This answer is based on personal experience and things I was told at the time, which was in the UK, not in India, and was for a different long-stay visa than yours. At the time, my long-stay visa was very delayed. I was told that there was nothing I could do besides wait. In my case I started work about 3 weeks late. They will email or text you (if you sign up for this option) when your passport is ready to be collected, and they will not specify whether the visa is accepted or rejected. You find that out when collecting your passport, which in my case could be done in person or by post. If you're really in a pinch, check whether the website you used to start your application has a 'Contact Us' page. You can send an email and request that they call you. They will, sometimes, call you back after that. NB: I was told by the agent that long-stay visas are sent to France (Nantes, IIRC) and processed there. So at some point, to the company, the process becomes a 'black box' and they won't know anything once it's sent, until it arrives back. Note also that the month of August is very slow in France, if you're aiming for around this time. Another poster on this SE had good results getting their university/lab to contact the visa office, but they were a researcher, so this may be less of a valid option for you. If you have gotten to this point already, I would imagine it's quite unlikely that you will be denied. The company has already vetted your application, so it's looking good. Bonne chance!
I recently attempted to run MCMC sampling in OpenBUGS using a large dataset and a spatially explicit occupancy model. Here I report some potentially interesting speed and memory issues that I noticed.

Model and Data

I won't go into the technical details of my model, as it is not the main focus of this post. In brief, I am modelling the geographic distribution of a certain species as a function of environmental conditions (plus, there is a Conditional Autoregressive component). The model uses data points (grid cells) spread continuously over the whole United States, like this:

So there are around 20,000 grid cells within the US, and I am modelling the probability of occurrence of the species within each of the grid cells.

Hardware & software settings

The machine I am using is a Linux-operated cluster with 3-GHz individual cores and 192 GB of RAM. I am using OpenBUGS to do the MCMC sampling and I call it from within R using the BRugs package. In order to speed things up a bit, I parallelized three MCMC chains using an approach similar to what I described in my older post, using the snow and snowfall packages.

Speed and memory issues

I will focus only on single-chain MCMC sampling now. Running a single-chain burn-in (no parameters monitored) in this setting is smooth and, even with this large dataset, relatively fast (~1 hour to run 150,000 burn-in iterations). An interesting issue emerged when I decided to monitor posterior distributions of the predicted values in each of the 20,000 grid cells (in order to obtain a map of prediction intervals) - a task that is memory-hungry by definition. I monitored elapsed time and memory usage of each OpenBUGS step. Here are the results for two very short MCMC chains (a: 100 and b: 1,000 iterations) during which I monitored predicted values in all of the 20,000 grid cells: First, you can see that the machine spends substantial time on steps like setting the monitored parameters or writing the summary files.
It also looks like it takes some time to start and stop the MCMC sampling during the monitoring phase - in fact so much that there is not a substantial difference between a) and b). Second and more importantly, it looks like OpenBUGS allocates 64 MB of static memory to the whole procedure, and this allocated memory does not change as long as its contents stay under 64 MB. This is what happens when I turn the volume of iterations up to 10,000 (c) and even to 650,000 iterations (d; note that I used 10 thinning steps): Obviously, everything takes longer. But look at the memory use - it looks like when the amount of data to store exceeds 64 MB, OpenBUGS starts to allocate memory dynamically, and each additional set of iterations is slower than the previous one. Moreover, in the case of d) the whole thing crashes after the memory usage reaches 1.7 GB. My machine has 192 GB of RAM, so this can't be the reason. I suspect that there is something clumsy in the way OpenBUGS allocates large chunks of memory. A heap overflow? A colleague of mine remarked that there is a rumor in the BUGS community that some MCMC samplers have a speed threshold - above a certain data size the whole thing all of a sudden goes slow. Is this what happened to me? Do the OpenBUGS developers know about it? Or could the problem be somewhere else? Is there a way to avoid this problem? Any ideas or comments welcome!
package transforms import "github.com/dustismo/heavyfishdesign/path" // This shifts the path by the requested amount in X and/or Y type ShiftTransform struct { DeltaX float64 DeltaY float64 SegmentOperators path.SegmentOperators } func (st ShiftTransform) PathTransform(p path.Path) (path.Path, error) { pt := func(p path.Point) path.Point { return path.NewPoint(p.X+st.DeltaX, p.Y+st.DeltaY) } newPath := []path.Segment{} for _, seg := range p.Segments() { s, err := st.SegmentOperators.TransformPoints(seg, pt) if err != nil { return nil, err } newPath = append(newPath, s) } return path.NewPathFromSegments(newPath), nil }
M: Ask HN: Is anyone studying for technical interviews? - arjun_tina Just moved to the Bay Area and have just started studying for technical interviews.<p>1. Does anyone want to do mock interviews &#x2F; study with me &#x2F; team up in some other way? Email me: and I&#x27;ll create a group. 2. Any unconventional advice&#x2F;tips for studying or the interview itself? R: collyw No. If you want me for a skillset that I have then interview me on that. If you think that dumb white board questions are relevant, then there is a 50% chance that I will get it right. Your loss if I get it wrong. You know that job isn't going to be related to some algorithmic crap. (At least it doesn't waste too much of my time - compared to some pretty hefty take home tests I have done). After 15 years, there is too much stuff to study for; it's likely a waste of time. I know the stuff that I know well. I won't bullshit any claims about stuff that I don't know. R: csixty4 Same. Shoot, I've been pulled into the boss's office before and told not to use terms like "big O" on the job because they confuse the junior devs. And I've never needed to balance a tree or reverse a linked list or any of those things since college. It hasn't stopped me from getting projects done. R: theptrk I'm super interested in creating an intensely detailed mind map of what you need to answer these tech interviews. I always hear that the interview process is broken and we need a new system etc, but I would imagine we'd want our new coworkers to know how to create an array, and maybe create pointers to it and maybe adjust pointers based on a condition, and given an arbitrary algo be able to apply the above skills to a particular set of conditions. And I've always wondered how much better it would be if there was some type of consistent grading to these interviews. Anyway if that's something you're interested in exploring I'd be happy to chat. R: byebyetech iMindMap is pretty good software.
I am using it to build a mind map for iOS interviews. I am also thinking of creating an algo-related one in the future. There seem to be around 50-60ish main patterns in coding problems that are repeated in almost all of the problems I've seen so far. If you can learn those you can use them as lego bricks to build your solutions. Interview problems have obvious constraints of time, whiteboard space and difficulty. If you master problems that fall within those constraints you can pass any interview. R: addcn One unconventional tip for code challenges comes to mind. A candidate used it on me and it was worth a ton of points in my book. Instead of just doing the challenge, ask the interviewer what they want you to optimize for? Performance? Shortness? Readability? Maintainability? Make it clear there are tradeoffs to each and that the tradeoffs you choose often matter more than the code itself. As a plus you also get insights into what the company values. R: vl623 Aside from reading "Cracking the Coding Interview", I recommend trying out challenges at Hacker Rank, or LeetCode (or anything similar). Do it with the online editor, or even use their questions as practice for whiteboard questions. Whatever you do for a whiteboard, try transcribing that as a submission to see how close to real code you wrote. Some companies you interview with care about that. R: skinc Hey, I work at Karat and we offer free mock interviews for anyone working on their technical interviewing skills: karat.io/practice We're also always hiring interviewers. If you love interviewing and/or are looking for well-paying, remote work, shoot me an email at R: sitkack My highest recommendation is [https://www.pramp.com](https://www.pramp.com) Also, being able to intelligently explain * top 10 algorithms of all time * top 20 "popular" technologies Sign up for an online judge [1] and do a couple easy to medium questions each morning.
[1] [https://en.wikipedia.org/wiki/Online_judge](https://en.wikipedia.org/wiki/Online_judge) R: arjun_tina Great - thanks for sharing R: Goosey Highly encourage practicing with interviewing.io I'm not affiliated with them, just really like what they are doing. R: awaythrow101 Echoed. I got into their system and was able to get some real interviews through the platform as well. When you get in, they give you three "guaranteed" interviews that you can schedule essentially any hour of the day with at least 24 hours notice. I _suspect_ that they pay their interviewers for these interviews (hence the guaranteed nature of them), but don't quote me on that. After those three interviews, your available interview slots drop off dramatically; currently there is a two week wait period. If you do well enough (appears to be top 10%), they will start acting as a recruiter; you can have real tech screens through their platform anonymously, and "unmask" and go onsite if the screen goes well. I'm not sure what the filter is for letting people in but I suspect it's fairly manual right now, especially if they're paying interviewers for the three guaranteed slots people get. Keep in mind that Gainlo charges $100+ _per interview_ for the same service. I actually got better feedback from interviewers on interviewing.io than the one interview I did from Gainlo (YMMV of course). Google remote onsite on Friday! R: arjun_tina Good luck for the onsite :) R: sharadov What position will you be interviewing for? R: arjun_tina Software Engineer (general) and iOS Engineer R: wingerlang I made a service that sends you 1 iOS interview question per day. It includes both technical and experiences questions. It is still in "early alpha" and not exactly polished, but I take feedback and try to improve it. [http://interviewq.io/](http://interviewq.io/) The purpose is to get a question per day and it is up to you to see if you can answer it or not. 
If you can't answer it, then you've identified something to study / think about. R: arjun_tina Signed up. Cool idea!
If an illegal hacker wants to do something to your system, such as plant a virus, a Trojan horse program or spyware, he has to gain access to the system's root directory and the unlimited power that goes with that access. Once established as root, the intruder can modify system commands to hide his tracks from the systems administrator and preserve his root access. The easiest way to do this is via a rootkit. Generally, a hacker obtains normal, user-level access to a computer or network by guessing or stealing a password or exploiting some known vulnerability. Then he finds a way to collect user identities and passwords to other machines on that network while simultaneously erasing all evidence of his activity. Years ago, the hacker would have done this by exploiting his direct knowledge of and experience with the system and his personal programming skills. Today the job is simplified - the hacker can use one of many available rootkits that pretty much automate the process. Originally, the term rootkit referred to a set of modified and recompiled Unix tools (typically including ps, netstat and passwd) designed to hide any trace of the intruder's presence or existence. David O'Brien has traced the lineage of rootkits back to the early 1990s, when Solaris and Linux operating systems were the primary targets. Rootkits are no longer limited to Unix-like systems; similar tools are available for other operating systems, including Microsoft Windows. The name rootkit may suggest a set of canned attack scripts for obtaining root access, but this is not really the case. A rootkit may include programs to monitor traffic, create a back door into the system, alter log files and attack other machines on the network. In almost all cases, a rootkit itself causes no direct damage. Instead, its function is to mask the presence of other types of (usually malicious) software, such as keylogging Trojan horses, viruses or worms. 
Rootkits do this by hiding or removing traces of log-in records, log entries and related processes. Some rootkits replace the binary files for system commands with modified versions designed to ignore attacker activity in order to escape detection. For example, on a Unix or Linux system, the rootkit may replace the list files command (ls) with one that ignores files located in specified directories. Or it may replace the ps command, which lists processes running on the system, with a similar command that ignores any processes that the attacker has started. Programs that log system activities can be similarly modified, so that when the systems administrator checks the logs, everything looks normal despite the fact that the system has been compromised.

Both rootkits and computer viruses modify core software components, inserting code to hide their presence and perform some additional function (what is called the payload). The key difference is that the computer virus attempts to spread itself to other systems, whereas a rootkit generally limits itself to a single system. The rootkit's payload attempts to maintain the integrity of the rootkit itself -- i.e., to ensure that the target system remains compromised. For example, every time a computer runs one of the rootkit's commands, the rootkit also checks to see that other system commands on that machine are still compromised and reinfects them as necessary. The rest of the payload generally involves back doors, hidden command-line switches or "magic" environment-variable settings that circumvent normal access controls.

A rootkit sitting inside one of your systems is prima facie evidence that your system has been hacked, and it's something you want to know about. One of the rootkit's main goals is to hide its very existence, but you can detect user-mode rootkits, which accomplish their task by replacing binaries, by looking for changes in the size, date and checksums of key system files.
Kernel-mode rootkits are harder to find, because they take advantage of Unix's (or Linux's) ability to load kernel extensions on the fly. These rootkits sit deep inside the operating system, intercepting system calls from legitimate programs and returning only the data the attacker wants you to see.

The fundamental problem in detecting rootkits is that you can't trust your operating system. You can't believe what the system tells you when you request a list of running processes or files in a directory. One way to get around this is to shut down the suspect computer and check its storage after booting from alternative media that you know are clean, such as a rescue CD-ROM or a dedicated USB flash drive. A rootkit that isn't running can't hide its presence, and most antivirus programs will find rootkits by comparing standard operating system calls (which are likely to be altered by the rootkit) against lower-level queries, which ought to remain reliable. If the system finds a difference, you have a rootkit infection.

How do you get rid of a rootkit infection? Removing rootkits presents two distinct problems: removal of the rootkit itself, then removal of the payload the rootkit was hiding. Because rootkits change the operating system, you might not be able to remove the rootkit without causing the system (especially a Windows machine) to become unstable. Russ Cooper, founder of the NTBugtraq mailing list, notes that "only a person with very little knowledge would try to remove a rootkit." Ultimately, the only safe and foolproof way to handle a rootkit infection is to reformat the hard drive and re-install the operating system.
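The user-mode detection idea above (comparing checksums of key system files against a known-good baseline) can be sketched in a few lines. This is only an illustration of the checksum part, with invented helper names; real integrity checkers such as Tripwire or AIDE also track sizes, dates and permissions, and store the baseline on read-only media:

```python
# Sketch of user-mode rootkit detection by checksum baseline: record hashes
# of key binaries while the system is known-clean, then compare later.
# Illustrative only; a rootkit that is running can subvert these very reads,
# which is why the article recommends checking from clean boot media.
import hashlib
import os

def sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def make_baseline(paths):
    # Snapshot of path -> digest for every file that currently exists.
    return {p: sha256(p) for p in paths if os.path.exists(p)}

def find_modified(baseline):
    # A missing or changed file is flagged as potentially trojaned.
    return [p for p, digest in baseline.items()
            if not os.path.exists(p) or sha256(p) != digest]
```

In practice the path list would cover the classic rootkit targets the article mentions (ls, ps, netstat, passwd), and the baseline would be stored somewhere the attacker cannot rewrite.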
filesystem fill up time

One reason why I still have the host stats dashboard is because it has this neat little table of "Filesystem Fill Up Time" which (tries to?) compute the time at which the filesystem will fill up. I don't think it's working very well because the results are just off here. But it got me thinking about how this could be implemented and whether you'd be interested in adding this to the dashboard...

The host stats dashboard uses this formula:

(node_filesystem_size_bytes{job='node',instance='$instance'} - node_filesystem_free_bytes{job='node',instance='$instance'}) / deriv(node_filesystem_free_bytes{job='node',instance='$instance',fstype!='rootfs',mountpoint!~'/(run|var).*',mountpoint!=''}[3d]) > 0

This blog post suggests instead just using the derivative as a base:

(deriv(node_filesystem_free{device=~"/dev/sd.*",instance=~"$node:.*"}[4h]) > 0)

I would suggest using node_filesystem_avail_bytes in any case, as that is the user-visible metric that will detect actual failures in userspace... I'm not very familiar with Prometheus formulas, so I'm not sure how it works. I suspect it just doesn't, because it gives me negative numbers here (they don't show up) or absurd estimates (...47366 years for a 99% full disk), etc. Yet this could be an interesting addition.

Hi Anarcat,

Thanks for the upgrade proposal, it looks nice. I've been testing both formulas and the first one seems to work better, but take note that it only reports content if the values are "> 0"; if not, the box will be empty. The second formula doesn't report good values: in my testing lab, 11 ms for a filesystem without changes. In any case, if you want to test it, the corrected formula is:

deriv(node_filesystem_avail_bytes{instance=~"$node:$port",job=~"$job",device!~'rootfs'}[4h]) > 0

Please check the last commit on node-exporter-full.json; it has the new box under "CPU Memory Net Disk". You can move it to another place without problem.
Regards,

That looks okay, but I still find some strange things going on. Take this graph for example:

This gives the following table:

Metric      Current
/boot       2.39 day
/boot/efi   142257726.77 year

There are many problems here, the first of which of course is that the host isn't continuously available (it's a workstation, and it shuts down once in a while). But then the other filesystems (I'm specifically interested in /, /home and /srv) do not show up, because of the > 0 constraint. When I shift the time range in Grafana from the default (5 minutes?) to three days, all of a sudden the estimates show up for the other partitions:

Metric      Current
/boot       2.40 day
/home       10.80 week
/srv        14.05 week
/boot/efi   141341662.68 year

Here's the raw unprocessed output from Prometheus doing the query ((node_filesystem_size_bytes{device!~'rootfs'} - node_filesystem_avail_bytes{device!~'rootfs'}) / deriv(node_filesystem_avail_bytes{device!~'rootfs'}[3d])):

Element                                                                                                              Value
{device="/dev/mapper/curie--vg-home",fstype="ext4",instance="curie:9100",job="node",mountpoint="/home"}              -1996694.4176633644
{device="/dev/mapper/curie--vg-root",fstype="ext4",instance="curie:9100",job="node",mountpoint="/"}                  -14338319.371650279
{device="/dev/mapper/fedora_crypt",fstype="btrfs",instance="curie:9100",job="node",mountpoint="/srv"}                -38339918.49760082

Notice how Prom thinks those numbers are negative. I would also point out that it's somewhat unlikely that (for example) /srv runs out of space in 14 weeks: it gained only 0.4% of space in the last three days, which, if I do a napkin rule-of-three, means it would gain 13% in 14 weeks (0.4 × 14 × 7 / 3), bringing it to 90% disk usage... So I'm not sure those derivatives are that useful in predicting the future. There might be something fishy going on here... I find it especially strange that the estimates would vary based on the Grafana time range...
Another example of the estimate failing, on my home server:

Metric   Current
/var     45.23 week
/        1.33 year
/usr     1.51 year
/tmp     2.69 year
/home    118.48 year
/boot    2491947820794.12 year
/srv     117533249733896.45 year

Here are the absolute numbers: And relative: As you can see, /srv is quiiite full and a specific concern I was trying to address ("how much time do I have left with that poor HDD")... The answer (10^14 years, 10^5 times the age of the universe) is ... rather unlikely. ;) In fact, maybe we should use the infinity symbol (∞) instead of anything larger than the age of the universe (10^9)... Maybe I'm just proving how useless those metrics are, sorry for thinking out loud. :)

Well, it's a fact that the formula doesn't work as expected. As the original was made by Robust Perception, maybe @brian-brazil or @Conorbro can say something about it and help us? Could you check if the predict_linear function returns results in your case: https://www.robustperception.io/reduce-noise-from-disk-space-alerts#more-614

predict_linear(node_filesystem_free{instance=~"$node:$port",job=~"$job",device!~'rootfs'}[1h], 4 * 3600) < 0

I'm testing it, but I don't get any result in my setup... From what I understand, predict_linear tries to find the value at a specific time. We're looking for the opposite: the time for a specific value (namely, "zero space left")...

Did you finally find any working solution? If the dashboard isn't reliable, I think that it's better to remove it.

i haven't, unfortunately, and i agree. :/

I am using a similar query to find filesystem usage, but I get an error ("1:2: parse error: unexpected character: '\ufeff'") when I try to filter for just one node/instance:

( 1 - (node_filesystem_free_bytes{device!~'rootfs'} / node_filesystem_size_bytes{device!~'rootfs'})) * 100 * on{(instance="dbst123")} group_left(nodename) (node_uname_info)

Below is the original query I am using, but this gives data for all the nodes that are registered to my PMM:
( 1 - (node_filesystem_free_bytes{device!~'rootfs'} / node_filesystem_size_bytes{device!~'rootfs'})) * 100 * on(instance) group_left(nodename) (node_uname_info)

Any idea on how to filter specific nodes?

Small brain dump as I looked into this. https://promcon.io/2022-munich/talks/tamland-how-gitlabcom-uses-long-/ and https://gitlab.com/gitlab-com/gl-infra/tamland seem to be the way to go. Ref that pointed me in this direction: https://github.com/prometheus/prometheus/discussions/11705#discussioncomment-4388537

About the formulas used here, I reworked them into:

node_filesystem_avail_bytes{job='node',instance='$instance'} / (delta(node_filesystem_avail_bytes{instance='$instance'}[1d]) * -1) > 0

which gives days until full, but only for a linear disk usage change. A very poor approach compared to Tamland.
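For what it's worth, the arithmetic behind all of these dashboard formulas (available bytes divided by the rate of consumption) can be sketched outside Prometheus. A minimal illustration, using a two-point slope rather than the least-squares fit that deriv() actually performs:

```python
# Sketch of the "fill up time" arithmetic the thread is debating:
# time_to_full = avail_bytes / consumption_rate, with the same "> 0"
# filter the dashboard formula uses to drop non-filling filesystems.
from datetime import timedelta

def time_to_full(samples):
    """samples: (unix_seconds, avail_bytes) pairs, oldest first.
    Returns a timedelta until avail reaches zero, or None if the
    filesystem is not filling up."""
    (t0, a0), (t1, a1) = samples[0], samples[-1]
    rate = (a1 - a0) / (t1 - t0)        # bytes per second; negative = filling
    if rate >= 0:
        return None                     # space is growing or flat: no estimate
    return timedelta(seconds=a1 / -rate)

gib = 1024 ** 3
# 100 GiB available, losing 1 GiB per day:
est = time_to_full([(0, 100 * gib), (86400, 99 * gib)])
```

The same caveats discussed in the thread apply: the extrapolation is linear, so a usage burst, or a workstation that is powered off part of the time, skews the estimate wildly.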
Multimodal (Audio, Facial and Gesture) based Emotion Recognition Challenge

People express emotions through different modalities. Integration of verbal and non-verbal communication channels creates a system in which the message is easier to understand. Expanding the focus to several expression forms can facilitate research on emotion recognition as well as human-machine interaction. In this competition, the authors present a Polish emotional database composed of three modalities: facial expressions, body movement and gestures, and speech. The corpus contains recordings registered in studio conditions, acted out by 16 professional actors (8 male and 8 female). The data is labeled with six basic emotion categories, according to Ekman's categorization. The participants will have to analyze all 3 modalities and, based on them, perform emotion recognition. The participants must submit the code and all dependencies via CodaLab, and the organizers will run the code. The evaluation will be based on the average correct emotion recognition using each modality as well as all 3 modalities together. In case of equal performance, the processing time will be used to determine the ranking. The training data will be provided first, followed by the validation dataset. The test data will be released last, with no labels, and will be used for the evaluation of participants.

List of organisers
- Dorota Kaminska and Tomasz Sapiński - Lodz University of Technology, Poland
- Kamal Nasrollahi - University of Aalborg, Denmark
- Hasan Demirel - Eastern Mediterranean University, Turkey
- Cagri Ozcinar - Trinity College Dublin, Ireland
- Gholamreza Anbarjafari - iCV Lab, University of Tartu, Estonia

SIMAH (SocIaL Media And Harassment): First workshop on categorizing different types of online harassment languages in social media

The proposed competition focuses on online harassment on Twitter in English.
It has two related tasks: the first task is binary classification of online harassment tweets versus not_harassment tweets; the second task is multi-class classification of online harassment tweets into three categories: "indirect harassment", "sexual harassment" and "physical harassment".

List of organizers
- Sima Sharifirad - Department of Computer Science, Dalhousie University, Halifax, Canada
- Stan Matwin - Department of Computer Science, Dalhousie University, Halifax, Canada

Correcting Transiting Exoplanet Light Curves for Stellar Spots

The field of exoplanet discovery and characterisation has been growing rapidly in the last decade. However, several big challenges remain, many of which could be addressed using machine learning and data mining methodology. For instance, the most successful method for detecting exoplanets, transit photometry - measuring the faint decrease in incoming stellar light as an exoplanet passes between the Earth and a target star - is very sensitive to the presence of stellar spots and faculae. The current approach is to identify the effects of spots visually and correct for them manually, or discard the data. As a first step to automate this process, we propose a regular competition on data generated by ArielSim, the simulator of the European Space Agency's upcoming Ariel mission, whose objective is to characterise the atmosphere of 1000 exoplanets. The data consist of pairs of light curves corrupted by stellar spots and the corresponding clean ones, along with auxiliary observation information. The goal is to correct the light curve for the presence of stellar spots (signal denoising). This is an as yet unsolved problem in the community.
Solving it will mean improving our understanding of the characteristics of currently confirmed exoplanets, potentially recognising false positive / false negative detections, and improving our ability to analyse new observations - primarily but not limited to those expected from Ariel - without the need to equip new telescopes with additional instruments, with all the extra costs this implies.

List of organizers
- Nikolaos Nikolaou - UCL, England
- Ingo P. Waldmann - UCL, England
- Subhajit Sarkar - University of Cardiff, Wales
- Angelos Tsiaras - UCL, England
- Billy Edwards - UCL, England
- Mario Morvan - UCL, England
- Kai Hou Yip - UCL, England
- Giovanna Tinetti - UCL, England

As part of the AutoDL challenges, the AutoCV2 challenge aims at finding fully automated solutions for classification tasks in computer vision. Compared to the recent AutoCV challenge, the AutoCV2 challenge targets not only image classification tasks, but also video classification tasks. Participants need to make code submissions containing machine learning code that is trained and tested on the CodaLab platform, without human intervention whatsoever, with time and memory limitations. All problems are multi-label classification problems, coming from various domains. Raw data is provided, but formatted in a uniform manner, to encourage participants to submit generic algorithms.

List of organisers
- Sergio Escalera - U. of Barcelona / Computer Vision Center Barcelona, Spain
- Isabelle Guyon - ChaLearn, USA / Inria / Université Paris-Saclay, France
- Zhengying Liu - Inria / Université Paris-Saclay, France
- Wei-Wei Tu - 4Paradigm, China

The full list of organizers is: AutoDL preparation team - University Paris-Saclay: - University Barcelona - ChaLearn directors involved in the project: - ChaLearn collaborators: - Volunteers and interns:
imageMagick convert from a folder to another command in Linux?

I have two folders under the same directory, folder1 and folder2. Now I'm trying to convert all images in folder1 into folder2, with the same file names. Here is what I have now:

for f in folder1/*.jpg
do
  convert $f -resize 80%X80% +profile "*" -quality 85 folder2/$f
done

and it throws the following message for each file it tried to convert:

convert: unable to open image `folder1/folder2/st-3474827-1.jpg': No such file or directory @ blob.c/OpenBlob/2440.

I know it's a directory problem, but after googling for two days I still don't know how to fix it. Can you help me with this?

Do any of the filenames contain spaces?

There are two ways you could deal with it. Use mogrify:

mogrify -path folder2 -thumbnail 50x50 folder1/*.jpg

Or use basename:

for filename in folder1/*.jpg; do
  basename="$(basename "$filename" .jpg)"
  convert "folder1/$basename.jpg" -thumbnail 50x50 "folder2/$basename.jpg"
done

The former option is probably better, but the latter may be clearer.

@shakabra: I actually had no idea about the -path option until you posted it in your answer, so thank you for that. I saw it, tested it, and integrated it into my answer.

Your second example using basename worked fine for me only after I removed the quotation marks surrounding the definition of the basename variable and those surrounding the options for convert. I've tested it on Fedora with both sh and bash.

I posted a couple of wrong answers. Sorry, I just assumed that mogrify or convert would behave well with outputting to a different path. Shouldn't assume. I don't know if ImageMagick can output to a different directory. You may have to just mv the output (no suggestions here) to the new directory. I tried this (I was trying to be as simple as possible without any loops):
convert -resize 500x500 * ../test2/   # output was 2 files named -0 -1 in the new dir
mogrify -resize 200x150 ~/Pictures/rome/test/*.jpg -path ~/Pictures/rome/test2/   # got *.jpg in new dir, YAY!
mogrify -resize 200x150 -path ~/Pictures/rome/test2/*.jpg ~/Pictures/rome/test/*.jpg   # blob error

The manpages don't mention output to a new directory, and I searched around a little. The exception codes for ImageMagick are here. Good luck, and sorry for the faulty gouge. I will edit if I find anything new.
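The underlying bug in the original loop is that $f already contains the folder1/ prefix, so folder2/$f points into a directory that doesn't exist. A small Python sketch (not part of the answers above) of the same fix the basename answer applies, keeping only the file name when building the output path:

```python
# Sketch of the path fix: the loop variable carries the "folder1/" prefix,
# so naively prepending "folder2/" produces folder2/folder1/name.jpg.
# Keeping only the file name maps folder1/name.jpg -> folder2/name.jpg.
from pathlib import Path

def output_path(src, outdir):
    # Path(src).name is the pathlib equivalent of basename.
    return Path(outdir) / Path(src).name

# The file from the error message above:
print(output_path("folder1/st-3474827-1.jpg", "folder2"))
```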
<?php

namespace Aternos\Codex\Test\Tests\Analysis;

use Aternos\Codex\Analysis\Analysis;
use Aternos\Codex\Test\Src\Analysis\TestInformation;
use Aternos\Codex\Test\Src\Analysis\TestInsight;
use Aternos\Codex\Test\Src\Analysis\TestPatternProblem;
use Aternos\Codex\Test\Src\Analysis\TestProblem;
use PHPUnit\Framework\TestCase;

class AnalysisTest extends TestCase
{
    public function testSetGetInsights(): void
    {
        $analysis = new Analysis();
        $insight = new TestInsight();
        $this->assertSame($analysis, $analysis->setInsights([$insight]));
        $this->assertSame([$insight], $analysis->getInsights());
    }

    public function testAddInsight(): void
    {
        $analysis = new Analysis();
        $insight = new TestInsight();
        $this->assertSame($analysis, $analysis->addInsight($insight));
        $this->assertSame([$insight], $analysis->getInsights());
    }

    public function testGetProblems(): void
    {
        $analysis = new Analysis();
        $problem = new TestProblem();
        $information = new TestInformation();
        $analysis->addInsight($problem);
        $analysis->addInsight($information);
        $this->assertEquals([$problem], $analysis->getProblems());
    }

    public function testGetInformation(): void
    {
        $analysis = new Analysis();
        $problem = new TestProblem();
        $information = new TestInformation();
        $analysis->addInsight($problem);
        $analysis->addInsight($information);
        $this->assertEquals([$information], $analysis->getInformation());
    }

    public function testKey(): void
    {
        $analysis = new Analysis();
        $problem = new TestProblem();
        $information = new TestInformation();
        $analysis->addInsight($problem);
        $this->assertEquals(0, $analysis->key());
        $analysis->addInsight($information);
        $this->assertEquals(1, $analysis->key());
    }

    public function testCount(): void
    {
        $analysis = new Analysis();
        $problem = new TestProblem();
        $information = new TestInformation();
        $this->assertEquals(0, $analysis->count());
        $analysis->addInsight($problem);
        $this->assertEquals(1, $analysis->count());
        $analysis->addInsight($information);
        $this->assertEquals(2, $analysis->count());
    }

    public function testAddingTheSameInsightIncreasesInternalCounter(): void
    {
        // Adding the same insight to an analysis does not add it to the insights,
        // and therefore it does not increase the counter of the analysis, but the
        // internal counter of the insight. See Analysis->addInsight()
        $analysis = new Analysis();
        $problem = new TestPatternProblem();
        $problem2 = new TestPatternProblem();
        $analysis->addInsight($problem);
        $this->assertEquals(1, $analysis->count());
        $this->assertEquals(1, $problem->getCounterValue());
        $analysis->addInsight($problem2);
        $this->assertEquals(1, $analysis->count());
        $this->assertEquals(2, $problem->getCounterValue());
    }

    public function testOffsetExists(): void
    {
        $analysis = new Analysis();
        $information = new TestInformation();
        $this->assertArrayNotHasKey(0, $analysis);
        $this->assertEquals(0, $analysis->count());
        $analysis->addInsight($information);
        $this->assertArrayHasKey(0, $analysis);
        $this->assertEquals($information, $analysis[0]);
    }

    public function testOffsetGet(): void
    {
        $analysis = new Analysis();
        $information = new TestInformation();
        $analysis->addInsight($information);

        // Exists
        $this->assertEquals($information, $analysis[0]);

        // Does not exist -> "undefined array key" error
        $this->expectError();
        $this->assertEquals(null, $analysis[1]);
    }

    public function testOffsetSet(): void
    {
        $analysis = new Analysis();
        $information = new TestInformation();
        $this->assertArrayNotHasKey(0, $analysis);
        $this->assertEquals(0, $analysis->count());
        $analysis->addInsight($information);
        $this->assertArrayHasKey(0, $analysis);
        $this->assertEquals($information, $analysis[0]);

        // Overwrite $information on $analysis[0] using the offsetSet
        $problem = new TestProblem();
        $analysis[0] = $problem;
        $this->assertEquals($problem, $analysis[0]);
    }

    public function testOffsetUnset(): void
    {
        $analysis = new Analysis();
        $information = new TestInformation();
        $this->assertArrayNotHasKey(0, $analysis);
        $this->assertEquals(0, $analysis->count());
        $analysis->addInsight($information);
        $this->assertArrayHasKey(0, $analysis);
        $this->assertEquals($information, $analysis[0]);

        // Unset $information on $analysis[0] using the offsetUnset
        unset($analysis[0]);
        $this->assertArrayNotHasKey(0, $analysis);
        $this->expectError();
        $this->assertEquals(null, $analysis[1]);
    }
}
We are happy to announce the release of the automatically annotated Sahidic Old Testament corpus (corpus identifier: sahidic.ot), based on the version of the available texts kindly provided by the CrossWire Bible Society SWORD Project thanks to work by Christian Askeland, Matthias Schulz and Troy Griffitts. The corpus is available for search in ANNIS, much like the Sahidica New Testament corpus, together with word segmentation, morphological analysis, language of origin for loanwords, part of speech tagging and automatically aligned verse translations (except for parts of Jeremiah). Please expect some errors, due to the fully automatic analysis of the corpus. The aligned translation is taken from the World English Bible. Here is an example search for the word ‘soul’: You can also read entire chapters in ANNIS or at our repository, which look like this: We hope that this resource will be helpful to Coptic scholars – please let us know if you have any questions or comments!

We have concluded our round of “startup” funding from the National Endowment for the Humanities Office of Digital Humanities. Our White Paper documents our activities and our outcomes for the period, including the following grant products:
- A Digitized Coptic Corpus in Multiple Formats and Visualizations
- Digital and Computational Tools (tokenizer, part of speech tagger, lemmatizer, and more)
- ANNIS Database instance to query and search the multilayer corpus
- Documentation in the toolsets, on our wiki, and on our blog
- Web application for users to read and cite visualizations of textual data
- Symposium and workshop (“Digital Coptic 2,” March 2015) at Georgetown U + public tutorial and workshop at the Coptic Congress
- Articles and conference papers to distribute the results of our work

CHECK IT OUT! We heartily thank the NEH ODH for its support, as well as the NEH Preservation and Access division for their concurrent grant.
We also thank all of our participants, contributors, and collaborators, who are numerous and are outlined in the White Paper.

White Paper for NEH ODH Startup Grant

See also our White Paper for the P&A grant submitted in August. We at Coptic SCRIPTORIUM have been fortunate to have received three grants from the National Endowment for the Humanities for our work. We cannot thank the NEH enough for its support. So much of what we have done over the past 2+ years could not have happened without this funding. We just completed a White Paper for a Foundations grant from the Humanities Collections and Reference Resources program in the Division of Preservation and Access. The grant, “Coptic SCRIPTORIUM: Digitizing a Corpus for Interdisciplinary Research in Ancient Egyptian,” ran from May 2014 until now. Our White Paper documents our work and especially the standards and practices we developed for digitizing a pilot Coptic corpus. If you want to know more about what truly interdisciplinary DH work looks like, check it out. We try to break down the complexities of creating a digital corpus for research in linguistics, history, religious studies, biblical studies, and manuscript studies. We’ve got data models, workflows, digitization standards, transcription guidelines, and more, all laid out for you here. There is so much more to do; this is only a start. Thanks to everyone who has had faith in our work.

White Paper, NEH Grant PW-51672-14 (Preservation and Access): “Coptic SCRIPTORIUM: Digitizing a Corpus for Interdisciplinary Research in Ancient Egyptian” 29 August 2016

Amir Zeldes and Caroline T. Schroeder have recently published an article in Digital Humanities Quarterly about the need for digital tools and a digitized corpus for Coptic, and research questions that drive Coptic SCRIPTORIUM. “Raiders of the Lost Corpus” is freely available on the DHQ website as part of a special issue on Digital Methods and Classical Studies edited by Neil Coffee and Neil W. Bernstein.
Schroeder presented an earlier version of this paper at the Digital Classics conference at the University at Buffalo in 2013.
// Alex - - JS
'use strict';

$(document).ready(function () {
    let i = 0;

    // Words and their definitions, kept in matching order.
    let sWord = ['apple', 'bank', 'cat', 'dog', 'eagle'];
    let sText = [
        ' a round fruit with firm, white flesh and a green, red, or yellow skin.',
        ' an organization where people and businesses can invest or borrow money, change it to foreign money, etc., or a building where these services are offered.',
        ' a small animal with fur, four legs, a tail, and claws, usually kept as a pet or for catching mice.',
        ' a common animal with four legs, especially kept by people as a pet or to hunt or guard things.',
        ' a large, strong bird with a curved beak that eats meat and can see very well.',
    ];

    // Advance to the next word/definition pair, wrapping around at the end.
    function runScreen() {
        i++;
        if (i >= sText.length) { i = 0; }
        $("p").text(sWord[i]);
        $("div").text(sText[i]);
    }

    // Rotate every 3 seconds.
    let timer = setInterval(runScreen, 3000);
});
Hi there, I’ve been searching for a building footprint shapefile for the following cities, but haven’t had any luck: Cleveland, Dallas, Kansas City, Las Vegas, Phoenix, Providence, San Diego, Sea...

Trying to add new data from an Excel file into an existing layer with the same information in ArcMap. How would I go about doing that?

I need to convert a TIFF to a shapefile. I have ArcMap 10.3.1 and haven't been able to determine how. Was wondering if someone can help me.

I'm adding about 20 new GPS points from Excel, and when everything comes over, all the data comes except the Longitude column shows all zeros instead of what it's supposed to say. An...

Hello everyone, I am working in spatial data analysis using Moran's I for 57 countries. I generated some weight matrices manually in Excel depending on some economic indicators, but I face a problem that ca...

I am fairly new to ArcGIS Pro. I have created an offline Mobile Map using ArcGIS Pro 2.3.0, which I am developing for a waypoint mapping project in Android. The Mobile Map uses the Imagery basemap....

I have a parcel layer and a table of water meters, and they are related. Now I want to symbolize parcels based on water meter district ID; it is just two digits. I was able to do so by joining tables, but if there is...

I have noticed that images in a Raster Mosaic are smoothed in ArcMap and rasters in a file are not. This "smoothing" is a display function and I've seen it in other software like Erdas, but I don't recall what it was a...

In ArcPro, I'm trying to set the mid value of a divergent color ramp to "0". (And I have spent enough time searching the web for a solution...) This is very easy in ArcMap: Raster layer properties > Symbology tab ...

Hey y'all, when I create the chart I remove all the axes labels, but when I insert it on a Layout the axes labels appear again, and there is a huge gap of wasted space. The labels are too long to read and it messes wi...

My .SHP files aren't loading into my Project file (.mxd). I have an exclamation point next to the layer name, so I clicked on the exclamation point and tried to reset the data source with no success. Also the projec Th...

I am wondering what the options are for doing drive-time analysis - for example, seeing how many people live within x minutes of a point. I currently have the most basic version of ArcMap (without the...

There is no such setting in Project Options - Raster and Imagery. I want to load every image without stretching by default. In ArcMap, it was possible to set "no stretch" as the default.

Hyperlinks in a web viewer. Our office recently got new Windows 10 computers and can no longer use hyperlinks in a browser to local files such as as-built TIF images or videos of sewer lines. ...

I am working on organizing and cleaning some lake bathymetry measurements. In my data sets, the lake boundaries were digitized, and the vertices of the boundaries were used as points where the depth value was ze...

I am trying to find how many people (from a dataset) live within 30 minutes of driving from the centroid of a ZIP code.

I'm currently mapping parking spaces along city rights-of-way (mostly parallel parking). What I'm trying to do (if it's possible) is to draw a line along each roadway section, use the Construct Points tool ...

I have an MXD where I first exported the map to a JPEG and clipped the output to the graphics extent. I then tried to export it as a PDF, and now it clips the image in the PDF to just show the graphics like th...

I have a GEBCO bathymetry dataset that has stretched symbology (1 - 6000 m). I only want to display legend depths that are visible in the current extent, not the full range of values. This works fine for...

And when do you use none as the input in the dialog window?
I'm in the middle of converting a game from FMOD to Wwise because -- even though the FMOD Studio interface is much nicer and easier to understand (for this indie programmer who is also pretending to be the audio designer) -- the FMOD integration with Unity (for all but the simplest of SFX) became too frustrating and broken to tolerate. Which is too bad, because I feel that FMOD's simpler presentation could be a better fit overall for indie projects (although it still has a huge learning curve). But, in reality, I am much MUCH happier with Wwise's Unity integration even though the Wwise interface is crazy complex (and a bit annoying to look at too). The engineering of the two Unity integrations isn't even remotely similar and, so far, Wwise's is much more logical and easier (in spite of some real head-scratchers). The downside is that Wwise appears to be much buggier. Just a couple days in and I've already crashed Wwise once and had to restart Unity twice over Wwise issues (one due to a memory leak from Wwise stuff, and one because the Wwise components couldn't deal with renaming a SoundBank in Wwise). FMOD lacks some logical functionality but it never crashed hard like this either. Maybe I'll say more once I finish and know more.

And the bugs keep coming... so now it's a race to the bottom. Wwise loses some settings in Unity on occasion and also wiped all of them on an upgrade. Wwise also increased the in-editor compile time from about 10 seconds to 45 seconds! Which is a huge drag (FMOD's increase was just a few seconds). And yet I still prefer Wwise because it is much easier to code for.

While there is plenty to read in their docs, there is not much help for getting started with implementing game code.
Particularly with game-engine integrations and best practices for working in different ways (components vs. code vs. both) with each. The docs in general are weak on cross-linking, screenshots, and example code. Some integration stuff simply isn't documented anywhere and requires trial and error to figure out. Well, their course materials have some more info, but they are a pain to try to use as reference material (very long and very basic).

How many languages are needed to write a game??? Python must be installed to convert Wwise_IDs.h to Wwise_IDs.cs. Really?! Shouldn't that script have been written in C# to begin with? Or any other Unity-compatible language or DLL, for that matter. Or just skip the .h and the later conversion, and provide an option to generate a .cs file in the original function. How hard could that be? Then again, Wwise Types pretty much eliminate the need for Wwise_IDs.cs if you choose to use them.

As compared to FMOD's integration with Unity, Wwise's makes soooooooooo much more sense! From their Unity components, to Wwise Types for simplified access to the API, to straight-up API calls, it all just makes much more sense than FMOD's confusing and incomplete Unity component implementation. Too bad the documentation for this is sorely lacking, and the Unity components waste too much UI space by wrapping every field in a frame for no apparent reason.

Wwise Launcher is one of those pain-in-the-butt applications that wants to help you use the various features of Wwise but usually just gets in the way, requiring more startup clicks than necessary to get things done. It logs you out at least once a day (even if you check "Keep me logged in") and can become uncooperative very quickly if you are not logged in. I have no idea why it is so demanding about being logged in in the first place. If you're working offline, then parts of it will just spin forever, leaving you unsure what to do next.
Expect to restart it regularly if you move around a lot. On the other hand, it handles things like upgrades and game integrations really well... but these are also actions that are rarely executed and don't need to live in a project-blocking application.

Which can also be fairly daunting to use, as this forces stuff to be buried in tabs, popup dialogs, and other views. It is hard, for example, to see the big picture of a single event like you can in FMOD. But it feels like you also have more control over details because of it. Thus, Wwise appears more geared towards hard-core sound engineers.

The smallest chance of random play is 1%. This may be fine for most situations, but if, for example, you have 20 clips to randomize and you want one of them to play 1:10 of the rest, you would need ~0.5%. And if you want an Easter-egg event at 1:100, you can't get even close to that. You'll have to do it in code instead of in the studio.

Everything was going great with FMOD until I tried getting one-shot events to respond to parameters. If you set up your Unity project as per their instructions, one-shot events will only respond to parameter changes once in a blue moon (just to confuse you) when, according to their support, they aren't supposed to at all with Unity components, even though their docs never say this and actually seem to say quite the opposite. Thus, you can either hack your way around this caveat or implement these events without components through some very overly wordy code. You might as well do all events through code if you want some predictability in maintenance. Talking through this with support was like slogging through mud. They just can't really see what the problem is with that design split. They don't seem to care much that you can't trigger all events properly through Unity components like they say you can. It's so irritating I'm shopping for a new audio engine, even though it is very late in the game for that.
If we have to re-implement all of our events another way, we might as well rebuild all of them with a more logical tool if we can. Update: currently in the middle of switching to Wwise and liking it much better, even though it is far from perfect (and its UI is fairly buggy so far).

Compared to Wwise, for example, which splits the parameter curves that may be affecting your event into at least three different views, FMOD does a good job of flattening the hierarchy, simplifying details into nice UI gadgets, and presenting all this in one view. This may create a bit of a disconnect in understanding the hierarchy... but not really. Details may be hidden because of this, but those probably aren't missed by your average developer. Showing the waveforms of the clips everywhere possible also helps. The contrast and readability of the UI is also nice.

The instructions on how to set up events in Unity work for all events except one-shot events that need to respond to a parameter. These events should be implemented through code instead of using the FMOD Unity components (like you can for all other events). This makes no sense from a design point of view and is not made clear in any documentation. You might as well implement all of FMOD through code in Unity and ignore the clumsy Unity components.
New in version 0.2.0

The bridge supports using a Telegram bot to relay messages for unauthenticated users, allowing Matrix users who are not logged into their Telegram account to chat with Telegram users.

- If you haven't yet, create a new bot on Telegram by chatting with @BotFather. Make sure you disable privacy mode using BotFather's /setprivacy command in order to allow the bot to read messages in groups.
- Configure the bridge to use the bot you created by setting the token you got from BotFather in the bot_token field in the bridge's config.
- Restart the bridge and check status with the !tg ping-bot command on Matrix.
- Invite the relaybot to groups where you want it to bridge messages from unauthenticated Matrix users. If you're logged in to the bridge, you can use !tg ping-bot, click the user pill, and click invite directly. If not, you can add the bot on the Telegram side.

If the room was created by the bridge and you don't have invite permissions, you can either use !tg set-pl to give yourself permissions, or !invite <mxid> to invite users through the bridge bot.

You can also create portals from Telegram if you have the relay bot set up and have allowed creating portals from Telegram in the config (authless_portals). Simply invite the relay bot to your Telegram chat and use the /portal command. If the chat is public, the bot should create the portal and reply with a room alias. If the chat is private, you'll need to invite Matrix users manually with the /invite command.

The format of messages and membership events that the bot sends to Telegram can be configured both bridge-wide and per-room. Per-room configs can be managed using the !tg config command.
For example, to disable bridging of membership events in a room, you can run

!tg config set state_event_formats join: '' leave: '' name_change: ''

which sets the state_event_formats config option to an object with empty formats for the join, leave, and name-change events.

The relay bot responds to the following commands on the Telegram side:

| Command | Description |
|---|---|
| /invite [mxid] | Invite a Matrix user to the portal room. |
| /portal | Create the portal if it does not exist and get the join info. |
| /id | Get the prefixed ID of the chat that can be used with !tg bridge and !tg filter. |

If you have your own Telegram bot for the bridge, you can copy this to the /setcommands BotFather command:

invite - Invite a Matrix user to the portal room.
portal - Create the portal if it does not exist and get the join info.
id - Get the prefixed ID of the chat that can be used with `!tg bridge` and `!tg filter` in Matrix
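For reference, the two config fields mentioned in the setup steps above (bot_token and authless_portals) live in the bridge's YAML config file. The sketch below is illustrative only: the field names come from this page, but the section nesting is an assumption and may differ between bridge versions, so check the comments in your own generated config for the exact location.

```yaml
# Hypothetical sketch of the relevant parts of the bridge config.
# Section names are assumptions; field names come from the docs above.
telegram:
  # Token you got from @BotFather (placeholder value)
  bot_token: "1234567:abcdef..."

bridge:
  # Allow creating portals from Telegram via the relay bot's /portal command
  authless_portals: true
```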
Incorporate a timer into your homework materials kit and let your child know that when the timer goes off, homework is finished. Hardly any kids can endure more than an hour of homework, but less than 30 minutes will probably not be enough to accomplish much. Consider your child's age, needs, and frustration level. At first, this structure may feel ineffective. However, your child may start to see defiance as wasted effort once homework becomes an inescapable part of the nightly routine.

However, many universities know it can be a long and complicated process to apply for a student visa, and therefore some recommend you simply enter Peru on a tourist visa. If you are only going to study one or two semesters, you should be fine with just a tourist visa. At immigration in the airport you should ask for 183 days, the maximum stay for a tourist visa. Many students travel while they are here, as they can extend their stay by leaving and re-entering the country. For example, if you visit Ecuador for five days and then enter Peru again, you will get a new visa with a new number of days.

They also lowered the pressure on him in the classroom, as he cannot work as fast as the other kids. Since these two changes, he has been much happier at school and has been doing better. I think that is a much better approach than what you have described. Rachel, I know exactly what you mean about getting the right diagnosis! I was told the same: boys will be boys, he'll outgrow it, let him pay the consequences for not getting his homework done!
• Anonymous said… Many of these kids do not like to write, so it's ridiculous to imagine that's going to make him get his work done any better. There are many websites and scientific studies to back this belief. Do some research and write that teacher a note. No child should have that amount of homework!

A tourist visa is only for tourism purposes and does not allow you to work in Peru. If you want to live and work legally in Peru, you have to enter as a tourist (as an Indian passport holder you have to apply for a tourist visa at a Peruvian consulate), find a job and an employer that sponsors you a work visa (by far the most difficult task; have a look at our forum for more information), and then change your immigration status (time-consuming and nerve-racking, but doable).

• Anonymous said… We decided in one of my son's IEPs that we would not be doing homework at home. We want our house to be a home of refuge and peace for him at night.

I'm asking: all my documents are ready to apply for the visa, but in Pakistan there is no embassy of Peru. All of our Pakistani documents are accepted in China, but I'm asking how we send our documents: by e-mail, by post, by fax? They have no answer about it.

You know your kid's abilities more than anyone. And you have to figure out what is best for you and your home. For us... we needed peace. Plus we have so many other things to teach him... like chores. Hence my photo above. 10. Hold fast: don't give up. If your child must miss out on something they want because they haven't yet finished their homework, then this is what they need to experience.

The answer to your question far exceeds the space we have here in the visa comment function.
For that reason I allowed myself to move your post to our Discussion Board under Retirement Visa. As your case goes beyond the scope of the comment function, I sent you a message to your private e-mail.

Philippine passport holders don't have to apply for a tourist visa before coming to Peru. You can find the proof either on this page when opening the PDF document "Countries with Visa Obligations" (posted by the Foreign Affairs Ministry) or on the website of DIGEMIN, Peru's immigration office, under this link (look at page 3, "Asia"; under "Filipinas" you see "NO", so no visa is needed for a maximum stay of 183 days).
import numpy as np
import time
import random
from hmm import HMM


def accuracy(predict_tagging, true_tagging):
    if len(predict_tagging) != len(true_tagging):
        return 0, 0, 0
    cnt = 0
    for i in range(len(predict_tagging)):
        if predict_tagging[i] == true_tagging[i]:
            cnt += 1
    total_correct = cnt
    total_words = len(predict_tagging)
    if total_words == 0:
        return 0, 0, 0
    return total_correct, total_words, total_correct * 1.0 / total_words


class Dataset:
    def __init__(self, tagfile, datafile, train_test_split=0.8, seed=int(time.time())):
        tags = self.read_tags(tagfile)
        data = self.read_data(datafile)
        self.tags = tags
        lines = []
        for l in data:
            new_line = self.Line(l)
            if new_line.length > 0:
                lines.append(new_line)
        if seed is not None:
            random.seed(seed)
        random.shuffle(lines)
        # Split on the number of parsed lines, not raw text blocks, so that
        # empty blocks don't skew the train/test split.
        train_size = int(train_test_split * len(lines))
        self.train_data = lines[:train_size]
        self.test_data = lines[train_size:]

    def read_data(self, filename):
        """Read tagged sentence data."""
        with open(filename, 'r') as f:
            sentence_lines = f.read().split("\n\n")
        return sentence_lines

    def read_tags(self, filename):
        """Read a list of word tag classes."""
        with open(filename, 'r') as f:
            tags = f.read().split("\n")
        return tags

    class Line:
        def __init__(self, line):
            words = line.split("\n")
            self.id = words[0]
            self.words = []
            self.tags = []
            for idx in range(1, len(words)):
                pair = words[idx].split("\t")
                self.words.append(pair[0])
                self.tags.append(pair[1])
            self.length = len(self.words)

        def show(self):
            print(self.id)
            print(self.length)
            print(self.words)
            print(self.tags)


def model_training(train_data, tags):
    """
    Train an HMM based on training data.

    Inputs:
    - train_data: (1*num_sentence) a list of sentences, each sentence is an
      object of the Line class
    - tags: (1*num_tags) a list of POS tags

    Returns:
    - model: an object of the HMM class initialized with parameters
      (pi, A, B, obs_dict, state_dict) calculated from train_data
    """
    # Map each tag and each observed word to an index.
    state_dict = {}
    for i, tag in enumerate(tags):
        state_dict[tag] = i
    obs_dict = {}
    for line in train_data:
        for word in line.words:
            if word not in obs_dict:
                obs_dict[word] = len(obs_dict)
    N = len(tags)
    M = len(obs_dict)

    pi = np.zeros([N])                      # initial-state counts
    A = np.zeros([N, N])                    # transition counts
    start_with_s = np.zeros([N, 1])         # outgoing-transition totals per state
    B = np.zeros([N, M])                    # emission counts
    state_outcome_total = np.zeros([N, 1])  # emission totals per state

    for sequence in train_data:
        head_word_index = state_dict[sequence.tags[0]]
        pi[head_word_index] += 1
        for i in range(len(sequence.words)):
            s_index = state_dict[sequence.tags[i]]
            if i < len(sequence.tags) - 1:
                s2_index = state_dict[sequence.tags[i + 1]]
                A[s_index, s2_index] += 1
                start_with_s[s_index, 0] += 1
            observation_index = obs_dict[sequence.words[i]]
            B[s_index, observation_index] += 1
            state_outcome_total[s_index, 0] += 1

    # Normalize counts into probabilities. pi is normalized by the number of
    # sequences (each sequence contributes exactly one start state);
    # nan_to_num guards against division by zero for states that never occur.
    pi = pi / len(train_data)
    A = A / start_with_s
    B = B / state_outcome_total
    pi = np.nan_to_num(pi)
    A = np.nan_to_num(A)
    B = np.nan_to_num(B)

    model = HMM(pi, A, B, obs_dict, state_dict)
    return model


def speech_tagging(test_data, model, tags):
    """
    Inputs:
    - test_data: (1*num_sentence) a list of sentences, each sentence is an
      object of the Line class
    - model: an object of the HMM class

    Returns:
    - tagging: (num_sentence*num_tagging) a 2D list of output tags for each
      sentence in test_data
    """
    tagging = []
    N, M = model.B.shape
    new_model = model
    new_column = 1e-6 * np.ones([N, 1])
    new_feature_number = 0
    new_b = model.B
    # Copy the observation dict so the original model is not mutated.
    new_obs_dict = dict(model.obs_dict)
    for sentence in test_data:
        for word in sentence.words:
            if word not in new_obs_dict:
                # Unseen word: append a column of small emission
                # probabilities and register the new word.
                new_b = np.append(new_b, new_column, axis=1)
                new_obs_dict[word] = new_b.shape[1] - 1
                new_feature_number += 1
    if new_feature_number != 0:
        new_model = HMM(model.pi, model.A, new_b, new_obs_dict, model.state_dict)
    for sentence in test_data:
        tag_row = new_model.viterbi(sentence.words)
        tagging.append(tag_row)
    return tagging
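As a sanity check on the counting-and-normalizing scheme used in model_training, here is a self-contained sketch (toy, hypothetical data; no hmm module required) that builds pi, A, and B from two tiny tagged sentences and confirms that each non-empty row normalizes to a probability distribution:

```python
import numpy as np

# Toy tagged corpus: (word, tag) pairs per sentence (hypothetical data).
corpus = [
    [("the", "D"), ("dog", "N"), ("runs", "V")],
    [("a", "D"), ("cat", "N"), ("sleeps", "V")],
]
tags = ["D", "N", "V"]

state_dict = {t: i for i, t in enumerate(tags)}
obs_dict = {}
for sent in corpus:
    for word, _ in sent:
        obs_dict.setdefault(word, len(obs_dict))

N, M = len(tags), len(obs_dict)
pi = np.zeros(N)
A = np.zeros((N, N))
B = np.zeros((N, M))

for sent in corpus:
    pi[state_dict[sent[0][1]]] += 1          # count start states
    for i, (word, tag) in enumerate(sent):
        s = state_dict[tag]
        B[s, obs_dict[word]] += 1            # count emissions
        if i + 1 < len(sent):
            A[s, state_dict[sent[i + 1][1]]] += 1  # count transitions

# Normalize rows; nan_to_num guards empty rows (V has no outgoing transitions).
pi = pi / pi.sum()
with np.errstate(invalid="ignore"):
    A = np.nan_to_num(A / A.sum(axis=1, keepdims=True))
    B = np.nan_to_num(B / B.sum(axis=1, keepdims=True))

print(pi)                  # both sentences start with "D"
print(A[state_dict["D"]])  # "D" always transitions to "N"
```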
HDDS-3173. Provide better default JVM options

What changes were proposed in this pull request?

The GC pressure on the Datanode is high because of the retry cache. I found crashes due to long GC pauses. I started to use the following JVM parameters:

-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly

which provide stable behavior. It would be great to detect the current version and add these parameters if required. But there are two problems:

* Different Java versions support different flags. Solution: right now JDK 14 is not yet supported (it deprecated CMS).
* There could be conflicting flags (e.g. if the user chooses G1, we shouldn't add any other default parameters). Solution: if any of the -XX parameters are defined, the default JVM parameters won't be added.

What is the link to the Apache JIRA?

https://issues.apache.org/jira/browse/HDDS-3173

How was this patch tested?

Start docker-compose with and without adding the HADOOP_OPTS flag to the docker-config file. Check the used JVM opts.

Unit tests:

cd hadoop-ozone/dist/src/test/shell
bats gc_opts.bats

I have one concern: in case there is any -XX option configured, we skip adding the GC options. This may be a problem when an extended JVM attribute is added for debugging or performance purposes but is unrelated to GC, as in this case the GC behaviour and the used algorithm may be altered unwillingly. It would be better to restrict the regex to something closer to GC stuff, like something that matches GC case-insensitively after a -XX but before a space... though I am unsure whether there is an ultimate solution for this, and maybe it is better to just document the suggested parameters for different JRE versions, but we certainly should add some documentation on the potential behavioural change in case of JVM tuning somewhere.
> It would be better to restrict the regex to something which is more close to GC stuff [...] maybe it is better to just document the suggested parameters for different jre versions [...]

Thanks for the feedback. Yes, there is no ultimate solution. I considered doing a smarter pattern (like the one you suggested), but that one is more dangerous: it's hard to detect all the GC-related parameters (including G1, CMS, etc.). I am not sure we have one generic prefix for all of them.

And these settings are just for first-time users. Vendors with custom distributions provide adjusted values. Real production clusters might have adjusted values. The goal here is to provide a reasonable default for first-time users (who download Ozone to try it out) but enable full customization (without surprises) for advanced users.

> Also we talk about DataNodes where the problem happened, but the change seems to affect all daemonizable process types.

I think it's a generic problem that the JVM has very conservative defaults. I can reproduce the problem with the Datanode, but as it's related to the retry cache in Ratis, the OM is affected immediately. And I think all the settings are useful for any server-side component.

Thank you for addressing my concern; it sounds reasonable, and I can accept the current solution. I agree that most users will have their own settings in a production environment, and hopefully if someone starts to tune the JVM, then he/she will know what to do.
Still, I think it would be good to document this behaviour, or at least emit a notification about the fact that we do not set the default GC options, as the patch does when the defaults are added. Maybe we should do it the other way around: if someone does not bother to set any option, then the defaults are fine for that case, and if someone sets something, we would like to let him/her know what would have been set, so he/she can review and preserve what is still needed. What do you think?

> What do you think?

I think it's a very good idea to print out the flags when we touch them, but I'm not sure what your suggestion is exactly:

1. Print out the JVM settings (and/or a warning) when we set the defaults (which can be unexpected).
2. Print the JVM settings always.
3. Print out a notification when we don't add the defaults (any other -XX options are used).

What is your preference? I am thinking about printing out all the JVM options always (similar to the classpath) plus a warning that we defined the default GC parameters (2nd option):

NOTE: default JVM parameters are applied. Use any -XX: JVM parameter to use your own instead of the defaults.
CLASSPATH: .....
HADOOP_OPTS: ...

Is it possible that somebody adds secret information to HADOOP_OPTS which should be hidden? (Do we need to use the 1st option?)

> and if someone sets something we would like to let him/her know what would have been set, so he/she can review and preserve what is still needed.

As a main rule we don't set anything when any of the -XX flags are present. But I agree that it's clearer if it's somehow printed out.

My suggestion was exactly option 3, as I think the defaults would be expected to be there (especially if they are documented), and what the user wants to know about is when we skip adding them. By now I also like the 2nd approach, but I am not sure whether we have anything sensitive that a user might add to HADOOP_OPTS, and if we are unsure, it is better not to print the options themselves at all.
However, we could still print out a message for both cases (when defaults are added and when defaults are not added by the script) instead.

I feel uncomfortable printing out a warning ("JVM parameters are NOT added") when the app is used in the normal way (properly configured, including -XX parameters). I am modifying the patch to always print out something when the JVM parameters are set by our script instead of the user (not just at the debug level, but on stderr). This is the 1st option (simple but safe).

om_1 | No '-XX:...' jvm parameters are used. Adding safer GC settings to the HADOOP_OPTS
om_1 | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
om_1 | 2020-04-01 08:54:46,960 [main] INFO om.OzoneManagerStarter: STARTUP_MSG:
om_1 | /************************************************************
om_1 | STARTUP_MSG: Starting OzoneManager
om_1 | STARTUP_MSG: host = 0bc08d3be724/<IP_ADDRESS>
om_1 | STARTUP_MSG: args = []
om_1 | STARTUP_MSG: version = 3.2.0

No more concerns in the last two days, and I got a +1 from @arp7. Thanks for the review, all of you. @fapifta, if you are not happy with the current debug message which was added, let's continue the discussion and I will improve it.

Sorry for the late response; I was pretty much overwhelmed with other stuff. I am fine with the current solution. I still think we may revisit this if there are real operational misunderstandings around it when the default options are omitted, but for now let's leave it this way.
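The rule agreed on in this thread (add the default GC options only when the user supplied no -XX flag at all) can be sketched in a few lines. This is an illustration of the decision logic only, not the actual shell implementation in the patch; the default flags are the ones quoted in the PR description:

```python
DEFAULT_GC_OPTS = (
    "-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC "
    "-XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly"
)

def effective_opts(user_opts):
    """Append the default GC options only when the user set no -XX flag."""
    if "-XX" in user_opts:
        # The user tuned the JVM: don't touch their flags.
        return user_opts.strip()
    # No -XX flag present: apply the safer defaults.
    return (user_opts + " " + DEFAULT_GC_OPTS).strip()

print(effective_opts("-XX:+UseG1GC"))  # user choice wins, nothing added
print(effective_opts("-Xmx4g"))        # -Xmx is not -XX: defaults still appended
```

Note that this also demonstrates the reviewer's concern: any -XX flag, GC-related or not, suppresses the defaults.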
Introduction to Algorithms: Thanks for landing on this article. Here we'll see a brief introduction to algorithms and programming languages. So, without wasting much time, let's get started.

What is an Algorithm?

An algorithm is a step-by-step procedure for solving a problem in a finite number of steps. It is a logical and mathematical approach to cracking a problem using a well-defined method. Algorithms are widely used in many areas to produce a required output. By the end of this article, you will have a basic idea about algorithms and their types. So, let us further discuss the introduction to algorithms.

Why are algorithms important?

As we saw in the introduction, algorithms are central to the field of computer science. Choosing the best algorithm for a given task makes sure the problem is solved effectively using the available resources. Using an appropriate algorithm, we can also ensure fast results and lower memory consumption.

How must an Algorithm be?

An algorithm should satisfy certain properties to find an optimized solution to a problem. An algorithm must:
- Be written in simple English.
- Be unambiguous, precise, and lucid.
- Have at least one input and one output.
- Consist of a finite number of steps.
- Contain only definitive, i.e., self-explanatory, statements.
- Have an endpoint with the correct solution.

Types of Algorithms:

Before solving a problem with an algorithm, we must consider which type of algorithm to use, because in some cases the choice of the wrong algorithm can make your program take millions of years to run, no matter how fast your hardware may be. So let us see the various types of algorithms and their methodologies.
- Simple Recursive Algorithms:
  - These algorithms take simpler input values and perform simpler operations to obtain an output.
  - The problem is divided into smaller versions of itself, so we can identify the easily solvable base cases and solve the rest by applying the same recursive algorithm again.
- Backtracking Algorithms:
  - Backtracking algorithms are used to solve many computational issues, especially constraint-satisfaction problems, i.e., problems that have to satisfy certain limitations or constraints.
  - A backtracking algorithm first finds a solution for one sub-problem and, using that solution, tries to solve the remaining problems recursively. If it fails to extend the partial solution from the previous sub-problem, the process backtracks and tries again from an earlier choice.
  - The process ends when there are no more candidate solutions for the first sub-problem.
- Divide and Conquer Algorithms:
  - A divide-and-conquer algorithm works by breaking a problem into two or more subproblems of the same kind and solving them directly or recursively.
  - The solutions of the subproblems are combined to give the solution of the original problem.
- Greedy Algorithms:
  - A greedy algorithm makes the locally best choice at each step as it attempts to find the overall optimal way to solve the entire problem.
  - In many cases, however, a greedy algorithm cannot find the optimal result.
  - It sometimes fails to find the globally optimal result because it does not consider the consequences of its choices for future steps.
- Branch and Bound Algorithms:
  - Generally, this method is used for solving optimization problems, i.e., problems looking for an object such as an integer, a permutation, or a graph from a finite set.
  - In this method, a state-space tree of all the possible solutions is generated; the set of solutions forms a tree with the full set at the root.
  - The branches of the tree represent subsets of the solution set.
  - The algorithm depends on estimating lower and upper bounds for branches of the search space.
- Brute Force Algorithms:
  - A brute-force algorithm is a straightforward approach that tries to solve a problem by checking a large number of candidate patterns, so these are also called pattern-matching algorithms.
  - It checks all the candidates to see whether they satisfy the problem's definition or not.
  - In some cases, it is very simple and relies on raw computing power to achieve results.
- Randomized Algorithms:
  - A randomized algorithm depends on random numbers for its operation.
  - These algorithms use randomness either to help find a solution to the problem or to improve an already-found solution.
  - This randomness can reduce the running time (time complexity) or the memory used (space complexity).
- Dynamic Programming Algorithms:
  - Dynamic programming is a mathematical optimization method, as well as a computer programming method, that breaks a complicated problem into simpler subproblems in a recursive manner.
  - It is applicable when the subproblems are nested recursively inside larger problems, which leads to a relationship between the value of the larger problem and the values of the subproblems. This relationship is called the Bellman equation.

Pros and Cons of Using Algorithms:

- Algorithms are easy to write and an easy way to understand the logic step by step without any confusion.
- By using an appropriate algorithm for our problem, we can finally reach an optimized result.
- Even a person who doesn't have knowledge about the particular topic can easily identify mistakes in the steps.
- But it is a time-consuming process when we don't have perfect knowledge about the algorithm we are using.
- It is difficult to show branching and looping for all processes.
- Complicated tasks are very difficult to put into algorithms.

Now let us go through some daily-life examples of the usage of algorithms:

Introduction to Algorithms in Our Daily Life:

- Every time we google something, the search engine searches thousands of pages and gives us the appropriate content we are looking for in a fraction of a second. This is only possible because of the underlying algorithms embedded in the software.
- When we use automatic teller machines (ATMs) and enter our details, algorithms are used to match and retrieve our details.
- When we book tickets or buy something online, algorithms are used to prioritize requests from the many users in different parts of the world.
- When we perform online transactions, these algorithms play a vital role.
- Even the Duckworth-Lewis (D/L) method is an algorithm that is applied to a particular game situation with interrupted playing time to suggest how to proceed.

If you liked this article (Introduction to Algorithms and Programming Languages), let us know your thoughts in the comments section, and feel free to share it with your friends and family so that they can also benefit from it.
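To make two of the categories above concrete, here is a small, self-contained sketch contrasting a greedy algorithm with dynamic programming on the classic coin-change problem. The coin set {1, 3, 4} is chosen deliberately so that the greedy choice fails to find the global optimum, illustrating the limitation noted in the greedy section:

```python
def greedy_coins(amount, coins):
    """Greedy: repeatedly take the largest coin that still fits."""
    used = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            amount -= c
            used.append(c)
    return used if amount == 0 else None

def dp_coins(amount, coins):
    """Dynamic programming: best[v] = fewest coins summing to v."""
    best = [0] + [None] * amount
    for v in range(1, amount + 1):
        candidates = [best[v - c] for c in coins
                      if c <= v and best[v - c] is not None]
        best[v] = min(candidates) + 1 if candidates else None
    return best[amount]

coins = [1, 3, 4]
# Greedy takes 4 + 1 + 1, i.e. three coins for amount 6...
print(len(greedy_coins(6, coins)))  # 3
# ...but the optimum is 3 + 3, i.e. two coins.
print(dp_coins(6, coins))           # 2
```

The greedy version never reconsiders taking the 4-coin, while the dynamic-programming version builds the answer for amount 6 out of the already-solved subproblems for smaller amounts (the Bellman-equation relationship mentioned above).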
require "fileutils"

module AudioBookCreator
  # Converts book chapters to spoken audio via the `say` command.
  class Speaker
    attr_accessor :speaker_def
    attr_accessor :book_def

    def initialize(speaker_def, book_def)
      @speaker_def = speaker_def
      @book_def = book_def
    end

    def make_directory_structure
      FileUtils.mkdir(base_dir) unless File.exist?(base_dir)
    end

    # Writes the chapter text to disk, runs `say` to synthesize the audio,
    # and returns a SpokenChapter pointing at the generated sound file.
    def say(chapter)
      raise "Empty Chapter" if chapter.empty?
      text_filename = chapter_text_filename(chapter)
      sound_filename = chapter_sound_filename(chapter)
      AudioBookCreator.optionally_write(text_filename, force) { chapter.to_s }
      AudioBookCreator.optionally_run(sound_filename, force) do
        ["say", params: params(text_filename, sound_filename)]
      end
      SpokenChapter.new(chapter.title, sound_filename)
    end

    def chapter_text_filename(chapter)
      "#{base_dir}/#{chapter.filename}.txt"
    end

    def chapter_sound_filename(chapter)
      "#{base_dir}/#{chapter.filename}.m4a"
    end

    private

    def base_dir
      book_def.base_dir
    end

    def force
      speaker_def.regen_audio
    end

    def params(text_filename, sound_filename)
      {
        "-v" => speaker_def.voice,
        "-r" => speaker_def.rate,
        "-f" => text_filename,
        "-o" => sound_filename,
      }
    end
  end
end
This tutorial is meant to get you started with uploading and editing a dataset on the LinkedEarth platform. The LinkedEarth wiki is based on Semantic MediaWiki and therefore uses the MediaWiki markup language. If you are new to wiki formatting, take a few minutes on the Help Page to learn how to edit pages, create new pages, and use links.

After completing this tutorial, you will be able to:
- LiPD and LinkedEarth: upload LiPD datasets and enter basic metadata for the record.
- Annotate a dataset: create and reuse properties for annotation.

LiPD and LinkedEarth

The LinkedEarth Ontology is the backbone of the wiki. The ontology allows us not only to define terms commonly used to describe a paleoclimate dataset (e.g., variable, uncertainty, calibration) but also to specify the relationships among these terms (e.g., a variable has uncertainty). As such, it allows us to make inferences, support complex queries, and perform quality control on the data. Remember that no formal knowledge of ontologies is required to use and contribute to the wiki!

The LinkedEarth ontology was developed from the LiPD format championed by Nick McKay and Julien Emile-Geay. The wiki platform is therefore currently optimized to accept data already in the LiPD format. There are several ways to convert your dataset to LiPD; see the LiPD webpage for more information.

Uploading a LiPD file

You need to be logged in to upload a LiPD file, using a special page dedicated to the management of datasets already in the LiPD format. Select the browse button and choose the .lpd file you want to upload, as shown in Figure 1.

Dataset pages will be automatically created from the content of the LiPD file, and your dataset will appear at the top of the "Current LiPD Dataset list" on the Main Page. By clicking on the dataset link, you will be able to see the data and metadata automatically extracted from the LiPD file, as shown in Figure 2.
New "crowd" properties will be automatically created from the LiPD file if these properties are not in the current core ontology. Congratulations! Your LiPD file has been successfully added to the LinkedEarth wiki.

Exercise: upload your own LiPD file through the "manage dataset" page in the wiki.

Annotating a LiPD file

Once a LiPD file is uploaded, the metadata about that file is shown in a table with two columns. The left column contains the properties describing the LiPD file, while the right column states the value associated with each property. Figure 3 shows an example, where the "archive type" of CAN9Neukom2014 is "Tree". Please check that the metadata for your record is correct and edit it if appropriate. For instance, to add an investigator to the CAN9Neukom2014 dataset in Figure 2, click on its corresponding row as indicated in Figure 3, then type the value of the property. In this example, we added "Daniel" as the investigator.

All annotated values can be edited or removed. To remove "Daniel" from the investigators of the example dataset, click on the row and then on the red cross button on the left, as shown in Figure 4.

Each property/property value added to the page is tracked in the page history. The page history is accessible through the "View history" button at the top of any wiki page, allowing you to monitor the edits made by other wiki users. Figure 5 illustrates the edits for the CAN9Neukom2014 dataset: the latest change added a property value, adding "Daniel" as an investigator.

LinkedEarth members at the dataset contributor level can only edit the properties associated with the datasets they contributed. Starting at the Basic Editor level, users can edit datasets contributed by other users. If you would like to become a basic editor, please email the Editorial Board. If there is a disagreement between two researchers, a discussion may be started on the "Discussion" page, as depicted in Figure 6.
Contributors and Editors may edit any Discussion Page on the wiki. To learn more about how to contribute to and edit discussion pages, follow this tutorial. Contributions are tracked automatically on the wiki and displayed in the "Credit" section, which can be found at the bottom of each wiki page.

On the wiki, all uploaded datasets should follow an x.y.z version notation, where "x" refers to important changes in the dataset's metadata (e.g., the creation of a new age model using a different code), "y" refers to changes to the data following a publication (e.g., adding data further back in time without changing the model underlying the interpretation), and "z" refers to minor changes not associated with a publication (e.g., typos). For example, the first official release of a dataset would be 1.0.0. If I fix a small typo, I would create version 1.0.1.

Exercise: annotate the version of the dataset in the recently created page, following the x.y.z notation and using the property "datasetVersion".

Concept annotation in a LiPD file

As shown in Figures 3 and 4, some of the annotated values, like "Tree", already have links to other pages. These pages can be further populated and edited by domain experts. To do so, click on the Edit tab at the top of the page, as shown in Figure 8 for the "Tree" archive. The article was created as a stub, awaiting contributions from domain experts to gather further field knowledge. To learn more about wiki editing, visit the Quick Guide to Editing Wiki Pages.

By linking to other existing pages we can connect different LiPD datasets (e.g., if two different datasets are of the same archive type, they will link to the same page) and support queries. For instance, one can look for all the datasets on the LinkedEarth wiki using "Tree" as an archive. Red links mean that the page does not yet exist. Editors and contributors can create the new page and edit its content. An example can be seen in Figure 9.
Exercise: edit the page "Tree ring width", which does not have a definition at the moment, and add a test definition. Use this link as the "Archive Type" value on your uploaded LiPD dataset.

Annotate a dataset

Until now we have covered how to annotate property values and associate them with concepts and existing pages on the wiki. In this next step we will see how to create new properties to describe a dataset, i.e., adding new annotations to our dataset outside of the standard properties shown in Figure 2. The "Properties" box, placed under the "Standard Properties" table, allows users to edit and create new properties and values. An example is shown in Figure 10. By clicking on the "plus" sign in the title, a new row will appear in the table. The row has two fields: one for the property name we want to use to describe the dataset (e.g., title, description, name, etc.) and another for the property value.

Before adding a new property name, it is important to note that a similar property may already exist in the ontology to describe the same metadata. Properties are case-sensitive. For instance, imagine that we want to add a "description" to the dataset. If we start typing the property, we see that it already exists, and we can select it for our purposes. Selecting existing properties is important, as it helps structure and control the content uploaded to the wiki.

Exercise: create a "title" property and use it to annotate your uploaded dataset. Check if the property already exists; if it doesn't, create it.

Adding location to a dataset

The LinkedEarth wiki automatically adds a new dataset to an existing query page, such as the one on our Main Page, provided that the dataset contains a set of coordinates. To do so, first link the location used to collect the data, as illustrated in Figure 11.
Any location is valid, from a single point (in an xyz coordinate system) to a polyline (e.g., a river), or even a polygon (e.g., a mountain, a city, or even a country). In the example, we are linking the dataset to the location "Central Andes composite 9", where the data was collected. Once the location page has been created, we can annotate its name and its associated geometry with the property "hasGeometry". This property takes into account the fact that a location may change over time (e.g., a river could change its course), so the geometry can change without affecting the location itself.

Finally, we add the coordinates to the Geometry page (if the page doesn't exist already). For this we use the AsWKT property, which indicates to the system that the coordinates are in the Well-Known Text format. Since in this case we are representing a point, we also add the type "Point" as a property of the geometry.

Adding and extending concepts

Sometimes one may want to extend some of the concepts that already exist in the wiki. For example, imagine that I have measured a variable in a table (d18Og.rub-w) with a specific stable isotope ratio mass spectrometer housed in my lab. If I merely state that the variable d18Og.rub-w was measured by a "stable isotope mass spectrometer instrument" (under the "instrument" category), I would be losing information: are all stable isotope mass spectrometer instruments the same? For instance, do they have the same uncertainty? Are the runs parameterized in the same way? The answer is probably no. Therefore, we need to state which stable isotope mass spectrometer was used; in this case, the one in my lab. Hence, we need to create the concept "StableIsotopeRatioMassSpectrometerInMyLab", referring to a specific "Instrument" with its own property values. For instance, two instruments of the same brand/model could have different reported uncertainties.
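The Well-Known Text format used by the AsWKT property above is a small textual geometry notation. As a rough illustration (the helper functions and coordinates below are invented for this sketch, not part of the wiki), a point or polygon geometry can be composed as a plain string:

```python
# Minimal sketch of composing Well-Known Text (WKT) geometry strings,
# as consumed by an AsWKT-style property. Coordinates here are made up.
def wkt_point(lon: float, lat: float) -> str:
    """Return a WKT POINT for a (longitude, latitude) pair."""
    return f"POINT({lon} {lat})"

def wkt_polygon(ring) -> str:
    """Return a WKT POLYGON from one closed ring of (lon, lat) pairs."""
    coords = ", ".join(f"{lon} {lat}" for lon, lat in ring)
    return f"POLYGON(({coords}))"

print(wkt_point(-70.5, -15.75))  # an invented point, roughly Andean
```

A polygon (for a region rather than a site) only differs in repeating the first coordinate to close the ring.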
Following the example shown in the previous section, we would need to create the "StableIsotopeRatioMassSpectrometerInMyLab" page and annotate it with the category "StableIsotopeRatioMassSpectrometer". This would become a new category on the wiki, referring to stable isotope ratio mass spectrometers in the more general sense.

Exercise: use an instrument from your lab to describe a variable from a dataset. If the category of your instrument does not exist, create a new instrument category (e.g., stable isotope mass spectrometer).
If you ask the most successful people in the world, whether it is in sports, or scuba diving, or growing bonsai trees, or lying on a couch and playing video games - most of them would tell you that they never started doing it with the intention of making money. It's the topic of my latest 3-minute episode of CutToTheChase.fm (you can listen to it right here, further below).

So, when you're first getting started, it's OK if you say "I don't care if I make money with this" - it's OK to not care about monetizing your craft.

But as we grow older and evolve as people, things inevitably start to change - and that includes our likes, dislikes, wants, needs... and most importantly, our priorities. So while it may have been OK to not care about making money with your craft when you're 20 or 25 or even 30 years old, at some point the bills will come calling. That's when you have to make a choice...

Do you want to continue to do what you love, and also be able to do it all the time? Or are you happy just doing it nights and weekends, while you depend on a day job to pay your bills?

Here's my latest episode (about 2 minutes long):
I Don't Care If I Make Money With This - Ep #68

And a couple of related episodes...
Starving Artist, Failed Entrepreneur - Ep #14
When To Say Yes - Ep #62
Only 3 Ways To Make Money - Ep #56
My Man-Spider Quote To Help You Accomplish Anything - Ep #31

What is the ONE THING that you would do right now, for free? Something you're willing to do all day, every day? Let me know in the comments below.

For me, it was being "geeky" and "nerdy", being an information junkie, reading everything online and offline about business, tech and marketing, geeking out on trying out new software, developing software, and helping other content creators like me solve problems - with content creation, delivery, marketing and monetization. That's my super power.
Figure out your super power - your one “Soul Provider” (like Michael Bolton would say) - and with my coaching program, I can help you figure out how to monetize it, so it can help you pay your bills, so you can do it all day, every day, and do what you love, and love what you do. – Ravi Jayagopal PS: Did I just date myself naming Michael Bolton? And would you think less of me if I told you that I used to love the guy’s music? LOL! (favorite MB song: “How Am I Supposed To Live Without You”)
Presentation published by Dylan Ryan; modified over 3 years ago.

Jeon Jun Shik, Samsung Electronics Mobile
neXt Generation Video Codec

Index: Why / How / Where

Compress:
1. To press or squeeze something together or into a small space.
2. To reduce something and fit it into a smaller space or amount of time.
3. (computing) To make computer files, etc. smaller so that they use less space on a disk, etc.

Why is compression needed? These days we have CDs (700 MB), DVDs (4 GB), and even hard disks (200 GB~300 GB; 1 GB = 1000 MB). But if we don't compress, a single video frame is about 5 MB, so:
- 5 MB x 30 frames = 150 MB for 1 sec
- x 60 seconds = 9 GB for 1 min
- x 120 minutes = about 1 TB for 1 movie

Redundancy (what compression removes): 1. Spatial 2. Time. In the digital world, everything is 0 or 1 (e.g., 000001110010101000110 001101010110000110010 ...).
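The uncompressed-size arithmetic on the slides can be checked in a few lines, using the slides' decimal convention of 1 GB = 1000 MB:

```python
# Uncompressed video size, following the slides' numbers:
# 5 MB per frame, 30 frames per second, 1 GB = 1000 MB (decimal units).
MB_PER_FRAME = 5
FPS = 30

per_second_mb = MB_PER_FRAME * FPS         # 150 MB for 1 second
per_minute_gb = per_second_mb * 60 / 1000  # 9 GB for 1 minute
per_movie_tb = per_minute_gb * 120 / 1000  # ~1 TB for a 120-minute movie

print(per_second_mb, per_minute_gb, per_movie_tb)
```

The 120-minute figure comes out to 1.08 TB, which the slides round to 1 TB.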
Multiple keys pressed

Any chance of including the option of having multiple keys pressed at the same time?

Thanks for the issue, will look into it asap.

Current code
The algorithm scans an 8 by 8 matrix. It does this quite efficiently by scanning 8 rows in one read followed by 8 columns in one read. I2C_KEYPAD8x8_NOKEY is easy to detect, as no row / column has changed. The current code does 8 checks (bitmask match) to see if a single bit has changed. If this is not the case, I2C_KEYPAD8x8_FAIL is set.

Multiple key detect
Technically it is easy to detect that multiple keys are pressed. This is the point where I2C_KEYPAD8x8_FAIL is set in the current code. The real problem is to detect which keys are pressed; this might be deducible, but not always. Furthermore, even when deducible it is not always possible to detect in which order (first, next, next, ..., last).

Imagine a 2 by 2 grid with 4 keys:

     C0  C1
 R0  A   B
 R1  C   D

(assuming A is always pressed, think rotational symmetry)
Pressing 2 keys can be done in 3 ways: (A, B), (A, C) or (A, D).
Pressing 3 keys can be done in 3 ways: (A, B, C), (A, B, D) or (A, C, D).
Pressing 4 keys can be done in 1 way: (A, B, C, D).

(A, B): set rows LOW => both C0 and C1 will be LOW; set cols LOW => only R0 will be LOW. Multiple keys on one row ==> deducible.
(A, C): set rows LOW => only C0 will be LOW; set cols LOW => both R0 and R1 will be LOW. Multiple keys on one column ==> deducible.
(A, D): set rows LOW => both C0 and C1 will be LOW; set cols LOW => both R0 and R1 will be LOW. Pressing 3 or 4 keys simultaneously will give this very same pattern => not deducible.

So of the 7 multiple-key presses, only 2 are deducible and 5 are not. The order of keypresses is unknown. 3 by 3 and larger matrices have the same problem (and several more complex ones). In short, there is no easy way to detect multiple keys.

Alternatives
In the above analysis the time axis was left out.
If there is a substantial time difference between the two keypresses, one does not do one scan but two (every change will generate an interrupt). In the first scan one sees that only key A is pressed. In the second scan one can see that multiple keys are pressed. Assuming that only one extra key is pressed, the (A, D) scenario above becomes deducible too. Pressing a 3rd key gives trouble. Releasing one of the two keys will result in a single key detection, so it is known which key was released. Releasing the final key can be detected too, of course. This alternative scenario can be implemented under the assumption that keypresses are "much" slower than handling the interrupt and the keypad scan. Never done that as I never had the need; feel free to create a PR for this.

Another alternative is - https://github.com/adafruit/Adafruit_TCA8418

If it's of any use, the standard Keypad library has a switch that allows different behaviors depending on the state of the button.

Very interesting to learn how it does the trick. Note: https://github.com/adafruit/Adafruit_TCA8418 uses a dedicated keypad processor for it. I know there are electronic tricks using diodes and/or analog voltages in combination with keypad scanning to handle multiple keys.

Q: What is the standard Keypad library? There are a lot of keypad libraries out there. Can you provide a URL?

Sorry, I called it standard because I thought it came installed by default with the Arduino IDE, but I think that might actually not be the case; I might have installed it a very long time ago. It's this one: https://github.com/Chris--A/Keypad

Yes, when there is a need for multiple keys being pressed, diodes need to be added to the button matrix. Yes, it's fully tested multiple times; it works like a charm as long as you know how to build a diode matrix (which is very easy, there are multiple images about it on Google).
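The 2x2 deducibility analysis earlier in this thread is easy to reproduce in a few lines. This sketch is not from the library; it just simulates the two-phase scan (drive rows LOW and read columns, then drive columns LOW and read rows) and shows that (A, D) produces the same signature as the 3- and 4-key combinations:

```python
# Sketch (not library code): simulate the two-phase scan of a 2x2 key
# matrix. Keys are (row, col) pairs: A=(0,0), B=(0,1), C=(1,0), D=(1,1).
def scan_signature(pressed):
    """Return (columns reading LOW, rows reading LOW) for a set of keys."""
    cols_low = frozenset(c for _, c in pressed)  # rows driven LOW, cols read
    rows_low = frozenset(r for r, _ in pressed)  # cols driven LOW, rows read
    return cols_low, rows_low

A, B, C, D = (0, 0), (0, 1), (1, 0), (1, 1)

# (A, B): both columns LOW, only row 0 LOW -> deducible (single row).
print(scan_signature({A, B}))
# (A, D) is indistinguishable from pressing all four keys:
print(scan_signature({A, D}) == scan_signature({A, B, C, D}))
```

Running it confirms the thread's conclusion: the diagonal pair and any 3- or 4-key press all read as "both columns LOW, both rows LOW".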
I believe it can handle up to 10 keys being pressed at the same time, though the library could easily be modified to handle more if needed.

I understand, it's a lot of work. I tried doing it myself, but my programming skills are not very advanced; I'm just a hobbyist that knows just a little bit of programming. I had to try! But thank you anyway. I think for my current project the easiest way to go is to just use 4 PCF8575 boards instead of just one and building a matrix.

Please provide the URLs that you know are good (there are incorrect drawings too - seen too many on the forum). Think this provides a good explanation - https://www.gammon.com.au/forum/?id=14175

I wire the cathode of the diode to one leg of the pushbutton, the anode to the row, and the other leg of the pushbutton to the column. In this way, multiple buttons being pressed at the same time are properly registered with the above keypad library.

It's definitely configurable, but as the state management is O(n^2) I expect it to be very slow.

Maybe, I don't know much about that. In any case, I do not need that many pressed at the same time. I have estimated a max number of buttons/switches being pressed at the same time of 6 to 9.

How many keys do you need to handle? With four PCF8575 you can handle up to 1024 (32x32) keys - would be quite a "keyboard".

I need just about 64, but some of the inputs are latching switches, meaning they will stay on in certain positions. So without the possibility of having multiple presses at the same time, I cannot use an 8x8 matrix. I will need to wire individual pins to each button, hence the 4 boards (4 x 16).

The other keypad libraries effectively scan one key at a time, where this library scans 8 rows/columns in one read. Scanning per key makes detecting multiple keypresses (using diodes) quite feasible. This would mean that I had to start with a rewrite of the core and also do a single key scan at a time.
As this library reads over I2C, a getKey() takes 2 reads. Scanning per key needs at least 16 reads, so it is at least a factor of 8 slower.

I need just about 64, but some of the inputs are latching switches, meaning they will stay on in certain positions. So without the possibility of having multiple presses at the same time, I cannot use an 8x8 matrix. I will need to wire individual pins to each button, hence the 4 boards (4 x 16).

OK, that makes perfect sense (and you do not need to solder the 64 diodes). Q: How many held keys do you have?

In my project in particular there are only 3 latching switches, but there are a number of buttons that might be kept pressed for a while. It's sort of a joystick (you can google Apache Tedac and you'll see what I'm building), and it has two two-stage triggers. One of the triggers controls a laser that needs to be kept on when you shoot a missile all the way until the missile hits the target, which is done with the other trigger. That means that the right trigger's first and second detents, which are momentary push buttons, are pressed when you push the left trigger's second detent to fire the missile, and they need to stay like that until the missile hits the target. So when firing a missile there's a split second when all 4 switches (two triggers) are pressed at the same time. To that, you need to add the possibility that one of the other 3 latching switches is on.

That is a Serious controller (with a capital S)! If performance is an issue, you might think of using - https://github.com/RobTillaart/MCP23S17 - these are 16-channel SPI based and read 16 channels in 40 us (Arduino UNO) or less, way faster than the I2C PCF8575. Might be worth a look.

@Assamita81 As there will be no multi-key support - could be a new library in a distant future - I close this issue. Feel free to reopen if needed.
Novel – The Mech Touch
Chapter 2868 – Path to Transcendence

He had always been aggravated by this problem. There were a lot of assistant mech designers in the Design Department who deserved the chance to blossom, but wouldn't be able to do so because their spiritualities were almost non-existent!

"In the Age of Mechs, the importance of mech pilots and mech designers cannot be overstated."

Just as Dr. Redmont's spiritual potential came into being, it began to resonate with the man's supercharged mind. He was quite familiar with the concept of resonance. He observed it often enough among mech pilots and mechs that developed a close and personal bond with each other.

The most galling aspect about this was that developing spiritual potential was mostly uncontrollable!

Nearly the entire chamber was dyed in red. This was what transcendence had wrought on the traitor. In the middle of a spreading mess of blood and tiny bits of body tissue, a pile of bones had fallen onto the comfortable recliner and floor.

"Why did this happen?" He frowned in puzzlement.

Fundamentally, the experiment provided Dr. Redmont with a path to transcendence. In fact, not only did various parts of the specimen's mind begin to resonate with each other, they also resonated with his weak but attuned spirituality!

For the first time in human history, people discovered not one, but numerous confirmed ways to make themselves better in an existential fashion.

"This explains why the destruction wasn't restricted to his head."

The arrival of the MTA and the deliberate creation of the mech industry and mech market introduced a great deal of change to human society.

"That's not all that bad, actually.
As long as the requirements are high, then only the best and most deserving people get to go one step beyond."

Just like how filling a balloon with air caused it to tense up, the unrestrained growth of Redmont's obsession eventually occupied every available space in his mind. There was no more room for his obsessions to expand any further!

If Ves stated that Melkor originally never had a chance to become an expert pilot, the Avatar Commander would probably end up crushed.

Ves wasn't certain that all of their secret projects reached success, but he was quite sure that any viable solution was bound to be impractical!

Fortunately, the solution was very simple. The view of the testing chamber was still very murky, however, so Ves activated a small charge that instantly caused all of the blood stuck to the window to shake down to the floor.

The existence of high-ranking mech pilots and mech designers proved that humanity was capable of transcending in a more controllable and widespread fashion, without having to rely on any particular faiths.

A devious grin appeared on his face. "It just so happens that there are plenty of candidates on this planet!"

The reason why they got away with their lies was that it was impossible to prove whether someone really transcended once they died.
Pretty much every human in the galaxy lacked his spiritual perception, and no tools existed that could register whether someone's soul ascended to a higher plane of existence, so every con artist could keep the lies going as long as the claims remained unfalsifiable! This effectively meant that the majority of humans had no choice but to rely on their own efforts to transcend mortality.

Eventually, Redmont's intense desire became so large and unwieldy that it seemed to collapse under its own weight, metaphorically speaking.

Ves tried to temper his enthusiasm by reminding himself of the many caveats of his ground-breaking experiment.

The profession of mech designer also wouldn't be as popular. Fewer people would apply to become a mech designer if this particular field was less able to compete against other engineering vocations, like becoming a naval engineer or civil engineer.

"Damn. Resonance isn't always good, I suppose."
APIs are among the most important innovations in software development. They have had a huge effect on the way we build and use web applications, enabling them to communicate with each other and, consequently, use parts of one another's functionality. Both users and developers feel the impact of APIs on a daily basis. In this short article, we summarize the fundamentals of what an API is, how it lets distinct applications talk to each other, and how to do API integration.

What is an API?

API stands for Application Programming Interface. An API is a software-to-software interface that enables applications to communicate with each other without user knowledge or intervention. In essence, an API is a piece of software code that exchanges structured messages (for example, XML) describing exactly which functions of the remote application will be used.

Presently the most common way of delivering APIs is REST, or REpresentational State Transfer. REST uses the same mechanisms that are used to view regular web pages. In most cases, a REST API lets you take data already available in one application and, through the programmatic API, make it accessible to mobile and web applications. The API can then return the data in formats such as:
- XML – Extensible Markup Language
- JSON – JavaScript Object Notation

Information produced in either of these formats can be easily consumed by programmers and non-programmers alike, because it can be readily transferred to spreadsheets and similar applications.

How Can an API Be Helpful for Software Developers?

Today, an API is one of the crucial capabilities every app should leverage to stay competitive in current markets.
The reason for this is the ability of an API to create a whole new kind of web presence: it makes your information accessible to other applications, private and public, and allows you to integrate with any other application that also provides an API.

There are several advantages of utilizing APIs for programmers and software suppliers:
- Hastening the application development process, by eliminating the need to build individual integration strategies for every desired application;
- Increasing the program's functionality, by supplying access to information from various other solutions;
- Enlarging the potential client circle, by attracting the users of integrated applications.
- Overall, APIs let you add a lot of extra functionality to your program simply by adding a few lines of code.

Establishing a connection using one API can be somewhat tricky, even though an API is a handy approach to integration. But what if you intend to integrate with multiple APIs? Certainly, you can go ahead and develop all the integrations yourself. Bear in mind, though, that the process consists of multiple phases, for example onboarding, exploration, instruction, authentication, code testing, samples, monitoring and sandboxing. Another way is to hire a developer to manage the integrations for you. The best option is a dedicated API integration service. This kind of approach to API integration can save you a lot of money and time.
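As a small illustration of consuming a REST API response, here is a sketch of parsing a JSON payload. The endpoint, field names, and values are invented for the example; a real API documents its own response shape, and a real call would fetch the body over HTTP instead of using an inline string:

```python
import json

# Hypothetical response body from a REST API (invented for illustration).
response_body = """
{
  "user": {"id": 42, "name": "Ada"},
  "repos": [{"name": "api-demo", "stars": 7}, {"name": "toolbox", "stars": 3}]
}
"""

data = json.loads(response_body)  # parse the JSON text into Python objects
user_name = data["user"]["name"]
total_stars = sum(repo["stars"] for repo in data["repos"])

print(user_name, total_stars)
```

This is the whole appeal of the formats mentioned above: once parsed, the remote application's data is just ordinary dictionaries and lists in your own program.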
The PO application is used to capture purchasing information. Oracle provides predefined forms, concurrent programs, and other related programs. The client can use those forms and programs directly, or can customize the existing objects (forms, reports, programs). In the PO application flow we find three types of people:
1) Requestor: the employee who requires the materials
2) Preparer: the employee who prepares the document
3) Buyer: the employee who has the authority to purchase the materials
Requisition: one of the purchasing documents, prepared by an employee whenever he requires materials, services, training, and so on. We have two types of requisitions:
1) Internal: an internal requisition is created when materials are received from another inventory inside the organization.
2) Purchase: a purchase requisition is created when purchasing the materials from suppliers.
We enter the requisition at three levels:
1) Header: open the Requisition form, enter the requisition number, and select the type at header level.
2) Lines: enter the item information at line level, such as item name, quantity, unit price, tax, and so on.
3) Distributions: select the Distributions button and enter the distribution details.
Select the Approve button to submit the requisition document for approval. Open the Requisition Summary form, enter the requisition number, and select the Find button; we can find the requisition status, whether it is approved or not. Select Tools menu => View Action History to find the history details. Select Tools menu => Control option to cancel the requisition.
RFQ Document (Request For Quotation): once the requisition is approved, the buyer prepares the RFQ document, which is delivered to the supplier. The supplier responds to it with a quotation.
We have three types of RFQ documents:
1) Bid RFQ: prepared for a specific fixed quantity; there won't be any price breaks.
2) Catalog RFQ: created for materials that we purchase from suppliers regularly and in large quantities. Here we can specify the price breaks.
3) Standard RFQ: prepared for items that we purchase only once, not very often. Here we can include discount information at different quantity levels.
RFQ information is entered at 3 levels:
1) Header
2) Lines
3) Price Breaks (Catalog, Standard) or Shipments (only for Bid RFQ)
Terms and Conditions: while creating the RFQ document we select the Terms button and enter the terms and conditions details.
Payment Terms: when the organization is going to make the payment, and interest rates.
Freight Terms: who is going to bear the transportation charges, whether buyer or supplier.
FOB (Free On Board): if any material is damaged or any quantity is missing, who bears the responsibility for those materials.
Carrier: the name of the transportation company through which the organization's required materials are transported.
Open the RFQ form (RFQ and Quotations => RFQs), select type, dates, and so on; enter the item details at line level; select the Terms button and enter the terms and conditions details; select the Price Breaks button and enter the price break details; select the Suppliers button and enter the supplier details (who receive this document). Select the Add From List button to include the supplier list automatically.
Buyer Name: TABLE (internally the buyer ID should be passed) - Optional
Due date  Curr  Close date  Total  Creation date  User (created_by)
Lineno  Item  UOM  Price  Shipno  Qty  Price  Discount
Quotation: another purchasing document, which we receive from the supplier and which contains the supplier's quote details, price, payment terms, and so on.
Whatever quotations we receive from the supplier, we enter into the system. We have three types of quotations: 1) Bid 2) Catalog 3) Standard. For a Bid RFQ we receive a Bid quotation from the supplier; for a Catalog RFQ we receive a Catalog quotation; for a Standard RFQ we receive a Standard quotation. After entering all the quotations into the system, management does a quote analysis, and per that, one best quotation is selected for the purchase order.
Item Name (table value set MTL_SYSTEM_ITEMS_B, Segment1)
QuoteNo  Type  Cdate  Supplier  Site  ContactPerson  Buyer  Created (UserName)
AutoCreate: a purchasing feature to create the RFQ and PO documents automatically by using requisition lines.
1) Create a requisition and approve it.
2) Open the AutoCreate form.
3) Select the Clear button and enter the requisition number.
4) Select the Find button, which shows all the requisition lines; select the lines we want to include in the RFQ.
5) Select Action = Create to create a new RFQ, or Add To to add lines to an existing RFQ.
6) Select Document Type = RFQ.
7) Select the Automatic button, which creates the RFQ document automatically.
Purchase Order: the PO is the main document, prepared and approved by the buyer and sent to the supplier. It contains the following information: terms and conditions, distribution and shipment details, and so on. We have four types of purchase orders: 1) Standard
Purchase Orders => Purchase Orders: open the PO form and enter the information at header level; at line level enter the items, quantity, and price details; select the Shipments button and enter the shipment details; select the Distributions button and enter the distribution details. Select the Approve button (uncheck the Email check box); the document will be submitted. Open the Purchase Order Summary form, enter the PO number, and select the Find button to find the status of the purchase order.
Go to the Tools menu:
Action History => to find who has submitted the document for approval, and approve/reject/cancel details.
Copy Document => to create another PO based on this PO.
Control => to close the purchase order or to cancel the purchase order.
Purchase Order Report
POno:  Buyer:  POtype:  Supplier:  ShipTo:  Supplier Site:  BillTo:  Contact:  Cdate:  Status:  POTotal:  Payment Terms:  Freight Charges:  FOB:  Carrier:
Lineno  Item  Desc  Qty  Price  Shipno  ShiptoLoc  ShipToOrg  Qty  Distno  Distqty  Requestor
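The header, line, shipment, and distribution levels described above form a simple document hierarchy. The sketch below models it with plain Python data classes purely for illustration; the field names are invented and are not Oracle's actual PO schema:

```python
from dataclasses import dataclass, field
from typing import List

# Invented field names for illustration; the real Oracle PO tables differ.
@dataclass
class Distribution:
    dist_no: int
    qty: int
    requestor: str

@dataclass
class Shipment:
    ship_no: int
    ship_to_org: str
    qty: int
    distributions: List[Distribution] = field(default_factory=list)

@dataclass
class Line:
    line_no: int
    item: str
    qty: int
    unit_price: float
    shipments: List[Shipment] = field(default_factory=list)

@dataclass
class PurchaseOrder:
    po_no: str
    buyer: str
    supplier: str
    lines: List[Line] = field(default_factory=list)

    def total(self) -> float:
        """PO total = sum of quantity times unit price over all lines."""
        return sum(l.qty * l.unit_price for l in self.lines)

po = PurchaseOrder("PO-1001", "Buyer A", "Supplier X")
po.lines.append(Line(1, "Widget", 10, 2.5))
print(po.total())  # -> 25.0
```

The one-to-many nesting (PO has lines, lines have shipments, shipments have distributions) mirrors the order in which the forms above are filled in.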
# Author: Howard Webb
# Date: 7/25/2017
# Code for managing the relay switch

import RPi.GPIO as GPIO
import time
from LogUtil import get_logger

ON = 1
OFF = 0

Relay1 = 29  # Fan
Relay2 = 31
Relay3 = 33  # LED
Relay4 = 35  # Solenoid

lightPin = 29
fanPin = 35


class Relay(object):

    def __init__(self):
        GPIO.setwarnings(False)
        GPIO.setmode(GPIO.BOARD)
        GPIO.setup(Relay1, GPIO.OUT)
        GPIO.setup(Relay2, GPIO.OUT)
        GPIO.setup(Relay3, GPIO.OUT)
        GPIO.setup(Relay4, GPIO.OUT)
        self.logger = get_logger('Relay')

    def set_state(self, pin, state, test=False):
        '''Change the pin state only if it differs from the current state'''
        msg = "{}, {}, {}".format("Current ", state, GPIO.input(pin))
        self.logger.debug(msg)
        if state == ON and not GPIO.input(pin):
            self.set_on(pin)
            msg = "{} {} {}".format("Pin:", pin, " On")
            self.logger.debug(msg)
        elif state == OFF and GPIO.input(pin):
            self.set_off(pin)
            msg = "{} {} {}".format("Pin:", pin, " Off")
            self.logger.debug(msg)
        else:
            msg = "{} {} {}".format("Pin:", pin, " No Change")
            self.logger.debug(msg)

    def get_state(self, pin):
        '''Get the current state of the pin'''
        return GPIO.input(pin)

    def set_off(self, pin, test=False):
        GPIO.output(pin, GPIO.LOW)

    def set_on(self, pin, test=False):
        GPIO.output(pin, GPIO.HIGH)


def test():
    relay = Relay()
    print("Test")
    print("Read #3 Unknown: ", relay.get_state(Relay3))
    print("Test Fan and Lights")
    print("Turn Fan On")
    relay.set_on(fanPin, True)
    time.sleep(5)
    print("Turn Light On")
    relay.set_on(lightPin, True)
    time.sleep(5)
    print("Turn Fan Off")
    relay.set_off(fanPin, True)
    time.sleep(5)
    print("Turn Light Off")
    relay.set_off(lightPin, True)
    time.sleep(5)
    print("Conditional Turn Fan On")
    relay.set_state(fanPin, ON, True)
    time.sleep(5)
    print("Conditional Turn Fan On")
    relay.set_state(fanPin, ON, True)
    time.sleep(5)
    print("Conditional Turn Fan Off")
    relay.set_state(fanPin, OFF, True)
    time.sleep(5)
    print("Conditional Turn Fan Off")
    relay.set_state(fanPin, OFF, True)


def test1():
    relay = Relay()
    relay.set_state(Relay1, ON)
    relay.set_state(Relay1, OFF)
    relay.set_state(Relay2, ON)
    relay.set_state(Relay2, OFF)
    relay.set_state(Relay3, ON)
    relay.set_state(Relay3, OFF)
    relay.set_state(Relay4, ON)
    relay.set_state(Relay4, OFF)


if __name__ == "__main__":
    test()
Having made it through Fall 2020, a bit by the skin of my teeth, I thought I would write up the things that seemed important to me. I'm not sure if these would be a lot of help to anyone else, but it seems worth having a record of how I was feeling. In particular, as I'm moving away from teaching undergraduates, I thought having a record of how the last time went, from my own perspective, would be a good idea.
|That first day.|
Things That Probably Apply to All Emergency Online Teaching
- Students appreciated the amount of empathy and flexibility I brought to the class.
- Flipping two classes, while moving online and trying to be very on top of assessment, was really, really hard.
- I ended up giving myself significantly more work than I could really manage, and it made handling everything the whole semester harder than it should have been. I ended up having to slip deadlines and cut elements from the course on the fly. In the end I think the damage was contained, but I definitely didn't have the semester I was hoping for.
- Generally, smaller one-topic videos are the best fit for what students are looking for.
- Students generally found that a flipped experience online (recorded lectures, with readings and quizzes) was a lot of work. I've lost the reference, but this seems to be due more to the flipped classroom forcing them to actually do the learning activities more regularly.
- It was hard for both the students and me to assess how long things would take them to do.
- In the long term I think this works out, but you definitely have to adjust your assumptions about how much work a student can get done in a week and make sure they have some time to breathe around your constant low-level work.
- That being said, videos, especially ones where I worked topics and examples on paper, were very well received and were usually the things students pointed out as working very well for them.
- I found my first year (first semester) students were much more willing to adapt and work in an online context than my returning students. This was true for the first few months, but flipped as we got into the end of the semester.
- That may have been because I had trouble keeping up with the schedule myself, and the returning students had more context for that situation.
Things that Apply to Learning Technology in an Emergency Online Classroom
- Make sure you understand what the student experience of each piece of technology you use is.
- Using Blackboard, I discovered that the feedback I was writing to students, which appeared alongside their grade in *my* view, was not shared with the students unless you changed *several* configurations.
- Using Blackboard, I also discovered that if students are using their phone to look at the course (and they are), then details under items aren't shown, so they often didn't see links available in the description of an item.
- Have a Plan B in place, even if you don't think you need it.
- Our primary tool for practicing coding shut down in October. The effort to replace it was astronomical and took me and one of our staff the better part of 2 weeks. Even then, we didn't get the system really nailed down until the last few weeks of class.
- If you happen to be teaching at Mount Royal University:
- You are not supposed to "title" your questions in a Blackboard assessment.
- If you happen to be using Blackboard:
- You should, under no circumstances, name a question "null".
Things that Apply to a Programming One Class
- Trying to stay language agnostic and approach the basic concepts of computing and problem solving using Karel the Robot worked fairly well.
- I regretted not having a perfectly functioning Karel tool, but I started working with the students moving paper-doll Karels around and I think that worked well.
- Transitioning into Java was a bit rough.
We had some tech problems (see Plan B above) and that slowed us down, but also the sheer amount of extra stuff Java needs for basic programming concepts makes it harder to pick up.
- Honestly, this is the part of the course I'm least sure about. The transition to Java was rougher, but transitioning students via Python hasn't been as smooth as I want either.
- I really want to have students writing 1-2 drills a day, maybe only 5 lines of code; I just want to see them keep working on stuff.
- I think a Programming One class can do without a lot of larger assignments or projects. Generally I think the focus should be on becoming fluent, and then in Programming Two they can apply it to something interesting.
- A lot of the above is based on the idea that the bulk of the class can't program already. I'm not convinced that's true. I did a survey at the beginning of the semester, and the bulk of the students described themselves as having some programming experience.
- I also struggled a bit keeping the more confident programmers from running away with the thread in class. I think everyone ended up well enough this semester, but I think we need to be cognizant of how experience is handled coming into a Programming One class.
- Worth noting that several of the more experienced programmers appreciated how Karel forced them to be clear in their programming thoughts.
Things that Apply to a Programming Two Class
- I'd like a giant pool of drills to draw from. Giving students a selection of application areas for a given programming topic would help broaden their perspective. One thing I didn't manage to do but want to do is show them the different solutions they produce for a drill, and that generally is easier if you have 4 answers each for 5 questions rather than 20 answers for 1.
- I like the idea of a semester-long assignment or project for students, but I've struggled to find a way to introduce it effectively. This year,
particularly, trying to do an assignment alongside my students was a real struggle. In the future I'd rather have all of the pieces done in advance, but I will say the students seemed to appreciate watching me build my solution to the assignment as well.
Because this is procedurally generated at runtime (hills can have different sizes), I cannot bake textures in Blender. The mesh for a large hill has around 800 vertices. Here is my question: is it better to duplicate vertices and then use one material, or to use four different materials without duplicating vertices?

Hello and welcome Jonek2208! Unity-specific questions are probably better suited for the Unity Graphics Forum. However, I'll try to give you an answer, as I do have some experience with the engine. In my opinion, the only way of knowing for sure which method is best would be to implement both and profile them on your target device. Do not fool yourself into thinking that some basic algorithmic analysis is all you need. Always profile and compare your solutions, no matter the circumstances. One thing I can do is tell you the trade-offs between the two methods, and also provide a hint at another possible solution. Let's go!

When using the built-in rendering pipeline, Unity batches objects by material. A batch is a change in GPU state followed by one or more draw calls. In other words, if you have four materials, you increase the number of batches compared to having just one. If you are targeting mobile devices, reducing the batch count is one of the keys to achieving good performance, and most mobile developers would choose to use a single material. On the other hand, duplicating vertices incurs a cost in memory, so you get better batching at the expense of using more memory. If you are targeting consoles or PCs, you can afford a much higher batch count and memory budget, so whichever solution you think is simpler to implement, expand, and maintain should be your best bet.

Another solution would be to use textures instead of vertex colors. For this, you would need to compute the uv-map procedurally when generating the vertices. You could place all textures into an atlas and try to use a single material.
This is the usual solution for mobile. If you are targeting PCs/consoles, you could even afford procedural textures in your fragment shader. If you are using the latest version of Unity, you can use the SRP Batcher, which batches per shader instead of per material. Hence, you can use 4 materials but still have a single batch if all of them use the same shader. Note that some devices may not have this capability, especially on mobile.

All in all, there are a lot of things to consider in any kind of application. That is why I started by talking about profiling on your target device. Honestly, it is the only way of knowing for sure which method is best. You should build a test scene with thousands of generated meshes and then profile each method. Select the one that best fits your budget (FPS/memory/etc.) and the long-term goals of your project. Good luck! :)
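As a rough illustration of the memory side of the trade-off discussed above, here is a back-of-the-envelope sketch in Python. The per-vertex layout (position 12 B, normal 12 B, color 4 B) and the worst-case duplication factor of 4 are assumptions for the sake of the example, not measurements:

```python
# Assumed vertex layout: position (12 B) + normal (12 B) + color (4 B).
BYTES_PER_VERTEX = 12 + 12 + 4

def mesh_bytes(vertex_count, duplication_factor=1.0):
    """Approximate vertex-buffer size in bytes for one hill mesh."""
    return int(vertex_count * duplication_factor * BYTES_PER_VERTEX)

shared = mesh_bytes(800)       # shared vertices, single material
split = mesh_bytes(800, 4.0)   # worst case: every vertex split four ways
print(shared, split)           # -> 22400 89600
```

Numbers this small suggest memory is unlikely to be the bottleneck for a handful of hills; the cost only starts to matter at scale, which is exactly why profiling a scene with thousands of generated meshes is the right test.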
// YnaEngine - Copyright (C) YnaEngine team
// This file is subject to the terms and conditions defined in
// file 'LICENSE', which is part of this source code package.
using System;
using Microsoft.Xna.Framework;

namespace Yna.Engine.Graphics.Animation
{
    /// <summary>
    /// A Shake effect for a SpriteBatchCamera
    /// </summary>
    public class YnShakeEffect : YnBasicEntity, IEffectAnimation
    {
        protected static readonly Random random = new Random();
        protected bool _shaking;
        protected float _shakeMagnitude;
        protected float _shakeDuration;
        protected float _shakeTimer;
        protected Vector3 _shakeOffset;
        protected YnCamera2D _camera;

        public YnShakeEffect(YnCamera2D camera)
        {
            _shaking = false;
            _shakeMagnitude = 0.0f;
            _shakeDuration = 0.0f;
            _shakeTimer = 0.0f;
            _shakeOffset = Vector3.Zero;
            _camera = camera;
        }

        /// <summary>
        /// Get a float in a range of -1.0f / 1.0f
        /// </summary>
        /// <returns></returns>
        private float NextFloat()
        {
            return (float)random.NextDouble() * 2.0f - 1.0f;
        }

        /// <summary>
        /// Shake the camera
        /// </summary>
        /// <param name="magnitude">Desired magnitude of the effect</param>
        /// <param name="duration">Desired duration</param>
        public void Shake(float magnitude, float duration)
        {
            if (!_shaking)
            {
                _shaking = true;
                _shakeMagnitude = magnitude;
                _shakeDuration = duration;
                _shakeTimer = 0.0f;
            }
        }

        /// <summary>
        /// Update the effect
        /// </summary>
        /// <param name="gameTime"></param>
        public override void Update(GameTime gameTime)
        {
            if (_shaking)
            {
                _shakeTimer += (float)gameTime.ElapsedGameTime.Milliseconds;

                if (_shakeTimer >= _shakeDuration)
                {
                    _shaking = false;
                    _shakeTimer = _shakeDuration;
                    _camera.X = 0;
                    _camera.Y = 0;
                }
                else
                {
                    float progress = _shakeTimer / _shakeDuration;
                    float magnitude = _shakeMagnitude * (1.0f - (progress * progress));

                    _shakeOffset = new Vector3(NextFloat(), NextFloat(), NextFloat()) * magnitude;
                    _camera.X += (int)_shakeOffset.X;
                    _camera.Y += (int)_shakeOffset.Y;
                }
            }
        }
    }
}
Parsing array to singleton objects is not working As described in our conversation here, I'm having trouble with an Azure Function parsing the input coming from an event hub. The input is as follows: [{"deviceid":"repsaj-neptune-win10pi","readingtype":"temperature1","reading":22.031614503139451,"threshold":23.0,"time":"2016-06-22T09:38:54.1900000Z"}] The data in the event hub is coming from Azure Stream Analytics. If I understand correctly, when my Function accepts a singleton class instance the above should automatically be parsed to one singleton item. This is not the case, instead I'm getting: 2016-06-24T18:25:16.830 Exception while executing function: Functions.submerged-function-ruleout. Microsoft.Azure.WebJobs.Host: Exception binding parameter 'inputMessage'. Microsoft.Azure.WebJobs.Host: Binding parameters to complex objects (such as 'RuleMessage') uses Json.NET serialization. 1. Bind the parameter type as 'string' instead of 'RuleMessage' to get the raw values and avoid JSON deserialization, or 2. Change the queue payload to be valid json. The JSON parser failed: Cannot deserialize the current JSON array (e.g. [1,2,3]) into type 'Submission#0+RuleMessage' because the type requires a JSON object (e.g. {"name":"value"}) to deserialize correctly. To fix this error either change the JSON to a JSON object (e.g. {"name":"value"}) or change the deserialized type to an array or a type that implements a collection interface (e.g. ICollection, IList) like List that can be deserialized from a JSON array. JsonArrayAttribute can also be added to the type to force it to deserialize from a JSON array. I also tried changing the parameter type to an array instead, this doesn't work either. For more info see SO question. So what you're saying is that the actual JSON data for each of your events is an array, rather than a JSON object? If that is the case, I can see how the binding would not work. 
It can either map to a singleton, in which case it expects the event data to be a JSON object, or to a batch of events, in which case the runtime will fetch batches of events from the hub and pass them to your function in one go. However, in that case we still currently expect each of those messages to be a JSON object. Do you control the ingest of these events into your hub? Can you make them JSON objects rather than arrays? Related? https://github.com/Azure/azure-webjobs-sdk-script/issues/458

That's correct, the precise input which the function gets from the event hub is:

[{"deviceid":"repsaj-neptune-win10pi","readingtype":"temperature1","reading":22.031614503139451,"threshold":23.0,"time":"2016-06-22T09:38:54.1900000Z"}]

So that's an array with one single element. The data comes from: IoT Hub => Stream Analytics Query => Event Hub => Function. I've checked the output for the ASA job, which is set to output JSON data in Array format. I guess it always produces an array even if there's only one item to output. The other option I have there is "line separated", so I tried that one instead, which actually works. But I think that's due to the fact that at the moment the ASA job is not producing more than one item at a time. When I input the following manually, it fails again:

{"deviceid":"repsaj-neptune-win10pi","readingtype":"temperature1","reading":22.031614503139451,"threshold":23.0,"time":"2016-06-22T09:38:54.1900000Z"}
{"deviceid":"repsaj-neptune-win10pi","readingtype":"temperature1","reading":22.031614503139451,"threshold":23.0,"time":"2016-06-22T09:38:54.1900000Z"}

In my opinion, the array formatting is actually better than line separated and should be parsed correctly into an array of POCOs by the runtime when the function accepts this as a parameter, even if there's only one item in there.

Our binding (actually all the Functions/WebJobs bindings) when binding to a POCO type expects the payload to be a JSON singleton, not an array. Same goes for queue bindings, etc.
The array support for the EventHub binding is for retrieving multiple events in a batch, not for deserializing single-event arrays into objects. The right way forward for your scenario is to bind either to the raw EventData or to a string. You can then handle the event data yourself. These binding options are detailed here: https://github.com/Azure/azure-webjobs-sdk/wiki/EventHub-support

Hi Mathew, I understand this. But it was your advice to change to a POCO, because I wanted to set the TagExpression property of the binding to some data originating from my input object. So on the one hand I should use a POCO, but on the other hand there is no way to deserialize the input the event hub generates (which I cannot change) into POCOs. I'm more than happy to switch back to handling the raw string and converting it to an array of objects myself, but how do I then set the TagExpression?

Yes, when you're not binding to a POCO, you'll lose the parameter binding allowing you to bind to message properties from the POCO parsed from the event. ASA is complicating this pipeline - I'd check with them to see if you can't get them to output a singleton in the event payload. However, there is an advanced way you can get things to work, using our dynamic IBinder support. See an example below. Basically, IBinder allows you to bind to the underlying WebJobs SDK attribute for the binding (in your case it would be NotificationHubAttribute). The example below news up the BlobAttribute specifying the path, and writes the output.

using System;
using Microsoft.Azure.WebJobs;

public static void Run(string myQueueItem, IBinder binder, TraceWriter log)
{
    var attribute = new BlobAttribute("test/ibinder");
    var textWriter = binder.Bind<TextWriter>(attribute);
    textWriter.Write("testing");
    log.Info($"C# Queue trigger function processed: {myQueueItem}");
}

In our case, I think you'd need something like the below. Note that using IAsyncCollector means you need to declare your function as async.
Then you can process your input array, new up Notification instances setting all the notification properties you want, and call notifications.AddAsync(notification) for each. When the function returns, all the notifications will be sent.

var attribute = new NotificationHubAttribute
{
    ConnectionStringSetting = "<appsetting>",
    HubName = "<hubname>"
};
var notifications = binder.Bind<IAsyncCollector<Notification>>(attribute);

Any pointers as to where to find the NotificationHubAttribute class? I need a reference, but I have no clue as to which namespace.

Yes, that is the attribute. To get this to compile, you need to add a #r reference to Microsoft.Azure.WebJobs.Extensions.NotificationHubs (the assembly where the attribute lives). However, there is an issue which makes that somewhat difficult. That assembly isn't part of our "default set", so you can't simply do:

#r "Microsoft.Azure.WebJobs.Extensions.NotificationHubs"

as you'd expect. Either you'd have to add a package reference to the NuGet package containing the NH extension, or use the full path

#r "D:\Program Files (x86)\SiteExtensions\Functions\0.3.10261\bin\Microsoft.Azure.WebJobs.Extensions.NotificationHubs.dll"

into the Functions runtime bin dir. Neither of these options is great. I've logged bug #472 so we can improve this.

Ok. I've added the reference, which solved the compiler error. I've introduced the IBinder parameter and set the NotificationHubAttribute with the correct connection string and hub name; all is well. The messages are being added to the collection, but nothing happens after the function exits. I removed the old notification hub binding to see whether that would make a difference, which it doesn't. So there are no errors any more and everything seems to be OK, but no messages are being sent. Any ideas on how to proceed?

Can you please share your code?
The code is not pretty at the moment, but here it is: https://gist.github.com/jsiegmund/e3ddc1d1783423d6f1787a4b0575ee4c

I believe your issue is that you're not awaiting on IAsyncCollector.AddAsync. It's async, so you'll have to change your method signature to async. If you do that, you'll have to change your blob binding from out string to something like Stream or TextWriter and write to that. Another option you could try is to bind to ICollector rather than IAsyncCollector.

When I change to:

IAsyncCollector<Notification> notifications = await binder.Bind<IAsyncCollector<Notification>>(attribute);

the error changed to:

run.csx(40,51): error CS1061: 'IAsyncCollector<Notification>' does not contain a definition for 'GetAwaiter' and no extension method 'GetAwaiter' accepting a first argument of type 'IAsyncCollector<Notification>' could be found (are you missing a using directive or an assembly reference?)

So I tried option 2:

ICollector<Notification> notifications = binder.Bind<ICollector<Notification>>(attribute);

I also had to change from AddAsync to Add; then compiling works. But now when I execute:

Exception while executing function: Functions.submerged-function-ruleout. mscorlib: Exception has been thrown by the target of an invocation. Microsoft.Azure.NotificationHubs: Value cannot be null. Parameter name: connectionString.

So it says the connection string isn't set, but I did specify it in the NotificationHubAttribute class and have double-checked that it's set (and valid). So I'm not sure why it would say that.

Still haven't been able to resolve this (I left it aside for a while). After upgrading to the latest version of the runtime, the error has changed:
Microsoft.Azure.WebJobs.Host: 'Endpoint=sb://repsaj-neptune-notifications.servicebus.windows.net/;SharedAccessKeyName=DefaultFullSharedAccessSignature;SharedAccessKey=xxx' does not resolve to a value. No clue what to do with that. That error indicates that you're incorrectly attempting to put an actual connection string in your function.json file for a "connection" property. This property should be the name of an app setting that contains the value of the connection string. This indirection is for security reasons, to avoid people putting their secrets in their source code.
Where 1193 and 2295 are patient_ids and the UUID is auto-generated. When I try to create a new instance with the same openmrs database and configuration as the server that has the images above, then add the images to their respective folders in /home/bahmni/document_images, the images are not displayed in the Documents tab. I'm wondering if there is something I am doing wrong?

Can you tell us the approach you have followed to copy the database and configuration files from one instance to the other? We suggest you follow the Backup/Restore commands wiki page to take a backup from one instance and restore it in the other instance, as it has support for file backup.

Hi @jmbabazi, regarding these files:

├── 1193-Consultation-1de16bb1-94f4-431a-8521-886b5469666b.png
│ ├── 1193-Consultation-1de16bb4-94f4-431a-8521-886b5469666b_thumbnail.png

These are not patient document images that you can see in the Documents tab as mentioned in this wiki page. These are patient consultation images that got uploaded from the Observation tab. You can see these consultation images if you open the particular visit of the patient during which they were uploaded. You can also try it on demo.mybahmni.org. The document_images thing is not a specific endtb configuration. Endtb has a different use case for it.

Ideally, restoring the /home/bahmni/document_images folder along with the database restore should work. But in your case the image is still not displayed. Do you see the observation under the Documents tab? I anticipate that the permissions on the folders/files might have got messed up because of the restore. Please find below the right permissions on the folders and files. If this still doesn't make it work, we need to debug with one particular observation and check whether the observation table's value_complex is referring to the correct file.

Thank you @swathivarkala. I've managed to restore these images. The issue was with the old DB, which didn't have the value_complex data.
I assume it's okay to just add my other question here instead of creating a new thread. In the Documents tab, you cannot view a pdf file if the json configuration of the patient document is obs-to-obs flowsheet. To view a pdf document, the config should be like the one below; but this changes the beautiful tabular display, which most clinicians love.
In this tutorial we learn how to set up SPI (Serial Peripheral Interface) on the Raspberry Pi. By default, Raspbian is not configured with the Raspberry Pi's SPI interface enabled. If you want to enable it, the procedure is simple and easy: just use the Raspberry Pi Configuration tool that you will find on the main menu under Preferences. Check the box for SPI and click OK. You will be prompted to restart. On older versions of Raspbian, the raspi-config tool does the same job:

$ sudo raspi-config

then select Advanced, followed by SPI, and then Yes before rebooting your RPi.

Why do we need SPI? We use SPI for serial communication. SPI allows serial transfer of data between the Raspberry Pi and peripheral devices, such as analog-to-digital converter (ADC) and port expander chips, among other devices.

You may also like: How To Use Raspberry pi in a truely headless mode

Installing PySerial for Access to the Serial Port from Python

If you want to use the serial port (Rx and Tx pins) on the RPi from Python, install the PySerial library:

$ sudo apt-get install python-serial

Now create a connection:

ser = serial.Serial(DEVICE, BAUD)

where DEVICE is the device for the serial port (/dev/ttyACM0) and BAUD is the baud rate as a number:

ser = serial.Serial('/dev/ttyACM0', 9600)

Once the connection is established, you can transmit data over serial like this:

ser.write('write some msg')

Listening for a response normally involves a loop that reads and prints the incoming data.

If you want to control an LED with the Raspberry Pi, visit these tutorials:

Installing Minicom to Test the Serial Port

You can use Minicom if you want to send and receive serial commands from a terminal session:

$ sudo apt-get install minicom

After installation you can communicate with a serial device connected to the Rx and Tx pins of the GPIO connector by using this command:

$ minicom -b 9600 -o -D /dev/ttyACM0

where -b is the baud rate and -D is the serial port.
Remember to use the same baud rate as the device you are communicating with. Now your Minicom session has started. First, turn on local echo so you can see the commands that you are typing. To do this, press CTRL+A and then CTRL+Z; you will see the command list. Now press CTRL+E to turn on local echo. Now the messages you send and receive will also be displayed. I hope you like this tutorial, Setting up SPI on Raspberry Pi. You may like also: - How To Create Secure MQTT Broker - Simple Raspberry Pi Home Security System - Using Mq135 Sensor with InfluxDB - Best OS for Raspberry Pi - ROCK Pi 4 : Overview | Installation - Remote control your Raspberry Pi from your PC with VNC! - How To Use Raspberry pi in a truely headless mode - Measuring Raspberry Pi CPU Temperature
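The read-and-print loop mentioned in the PySerial section above might look like the following sketch. The device path and baud rate are assumptions for illustration, and because a real serial device may not be attached, the loop is factored into a function that works with any file-like object (a real serial.Serial instance, or an in-memory stand-in):

```python
import io

def read_and_print(port, max_lines=10):
    """Read newline-terminated responses from a port-like object and print them.

    Works with a real serial.Serial object, or with anything that has
    readline() returning bytes (an empty read means timeout / nothing left).
    """
    lines = []
    for _ in range(max_lines):
        data = port.readline()              # blocks until a full line (or timeout)
        if not data:                        # empty read: stop listening
            break
        text = data.decode('ascii', errors='replace').strip()
        print(text)
        lines.append(text)
    return lines

# With a real device (path and baud rate are illustrative assumptions):
#   import serial                           # from the python-serial package
#   ser = serial.Serial('/dev/ttyACM0', 9600, timeout=1)
#   ser.write(b'some message\n')            # note: write() takes bytes in Python 3
#   read_and_print(ser)

# Demonstrated here with an in-memory stand-in for the port:
fake_port = io.BytesIO(b'hello\nworld\n')
print(read_and_print(fake_port))            # ['hello', 'world']
```

The timeout=1 argument matters in practice: without it, readline() can block forever if the device never sends a newline.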
Graduate Seminar WS 2021/22: Branching Brownian motion and log-correlated fields Prof. Dr. Anton Bovier, Dr. Adrien Schertzer Time and place: Tuesdays, 14h ct, Room N 0.003 The seminar will take place in real life. Note that due to the meeting of the Scientific Advisory Board of the University on October 19 that I have to attend, we will start the seminar only on October 26. All talks are postponed by one week. All participants should register urgently for the course on eCampus!!! Here is a provisional list of talks. The analysis of extreme values of stochastic processes has for a long time been an important theme of applied probability. While much of the classical theory concerned independent random variables or the identification of conditions under which the extremes of a process are well described by those of iid variables, more recently a class of processes where correlations just begin to matter has emerged as a new universality class. They are called log-correlated processes, since the covariance decays essentially like the logarithm of the distance. The classical example of such a process is Branching Brownian Motion (BBM) and its close relative, the Branching Random Walk (BRW). In these examples, the underlying tree structure helps in the precise analysis of the laws of the maxima and the extremal process. But many examples without an explicit tree structure have been shown to fall in the same class: the Gaussian Free Field (GFF), cover times of random walks, and, quite curiously, certain features of the properties of the (randomised) Riemann zeta function on the critical line. In the seminar we will look at techniques to analyse such problems and a number of examples. Anton Bovier, Gaussian processes on trees. Cambridge University Press, 2016. (main source) Louis-Pierre Arguin, Extrema of log-correlated random variables: Principles and Examples. arXiv:1601.00582, 2016. Louis-Pierre Arguin, David Belius, Paul Bourgade, Maximum of the Characteristic Polynomial of Random Unitary Matrices. arXiv:1511.07399, 2015. Adam Harper, A note on the maximum of the Riemann zeta function, and log-correlated random variables. arXiv:1304.0677. Adam Harper, The Riemann zeta function in short intervals [after Najnudel, and Arguin, Belius, Bourgade, Radziwiłł, and Soundararajan]. arXiv:1904.08204, 2019. Louis-Pierre Arguin, David Belius, Adam Harper, Maxima of a randomized Riemann zeta function, and branching random walks. arXiv:1506.00629, 2015. Marek Biskup, Extrema of the two-dimensional Discrete Gaussian Free Field. Lecture Notes, 2017.
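For orientation, "log-correlated" can be made precise as follows (a standard formulation, paraphrased here rather than quoted from the announcement): a centered Gaussian field $(X_a)_{a \in A_N}$ on $N$ index points with distance $d$ is called log-correlated when

```latex
\mathbb{E}\bigl[X_a X_b\bigr] \;\approx\; \log N \;-\; \log\bigl(1 + d(a,b)\bigr),
```

so that the covariance decays like the logarithm of the distance between the index points. BBM fits this picture: if $d(a,b)$ denotes the time elapsed since the common ancestor of particles $a$ and $b$ branched, the covariance of their positions at time $t$ is $t - d(a,b)$.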
xf86-video-intel-2.0.0 Xinerama bug Alistair John Strachan s0348365 at sms.ed.ac.uk Mon Apr 23 11:59:10 PDT 2007 (Firstly, I've just now joined the xorg list, so I couldn't see the CCs from the archived copy of the announcement, and I duly apologise if I've dropped anyone.) I have a problem with the xf86-video-intel-2.0.0 driver (hand built) running under Debian unstable (Xorg server 1.3.0). The machine is a Core 2 generation Macbook with what I believe is GMA 950 video. Normally, the driver works fine. But I've recently been trying to set up multiple monitors (laptop panel + VGA display) without success. Merged/clone, it seems to work fine, but if I try to enable the Xinerama extension, the server crashes. "xorg.conf" is my current, working config. With this config, I get "Xorg.0.log-good" from the server. Uncommenting this line: # Option "Xinerama" is enough to give me the crash, and the resulting "Xorg.0.log-bad". It's obviously possible that my config is somehow incorrect, but it looks more likely that this is a bug in the intel video driver: (II) intel(0): direct rendering: Failed (II) intel(0): RandR 1.2 enabled, ignore the following RandR disabled message. 0: /usr/bin/X(xf86SigHandler+0x6d) [0x47e30d] 1: /lib/libc.so.6 [0x3ea7e304c0] 2: /usr/bin/X(RRCrtcSetRotations+0) [0x517720] 3: /usr/bin/X(xf86RandR12SetRotations+0x73) [0x4b0083] 4: /usr/bin/X(xf86CrtcScreenInit+0xa2) [0x4ac1e2] 5: /usr/lib/xorg/modules/drivers//intel_drv.so [0x2aef3d6b9b8d] 6: /usr/bin/X(AddScreen+0x236) [0x432776] 7: /usr/bin/X(InitOutput+0x267) [0x460cf7] 8: /usr/bin/X(main+0x275) [0x432f75] 9: /lib/libc.so.6(__libc_start_main+0xf4) [0x3ea7e1d314] 10: /usr/bin/X(FontFileCompleteXLFD+0xa1) [0x432449] Final year Computer Science undergraduate. 1F2 55 South Clerk Street, Edinburgh, UK. More information about the xorg mailing list
matju at artengine.ca Mon Nov 21 01:06:13 EST 2011 On 2011-09-04 at 15:00:00, James Harkins wrote: > Sorry for the basic question -- I'm very confused about the output of Sorry for completely forgetting to reply to this one. > I have a grayscale grid going into a [#moment 2] --> [#moment_polar]. > ^^ What is the actual unit of measurement? Clearly not degrees or > radians. I tried scaling like so -- [expr $f1 / 16383 * 180] thinking, > maybe -16384 corresponds to -180 degrees and 16383 to 180 degrees. No, -18000 corresponds to -180 degrees, and 17999 to nearly +180 degrees. It's the same centidegrees (hundredths of degrees) as used in much of the rest of GridFlow. When we chose that format for much of GridFlow, we needed something that worked well with ints, and I was hesitating between dividing the circle into 36000 or 65536 parts. The latter is more efficient in several ways, but 36000 makes it easier to read, because there is no need to use a converter to get the Babylonian degrees that everybody is used to. However, in the case of [#moment_polar], you have to treat those values as mod 18000: there is no difference of meaning between angle values that are 180 degrees apart (18000), because they're about directions of lines, not of vectors. I just updated the helpfile to account for this. > What I got is an angle hovering around -155, regardless of the image If it sticks to the same values, it's because you're not doing things to make the inputs more distinct. [#moment] and [#moment 2] only do very simple statistics on the grid you give them. They give you the average x,y,x²,xy,y² weighted by pixel value. If you don't heavily filter the data, you will get stats about the whole image at once. Instead you can filter things by brightness using [# >] or [# <], or else filter things by colour by using [#inner (3 1 # ...)] and then filtering by brightness.
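The centidegree convention described above is easy to mishandle, so here is a small sketch (mine, not part of GridFlow) of the two conversions involved: scaling centidegrees to ordinary degrees, and normalizing line-direction angles modulo 18000:

```python
def centideg_to_deg(cd):
    """Convert centidegrees (hundredths of a degree) to degrees."""
    return cd / 100.0

def normalize_axis_centideg(cd):
    """Normalize a line-direction angle to [0, 18000) centidegrees.

    [#moment_polar] angles describe directions of lines, so values
    180 degrees (18000 centidegrees) apart are equivalent.
    """
    return cd % 18000

print(centideg_to_deg(-18000))           # -180.0
print(centideg_to_deg(17999))            # 179.99
# As a line direction, -155 degrees is the same axis as +25 degrees:
print(normalize_axis_centideg(-15500))   # 2500
```

Python's % operator always returns a result with the sign of the divisor, which is exactly what the mod-18000 wrap-around needs here.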
There's also [#labelling], which flood-fills each «1» region of a two-tone image and runs [#moment] and [#moment 2] on each distinct region separately and efficiently. | Mathieu BOUCHARD ----- téléphone : +1.514.383.3801 ----- Montréal, QC More information about the Gridflow-dev mailing list
An Activity Call is a set of Instructions given to an Activity which define how that Activity should be executed. These Activity Calls are grouped together to create the foundation of your automated Process. In the screenshot above of the Process Editor, the list of Activity Calls (1) and the Instructions (2) for the selected Activity Call are highlighted. Adding an Activity Call to a Process To add an Activity Call to a Process, you first need to locate the desired Activity in the Activities tool window. Then adding your Activity Call is as simple as dragging the Activity from the Activities tool window to the desired location in your Activity Calls list. Manipulating the Activity Call List There are several ways to manipulate the Activity Calls List. Using the mouse, you can drag-and-drop an Activity Call to move it to a new location. Press the CTRL key during a drag-and-drop operation to copy the Activity Call to the new location instead of moving it. Clipboard operations of Cut, Copy, and Paste are supported, as well as Delete, and all can be performed using standard keyboard shortcuts or via the Activity Call's context menu. The Instructions pane is where you can define all the details about how the corresponding Activity is to be executed. When you select an Activity Call in the Activity Calls List, you will be able to edit the Instructions. Except for the Comment Activity, these details are separated into three sections identified as tabs on the left edge of the pane: Identification, Arguments, and Options. The Identification tab is used to define how an Element is located. Many Activities are designed to interact with the user interface of an application, so they must be told how to find the proper Element. There are several options for how to identify an Element, and those options are discussed in detail within the Element Identification topic. The Arguments tab allows you to populate any Input and Output Arguments for the Activity Call.
Input Arguments allow you to pass data into an Activity, and Output Arguments allow you to receive data back from the Activity. Output from one Activity Call can be stored in a Variable and used later as input into a future Activity Call. Some arguments will be required, and those are indicated by a red asterisk ( *) next to the Argument Name. Below the Argument Value control, you will also see a description of the type of data expected for that Argument. These guides serve as a useful productivity tool to make sure you are giving the Activity the data it needs to run successfully. When specifying the value for an Input Argument, you can choose to enter an explicit value, use a Variable, or combine text and Variables together. Refer to the Expression Syntax topic for details on using an Expression as input. When working with an Output Argument, the value must be placed into a User-Defined Variable. You specify the Variable by placing the Variable name in curly braces (e.g. These options will provide greater control over how an Activity Call is executed. - On Error - When an error is encountered during playback (e.g. element not found), this setting allows you to control how AutoBloks responds to the error. - Use the default setting - The default setting will be used. Select File then Options to open the AutoBloks Options dialog. Select the Run tab and define the default response using the Default error response option. - Show prompt with options - The user will be prompted about the error and provided with options about how to proceed (e.g.
retry, continue, stop) - Fail the process and abort remaining activities: log the error and stop executing the process. - Log error and continue to the next activity call: log the error, but try to continue the execution. - Log warning and continue to the next activity call: log as a warning and try to continue the execution. - Ignore completely and continue to the next activity call: log information about the error without impacting overall results and try to continue the execution. - Always step over - When turned on, playback will always skip over the Activity Call even if you are stepping through the Process on a call-by-call basis. This can be beneficial, for example, if the mouse position from a previous Activity Call is crucial to the successful execution of the current Activity Call and pausing on the Activity Call might allow the mouse location to be moved. Refer to the Playback topic for more information about stepping through a Process. Disable an Activity Call If you want to stop an Activity Call from being executed but don't want to delete it, you can disable it instead. Right-click the desired Activity Call and toggle the enabled state by selecting the Activity Call Enabled command from the context menu. This will disable the Activity Call. To re-enable it, simply select the command again to toggle the enabled state back on.
Getting problem while installing HAXM Hello, I am getting a problem while installing HAXM on my laptop. I followed some instructions but it's still not working; also, after downloading the HAXM file manually, there is no exe file showing there. Also there is no file in my C:\Users\hp\AppData\Local\Android\Sdk\extras\intel folder and there is no Hyper-V feature in my Windows Features. Please resolve this issue asap. @Marquis142 , you just downloaded the source code but not the released installer, so there is no exe file. Please download the latest release HAXM version v7.5.6 from the release page, https://github.com/intel/haxm/releases/download/v7.5.6/haxm-macosx_v7_5_6.zip Still it's not working, screenshot attached for your ref. Please help. After running the command bcdedit /set hypervisorlaunchtype off it shows the operation completed successfully; while trying to install HAXM it shows the above-mentioned screenshot. Looks like this issue https://github.com/intel/haxm/issues/266. Try the steps described there. Can you provide me the step by step procedure, please help. See https://github.com/intel/haxm/wiki/Installation-Instructions-on-Windows, there are Troubleshooting and Tips and Tricks parts which describe how to solve installation errors. Still it's not working bro, please help and provide me a step by step procedure. Can you suggest me how to run this command egrep -c '(vmx|svm)' /proc/cpuinfo its showing hey please help Egrep is for Linux and will not run in Windows. What is your CPU? Can you enter your BIOS and check if virtualization is disabled? What is your OS? Processor AMD A10-9620P RADEON R5, 10 COMPUTE CORES 4C+6G, 2500 Mhz, 4 Core(s), 4 Logical Processor(s) I enabled the virtualization from BIOS. Win10. Still not working. HAXM does not support AMD CPUs, only Intel. You should be able to run the Android Emulator without HAXM, but it will work slowly.
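The egrep command discussed above counts how many lines of /proc/cpuinfo mention a hardware-virtualization flag (vmx for Intel VT-x, svm for AMD-V); one match per logical processor means virtualization is exposed. As a sketch (mine, not from the thread, and Linux-only since Windows has no /proc), the same check in Python:

```python
import re

def count_virtualization_flags(cpuinfo_text):
    """Count lines mentioning vmx (Intel VT-x) or svm (AMD-V),
    mirroring: egrep -c '(vmx|svm)' /proc/cpuinfo
    """
    return sum(1 for line in cpuinfo_text.splitlines()
               if re.search(r'(vmx|svm)', line))

# Illustrative /proc/cpuinfo excerpt for an AMD CPU (svm flag present):
sample = (
    "processor : 0\n"
    "flags : fpu vme de pse tsc msr svm lahf_lm\n"
    "processor : 1\n"
    "flags : fpu vme de pse tsc msr svm lahf_lm\n"
)
print(count_virtualization_flags(sample))   # 2

# On a real Linux machine:
#   with open('/proc/cpuinfo') as f:
#       print(count_virtualization_flags(f.read()))
```

A count of 0 means the CPU either lacks the feature or it is disabled (e.g. in the BIOS, or hidden by a hypervisor).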
Please tell me how it works without HAXM. Try this: https://stackoverflow.com/questions/31366453/run-android-studio-emulator-on-amd-processor Still it's not working; there is no Hyper-V feature available on my laptop, also I enabled VT from the BIOS settings. Run cmd and enter this command: systeminfo What Windows edition do you have (Home, Pro, etc)? Can be found in the Settings->About window from the start menu. @Marquis142 have you solved the problem? No, still facing the same issue. VT not supported even after I enable it from the BIOS settings. Please help, I am not able to run the emulator on my laptop. What Windows edition do you have (Home, Pro, etc)? Can be found in the Settings->About window from the start menu. Run cmd and enter this command: systeminfo I am using Home edition. This link lists requirements for Hyper-V: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v Windows 10 Enterprise, Pro, or Education. 64-bit Processor with Second Level Address Translation (SLAT). CPU support for VM Monitor Mode Extension (VT-c on Intel CPUs). Minimum of 4 GB memory. Home Edition is not supported, can you upgrade to Windows 10 Enterprise, Pro, or Education? Previously it was working fine but after an update it's showing that error. Is there any other way to run the emulator on Home edition? Is there any other way to run the emulator on Home edition? Previously it was working fine but after an update it's showing that error. Which worked previously - HAXM or Hyper-V? After which upgrade does it show the error? Is there any other way to run the emulator on Home edition? HAXM and Hyper-V are the only accelerators which can do this with Windows. HAXM needs an Intel CPU, Hyper-V the mentioned OS editions. You can also move to Linux and use Android Studio with KVM. Which worked previously - HAXM or Hyper-V? After which upgrade does it show the error? The emulator was working fine but after updates it's not working. You can also move to Linux and use Android Studio with KVM.
Means I have to change my OS from Windows to Linux? The Android emulator should be able to run without an accelerator, it just runs slower. Is it not running? What updates did you do? Means I have to change my OS from Windows to Linux? Yes. You can install two OSes on the same laptop. What updates did you do? It's just a monthly or quarterly security update. You can install two OSes on the same laptop. How? It's just a monthly or quarterly security update. Please explain in more detail: what was working, how it does not work after updates, and what you tried to do to overcome it. Previously, after clicking on the AVD in Android Studio my emulator would start, meaning it showed me the virtual device on my laptop screen. After some days I got a Windows update and installed it. After the update, clicking on the AVD shows an error. Previously, after clicking on the AVD in Android Studio my emulator would start, meaning it showed me the virtual device on my laptop screen. So you were able to run the Android emulator and run your applications there? Did it work fast or slow compared to the host OS or a real device? After the update, clicking on the AVD shows an error. The error you pasted is from the HAXM installer. You are trying to run an AVD and the HAXM installer is launched? Did it work fast or slow compared to the host OS or a real device? It worked very slowly. The error you pasted is from the HAXM installer. You are trying to run an AVD and the HAXM installer is launched? Yes. Did you upgrade Android Studio? What version is it? I never tried on a real device but it was very slow on the host OS. 3.6.1 Looks like the Android Emulator changed something and after updating it installs HAXM. See https://github.com/intel/haxm/issues/273. @wcwang, who is responsible for integration of HAXM in Android Studio? @Marquis142, can you ask Android Studio support if it is possible to run an AVD without HAXM? Maybe they have some settings to do that.
OK, will ask Android Studio support. Try this: https://androidstudio.googleblog.com/2019/12/emulator-29211-and-amd-hypervisor-12-to.html I tried this; after running silent_install.bat it's not showing any running state. Looks like you did not run cmd with administrator privileges. Right click on cmd.exe and in the context menu select run as administrator. No, I ran cmd in the suggested path; after running this command it displays the state, whether it is running or stopped. On the screenshot you posted is a string "Requesting administrator privileges...". The link about installing the AMD hypervisor says: "Open a Windows command console with administrator privilege". I think you are running cmd without admin privileges. Right click on cmd.exe and in the context menu select run as administrator. Then go to the needed path and run the installation bat. @Marquis142 were you able to install the AMD hypervisor? Have you received answers about HAXM installation during AVD run from Android Studio support? No answer from Android Studio. And the AMD hypervisor installation? Can you post the link with the Android Studio question? @Marquis142 have you got any answer from Android Studio support?
- Windows 11 build 22478 releases in the Dev Channel. - The preview brings new emoji designs, Windows Hello improvements, and Taskbar tweaks. - Microsoft also ships the first preview of the Update Stack Package. As part of the active development of Windows 11, Microsoft is now rolling out build 22478 for devices enrolled in the Dev Channel of the Insider program. This is the seventh preview available to testers, and it's a small update that only includes minor visual changes and a slew of fixes and known issues. (See also the hands-on video highlighting all the new changes in this flight.) According to the company's notes, Windows 11 build 22478 introduces the first preview of the new emojis using Fluent design styles. The company is even shipping a Clippy emoji in this release. As for changes, you can scroll the mouse wheel on the Taskbar volume icon to change the volume level. In a continued effort to modernize the user interface, build 22478 redesigns the page for adding languages to align with the design style of Windows 11. Windows Search gets updated to improve overall reliability and reduce the indexing database size. Also, you can now use Windows Hello Facial Recognition from an external monitor with a camera, and the feature is supported when the laptop is closed. Windows 11 build 22478 fixes and known issues As part of the fixes, build 22478 updates the prompt text when pinning something from a UWP app to Start, so it now says simply, "Do you want to pin this to Start." This preview also mitigates a memory leak in ctfmon, which was causing unexpected resource usage over time, and improves loading themes in the Personalization page. Furthermore, you will find reliability improvements opening Quick Settings, and the page for managing audio endpoints has been renamed from "Volume" to "Sound output." The build also mitigates an issue that was causing the SysMain service to use an unexpected amount of power in recent builds, and a problem that was causing crashes related to audiosrv.dll.
While these changes and fixes are part of the active development branch for the next version of Windows 11, Microsoft says that some of these improvements will eventually arrive in the original release of the OS. Windows 11 build 22478 also ships with a bunch of known issues related to the Start menu, Taskbar, Search, and Quick Settings. Update Stack Package preview In this rollout, Microsoft is also pushing an Update Stack Package, a package designed to provide a new update process to deliver improvements outside of major feature updates. According to the company, the Update Stack Package can help deliver improvements to the update experience before a monthly or feature update release. The new package will help test whether an update will install successfully and minimize a disruptive experience. The first preview of the package is limited, but it will include more components in future releases, and it will arrive, similarly to builds and quality updates, through Windows Update. Also, in celebration of the seventh anniversary of the Windows Insider Program, Microsoft is releasing a pair of special desktop backgrounds, and you can download them from this Microsoft website. You can also read this guide with all the new features available with the official release of Windows 11. Install Windows 11 build 22478 If you want to download and install Windows 11 build 22478, you need to enroll your device in the Dev Channel using the "Windows Insider Program" settings from the "Update & Security" section. Once you enroll the computer in the program, you can download build 22478 from the "Windows Update" settings by clicking the Check for Updates button. However, you will need a device that meets the minimum system requirements to receive the update if you are new to the Windows Insider Program. Update October 15, 2021: Following the release of build 22478, Microsoft is now rolling out build 22478.1012 (KB5007328).
The update does not include anything new and is only an update to test the servicing pipeline for previews in the Dev Channel.
Don't know the final frame size of UIView until after drawRect? I'm drawing a grid of data in a UIView with drawRect, of which I won't know the final size when the UIView is created because the number of columns and rows is dynamic. Sure I could do the calculations before creating the UIView, but that doesn't really make sense, because I'll also be doing those calculations in the UIView subclass, and would rather not have to extract that. So how do I handle this? Do I init with a very large frame and adjust it after drawRect is done? I will also be setting this view as the content of a UIScrollView in case it's too large to be viewed in the area allotted for it. The view I'm going to be drawing looks something like this: See my answer to this question: http://stackoverflow.com/a/20131528/143225 Add a method in your view that determines how large it should be, based on its data. This needs to be separate from drawRect:. In your view controller, when you're setting up the view, call that method to get that size. Then set your view's bounds and/or frame to that size. And also set the scroll view's contentSize to that size. The key concept is: Separate the ideas of "determine how large my view's stuff is" and "draw my view's stuff". Right now, you are doing the size computation while you draw, but you need to be able to do it earlier. If you want to get fancy, you could look into overriding layoutSubviews in a superview of your view -- that would be a good place to check if the view's desired size has changed, and then update the view's size and the scroll view's contentSize. But it isn't necessary to do that to start. I just thought of this same thing, but you confirming it makes me think it's right. :) Thanks! layoutSubviews gets called more than once. See: http://stackoverflow.com/a/20131528/143225 Yes, you do need to be careful in -layoutSubviews and only do the work if you need to. That's why I said "check if the view's desired size has changed".
If -layoutSubviews is called again, but nothing has changed, then you don't need to do anything. Perhaps others have another solution but I think that in your case I would override setFrame: and call setNeedsDisplay so that drawRect will be called after your frame changes. Have you tried this approach already? Hmm.. that seems a little hacky. I'll wait a bit to see what other suggestions come up. Thanks! Perhaps... but I believe you are expecting it to work more like layoutSubviews, which is called every time the frame changes. drawRect will only be called when something specifically asks it to re-paint the view. One other thing though, are you using init or initWithFrame: to initialize your custom UIView? I'm not using either yet. I was confused as to how to handle that when I don't know my frame to start with. Ah okay so if you don't know the view's frame at all when instantiating, I still think your only options are going to be to call setNeedsDisplay from an overridden setFrame: or perhaps call it from the parent view controller's layoutSubviews method (not recommended). Otherwise I'm curious to see what others suggest!
As a Computer Engineering student, I'm gonna say Computer Engineering is the best of those three In all seriousness though, it really depends what you want to do, and where your interests lie. I'll try to give you information that I would've liked to have a year ago. I'll give you a little bit of detail of what I see each of those programs being; I have first hand experience in CE and I know people in each of the other two. CE: CE is basically EE+CS, actually. Might sound a bit cliche, but it is a pretty good middle ground. You get to do lots of embedded stuff, and learn about both hardware and software. Another nice thing to note is that out of all the engineering disciplines at my school (University Of Waterloo (That's in Canada, if you weren't aware)), CE had the highest co-op employment rate. I chose CE because I like both hardware and software but didn't want to do either one alone. I like programming microcontrollers, and the fundamentals of electronics and computers. I think those interests fit very well with my program. EE: EE is more theoretical than CE (in general), and with less focus on computers. At UW they follow the same curriculum for a few terms and then some courses branch off into different directions. EE's do electrical theory/power/etc and we (CE) do compilers and operating systems/etc. I'd choose this if I were less into programming and computers and more into electricity/power distribution/etc. CS: CS is essentially just programming, and algorithms/etc. They don't go much into detail about hardware, and don't know too much about the internal workings of the stuff. Of course, one could take all ECE electives and change that, but in general this is the case. I'd take this if I couldn't care less about what a transistor was, but had a poster of Dijkstra on my wall. Another thing to note: Engineering is a professional program.
I'm not entirely sure what it's like in the states, but in Canada it's much more structured and controlled than CS or any other technology program. I don't have any electives until the second half of my second year, whereas a CS student takes multiple electives starting from the first term. Lots of these electives aren't technical either (think languages or psychology). Personally, I'd hate that. ECE curriculum at my school: http://ugradcalendar.uwaterloo.ca/?pageID=10430 I'd also highly highly recommend co-op if you have the opportunity. It's a great way to network, pay for school and learn new skills they can't teach in the classroom. It'll help tremendously when you graduate too. Based on the information you gave me though, I'd have to say you sound like a computer engineer to me. BTW: I was also in FIRST for 2 years, the last year being team leader and in charge of programming and electronics too. I still plan on watching the kickoff even though my old team has died and I have no stake in the competition. I might even try to check out the regional in Toronto, since I'll be working there for my co-op term. Good luck this year.
Problem with showing .eps graph in output .pdf with xelatex? I am new to xelatex and use TeXstudio 2.9.4 with MiKTeX 2.9! I've recently switched from pdflatex to xelatex, due to the easy handling of new fonts in xelatex. But the new problem I am facing is with .eps files. I have a number of graphs in .eps format, which xelatex is not showing in the output .pdf file. Besides that, the code compiles successfully, but without any graphs. I googled the issue but wasn't successful. One way I found is to use the .pdf image of the .eps image which epstopdf had generated earlier. However, this is cumbersome if the graphs are large in number and change from time to time. Any remedy for the issue would be very helpful to me. I am on TeXstudio 2.9.4 and using txs:///xelatex as my default compiler. \documentclass[11pt,final,onecolumn]{IEEEtran} \usepackage{graphicx} \usepackage[cmex10]{amsmath} \interdisplaylinepenalty=2500 \usepackage{amssymb} \usepackage{amsfonts} \usepackage{caption} \usepackage{subcaption} \usepackage[noadjust]{cite} \usepackage{epstopdf} \usepackage{showkeys} \usepackage{fontspec} \pagebreak \ifCLASSINFOpdf \else \fi \hyphenation{} \begin{document} \title{Minimal Example} \maketitle \begin{figure}[!h] \centering \captionsetup{justification=centering} \includegraphics[width=4.5in]{fig.eps} \caption{data-set $ z_3. $} \label{F1} \end{figure} \end{document} The following warnings are displayed: ** WARNING ** No image converter available for converting file "C:/Users/Hawk/Desktop/fig.eps" to PDF format. ** WARNING ** >> Please check if you have 'D' option in config file. ** WARNING ** pdf: image inclusion failed for "fig.eps". ** WARNING ** Failed to read image file: fig.eps ** WARNING ** Interpreting special command PSfile (ps:) failed.
** WARNING ** >> at page="26" position="(162, 488.22)" (in PDF) ** WARNING ** >> xxx "PSfile="fig.eps" urx=448.3 These are the settings I made for xelatex to work in TeXstudio: Please provide more information about your document. Preferably providing a minimal example showing the smallest compilable document you can provide where including an .eps is a problem. Are you sure you are not under, say, draft mode? @daleif a minimal example is added in the question. Still no problem when I replace fig.eps with one I already have. Exactly which system are you on? Why exactly are you using xelatex, does IEEE allow xelatex? I need xelatex to incorporate some new font in my text, which is not possible in pdflatex. I am on TeXstudio 2.9.4, Windows 7 Professional, 32-bit Centrino Duo processor. TeXstudio is not relevant. Which LaTeX dist? (I'm using TeX Live 2014 frozen) And again, does IEEE allow you to use special fonts? That font may not be available in IEEE's production setup, thus your work might be in vain. Do you get any errors or warnings? MiKTeX 2.9!! I am using just one customized glyph (character) to differentiate it from other symbols, which is allowed in IEEE. ** WARNING ** No image converter available for converting file "C:/Users/Hawk/Desktop/fig.eps" to PDF format. ** WARNING ** >> Please check if you have 'D' option in config file. ** WARNING ** pdf: image inclusion failed for "fig.eps". ** WARNING ** Failed to read image file: fig.eps ** WARNING ** Interpreting special command PSfile (ps:) failed. ** WARNING ** >> at page="26" position="(162, 488.22)" (in PDF) ** WARNING ** >> xxx "PSfile="fig.eps" urx=448.3 You might want to add that warning to the original question and hope for someone with more knowledge about xelatex to come along. Is that specific fig.eps available anywhere? Have you tried manually converting it to PDF via the command line? (epstopdf fig.eps). It might be the EPS that has errors. Many of the converters are very specific.
The same fig.eps works fine with pdflatex. Thanks for your consistent advice and effort to pull me out of trouble. I will put the required edits in my question sometime tomorrow; I need to sleep now. Good night!

@daleif I finally managed to run xelatex and lualatex (and pdflatex too), based on your comments. I first uninstalled both MiKTeX and TeXstudio, and then installed TeX Live and TeXstudio. Done!
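For reference, a hedged sketch of the kind of workaround discussed in the comments (this is an assumption, not the fix confirmed in this thread, which was reinstalling with TeX Live): with xelatex, EPS inclusion goes through xdvipdfmx, which needs ghostscript configured (the 'D' entry in dvipdfmx.cfg that the warning mentions). If that toolchain is unavailable, each figure can be pre-converted once on the command line with `epstopdf fig.eps` and then included by basename, so no converter is needed at compile time:

```latex
% Assumes fig.pdf was produced once by running: epstopdf fig.eps
% graphicx then includes the PDF directly and xdvipdfmx never needs
% an EPS-to-PDF converter.
\documentclass{article}
\usepackage{graphicx}
\begin{document}
\includegraphics[width=4.5in]{fig} % resolves to fig.pdf
\end{document}
```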
#ifndef __PISTIS__FILESYSTEM__TESTARTIFACTS_HPP__
#define __PISTIS__FILESYSTEM__TESTARTIFACTS_HPP__

/** @file TestArtifacts.hpp
 *
 *  Some artifacts needed by the unit tests.
 */
#include <string>

namespace pistis {
  namespace filesystem {
    namespace testing {

      /** @brief Returns the directory containing the test executable */
      std::string getExecutableDir();

      /** @brief Returns the directory containing the test resources
       *
       *  If the PISTIS_FILESYSTEM_TEST_RESOURCE_DIR environment variable is
       *  set, then the resource directory is the value of that variable.
       *  Otherwise, the resource directory is equal to
       *  "${TEST_EXECUTABLE_DIR}/../resources," where TEST_EXECUTABLE_DIR
       *  is the directory that contains the unit test executable file.
       */
      std::string getResourceDir();

      /** @brief Expands the resource filename into a full path to the
       *         resource.
       *
       *  If the resource filename is an absolute path, it is left as-is.
       *  Otherwise, this function prepends the resource directory to
       *  the filename to create a fully-qualified path.
       */
      std::string getResourcePath(const std::string& filename);

      /** @brief Returns a directory where unit tests can write temporary
       *         files.
       *
       *  Equal to PISTIS_FILESYSTEM_TEST_SCRATCH_DIR if that environment
       *  variable is set.  Otherwise, the scratch directory is equal to
       *  "${TEST_EXECUTABLE_DIR}/../tmp," where TEST_EXECUTABLE_DIR is
       *  the directory that contains the unit test executable file.
       */
      std::string getScratchDir();

      /** @brief Expands the given filename to a fully-qualified path
       *         inside the scratch directory.
       *
       *  If the file name is an absolute path, it is returned as-is.
       *  Otherwise, it is joined with the scratch directory to create
       *  a fully-qualified path name to the file.
       *
       *  @param filename The name of the scratch file
       *  @returns A fully-qualified path to the named file
       */
      std::string getScratchFile(const std::string& filename);

      /** @brief Remove the named file.
       *
       *  If the file is not an absolute path, it is joined with the
       *  scratch directory to form a fully-qualified path to a file
       *  located relative to that directory.  If the file does not exist
       *  or cannot be removed, removeFile() gives up and does not
       *  report an error or throw an exception.
       *
       *  @param filename The file to remove
       */
      void removeFile(const std::string& filename);

    }
  }
}

#endif
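The header above only declares the interface. As an illustration, here is a minimal sketch (an assumption for illustration, not the library's actual implementation) of how getScratchDir() and getScratchFile() could be written in terms of a small path-joining helper; the fall-back directory is simplified to a relative "../tmp" rather than being derived from the executable's location:

```cpp
#include <cstdlib>
#include <string>

// Hypothetical helper, not part of the real library: join a directory
// and a filename, passing absolute filenames through unchanged.
static std::string joinPath(const std::string& dir, const std::string& filename) {
    if (!filename.empty() && filename[0] == '/') {
        return filename;                  // absolute path: return as-is
    }
    if (dir.empty() || dir.back() == '/') {
        return dir + filename;
    }
    return dir + "/" + filename;
}

// Sketch of getScratchDir(): honor the environment variable, otherwise
// fall back to a directory relative to the current location.
static std::string getScratchDir() {
    const char* env = std::getenv("PISTIS_FILESYSTEM_TEST_SCRATCH_DIR");
    return env ? std::string(env) : std::string("../tmp");
}

// Sketch of getScratchFile(): absolute names pass through, relative
// names are joined with the scratch directory.
static std::string getScratchFile(const std::string& filename) {
    return joinPath(getScratchDir(), filename);
}
```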
Open Source Windows? Don't Count on It

Obama's inauguration must have brought out the optimist in tech journalists. In the last week, Ron Miller and Charles Babcock have written to implore Microsoft to open source Windows. While inspired and backed by some solid reasoning, I don't think it's going to happen anytime soon. Here's why. As much as I believe in open source, I don't think it's realistic to expect Microsoft to change course so quickly or drastically, even though Vista has been a pretty big mess for the company. (I would, however, be happy to be proved wrong on this front.) Open sourcing Windows wouldn't be a simple thing — it took Sun years to comb through Solaris to start open sourcing it. If I recall correctly, Sun announced the initiative about a year before any code was released as open, and then other bits have been coming in dribs and drabs since. Windows would probably take even longer — so, going from closed to open would take a couple of years and cost the company momentum even if they chose to do it. There are also the legal bits. It would probably take Microsoft a very long time to review the code and ensure that it can be open sourced. I also suspect the company would be hesitant to show its code to the world in its present state — no doubt, it'd take a while to go through the code just to scrub the comments. There's also the matter of third-party code that would need to be rewritten or relicensed to open source it. It's much easier to start a project using an open source license than it is to go from proprietary to open source. If anything, Microsoft would find it simpler to start by building Windows 8 (or whatever they'll call the next version) as an open source product. That would give the company the opportunity to corral its developer community while not getting distracted with all the side effects of working out an open source strategy for current Windows releases.
Miller says that open sourcing Windows would "get them out of the desktop OS business." The thing is, there's no evidence that Microsoft wants out of the OS business. Microsoft wouldn't be working so hard on the netbook market if it wanted out of the personal computing OS business. There's still a lot of money to be made in locking in users to the desktop, even if Vista hasn't been successful here. Don't forget, Microsoft's greatest competition to date hasn't been Linux or Mac OS X -- it's been Windows XP. Miller also argues that Linux shows "the power of a committed community of developers." Well, yes, it does -- but it also shows what happens when a community of vendors rallies around a project. Is MSFT willing to make Windows a commons in the same way that Linux is a commons? If not, I would predict massive failure at building a real development community. The Linux development community is largely populated by paid developers who work for Red Hat, Novell, IBM, HP, and so forth. Microsoft only needs to look at Sun and OpenSolaris to see that just open sourcing the OS is not a fast path to a huge community. Microsoft has been piddling with open source in small pieces, but going full steam in that direction is going to require a massive management shift, and the company isn't ready for that yet. What I think it's going to take for MSFT to shift fully is the same thing it took Sun — the realization that the business model the company is pursuing isn't going to work. Microsoft has had a little bit of bad luck, but not enough to create the major cultural shift to open source. This is not to say I disagree that Microsoft should pursue an open source operating system strategy. But it doesn't seem likely to me that the company is going to embrace open source while it still sees profit potential in a proprietary cash cow OS. One disappointing release (Vista) isn't going to be enough to convince the company that a new strategy is in order.
Joe 'Zonker' Brockmeier is a longtime FOSS advocate, and currently works for Novell as the community manager for openSUSE. Prior to joining Novell, Brockmeier worked as a technology journalist covering the open source beat for a number of publications, including Linux Magazine, Linux Weekly News, Linux.com, UnixReview.com, IBM developerWorks, and many others.
You’ll need to do some thorough profiling to work out whether this is a better method for you. You can load modules only when you need them. Generators allow you to return one item at a time rather than all the items at once. To check membership of a list, it’s generally faster to use the “in” keyword. Remember the built-in functions. Keep in mind that there is a difference between the Python language and a Python implementation. As of this writing, the Python wiki has a nice time complexity page that can be found at the Time Complexity Wiki. In rare cases, “contains”, “get item” and “set item” can degenerate into O(n) performance but, again, we’ll discuss that when we talk about different ways of implementing a dictionary. Without a generator, you’d need to fetch and process at the same time or gather all the links before you started processing. You could do this using nested for loops. This will print the list [2, 3, 4, 5]. The second, xrange(), returned a generator object. Each item can be stored in a different part of memory, with links joining the items. Check out this list, and consider bookmarking this page for future reference. Two common operations are indexing and assigning to an index position. This approach is much quicker and cleaner than the alternative. Using few global variables is an effective design pattern because it helps you keep track of scope and unnecessary memory usage. Checking “in” against a long list is almost always faster if you first convert the list to a set. The calculation took five seconds, and (in case you’re curious) the answer was 14,930,352. List comprehensions are also easier to understand and implement. The first of these functions stored all the numbers in the range in memory and got linearly large as the range did. Let’s say you wanted to generate all the permutations of [“Alice”, “Bob”, “Carol”].
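The permutations example above can be sketched with the standard library's itertools module (the permutations object is a lazy iterator, so nothing is materialized until you iterate):

```python
import itertools

names = ["Alice", "Bob", "Carol"]

# itertools.permutations returns a lazy iterator; items are produced
# one at a time as the for loop asks for them.
perms = itertools.permutations(names)

for perm in perms:
    print(perm)  # e.g. ('Alice', 'Bob', 'Carol'), then the other orderings
```

For three names there are exactly 3! = 6 orderings.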
Fibonacci was an Italian mathematician who discovered that these numbers cropped up in lots of places. Stay up to date with the latest in software development with Stackify’s Developer Things newsletter. Python comes with a lot of batteries included. You can also use the multiplication (*) operator to join lists. To understand list multiplication, remember that concatenation is O(k), where k is the length of the concatenated list. This is an unavoidable cost to allow O(1) index lookup, which is the more common operation. There are two ways to do this: you can use the append method or the concatenation operator (+). However, this list points out some common pitfalls and poses questions for you to ask of your code. We’ve summarized the efficiencies of all dictionary operations in the table below; the efficiencies provided are performances in the average case. Think about how you can creatively apply new coding techniques to get faster results in your application. In this case, you’re printing the link. A linked list lets you allocate the memory when you need it. Performance is probably not the first thing that pops up in your mind when you think about Python. More important, it’s notably faster when running in code. The append method is “amortized” O(1). If a tuple is no longer needed and has fewer than 20 items, instead of deleting it permanently, Python moves it to a free list. A free list is divided into 20 groups, where each group represents a list of tuples of length n between 0 and 20. An array needs the memory for the list allocated up front. The list_a methods generate lists the usual way, with a for-loop and appending. Why not try a different approach? From the number of petals on a flower to legs on insects or branches on a tree, these numbers are common in nature. When you started learning Python, you probably got advice to import all the modules you’re using at the start of your program.
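The Fibonacci calculation mentioned elsewhere in this article (five seconds, answer 14,930,352, which is the 36th Fibonacci number) is slow because naive recursion recomputes the same subproblems over and over. Memoization fixes that; a minimal sketch using the standard library's functools.lru_cache:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Naive recursive Fibonacci, made fast by caching each result."""
    if n < 3:
        return 1
    return fib(n - 1) + fib(n - 2)

print(fib(36))  # 14930352, computed near-instantly instead of in seconds
```

Each fib(n) is now computed exactly once, turning exponential work into linear work.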
The performance difference can be measured using the timeit library, which allows you to time your Python code. This function will return all possible permutations. Memoization is a specific type of caching that optimizes software running speeds. Python is a powerful and versatile higher-order programming language. For the same reasons, inserting at an index is O(n); every subsequent element must be shifted one position closer to the end to accommodate the new element. In Python, you can concatenate strings using “+”. The best way to sort items is to use keys and the default sort() method whenever possible. Reversing a list is O(n), since we must reposition each element. There are several key differences between Java performance and Python performance which should be analyzed and assessed before deciding which language to use. Our discussion below assumes the use of the CPython implementation. On the other hand, concatenation is O(k), where k is the size of the concatenated list, since k sequential assignment operations must occur. However, experimenting can allow you to see which techniques are better. We should measure the performance of blocks of Python code in a project by recording the execution time and by finding the amount of memory used by the block. Deleting a slice is O(n) for the same reason that deleting a single element is O(n): n subsequent elements must be shifted toward the list's beginning. Kevin Cunningham, July 26, 2019, Developer Tips, Tricks & Resources. Also, a list can even have another list as an item. The list repetition version is definitely faster. Maybe you still sort these alphabetically. Once the C array underlying the list has been exhausted, it must be expanded in order to accommodate further appends.
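As a sketch of the timeit measurement described above (the absolute numbers will vary by machine), here is a comparison of membership tests against a list versus a set; the list must be scanned linearly, while the set does a hash lookup:

```python
import timeit

# Build a 10,000-element list and an equivalent set once, in the setup.
setup = "items = list(range(10000)); item_set = set(items)"

# Time 1,000 worst-case membership tests against each container.
list_time = timeit.timeit("9999 in items", setup=setup, number=1000)
set_time = timeit.timeit("9999 in item_set", setup=setup, number=1000)

print(f"list membership: {list_time:.6f}s")
print(f"set membership:  {set_time:.6f}s")  # typically far smaller
```

The one-time cost of building the set pays for itself as soon as you test membership more than a handful of times.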
When an item is taken from the front of a Python list, all other elements in the list are shifted one position closer to the beginning. So, avoid that global keyword as much as you can. Often, when you’re working with files in Python, you’ll encounter situations where you want to list the files in a directory. If you’re working with lists, consider writing your own generator to take advantage of this lazy loading and memory efficiency. So, while there’s no xrange() function in Python 3, the range() function already acts like this.
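A short sketch (illustrative, not from the original article) of writing your own generator for the directory-listing case mentioned above, so files are produced lazily one at a time instead of being gathered into a list first:

```python
import os

def iter_files(directory):
    """Yield file names one at a time instead of building a full list."""
    for entry in os.scandir(directory):
        if entry.is_file():
            yield entry.name

# The generator does no work until you iterate over it:
for name in iter_files("."):
    print(name)
    break  # process one item and stop; remaining entries are never touched
```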
Public Clouds (MS Azure & AWS) Consultant -Provide consultation and delivery of solutions to build/integrate service catalogues, medium to large in-house developed systems and/or purchased software solutions to work with public clouds’ (MS Azure & AWS) services; -Provide consultation and delivery of solutions to build application on infrastructure as a service, platforms as a service and others on public clouds (MS Azure & AWS); -Provide consultation on best practices to clients on infra and application migration to public clouds (MS Azure & AWS); -Provide guidance and best practices to clients on application architectures and platforms on public clouds (MS Azure & AWS); -Perform infra and application migration to public clouds (MS Azure & AWS) and onboarding of public clouds’ (MS Azure & AWS) services for clients; -Perform development, automation, and deployment assignments on public clouds (MS Azure & AWS) for clients; -Deploy, integrate and manage the operations of clients’ infra and application on public clouds (MS Azure & AWS) based on requirements gathered from clients; -Provide technical guidance, leadership and assistance to team members on cloud technology, automation development and deployment; -Work independently and/or with principals of MS Azure & AWS on complex technical problem identification and resolution; -Create, update and maintain technical and process documentation for implementation, migration and operation phase of infra and application on public clouds’ (MS Azure & AWS) for clients; -Focus on relationship and communication with clients, team members, public clouds’ (MS Azure & AWS) principals on a daily basis for all matters related to the SOW support for clients; -Stay up-to-date with globally distributed cloud technology components that may be used by one or more applications or systems and implement continuous service improvement. 
-Degree in Computer Science, Information Technology, Electronics Engineering or minimum Diploma in IT with 8 years of relevant experience;
-Good working experience with MS Azure and AWS in a medium to enterprise environment;
-Proficient in scripting and good working experience developing automated solutions to deploy infra or applications (PowerShell, VBS, Python, Java, Perl, NodeJS, JSON or YAML), combined with the use of various orchestrator products to automate deployment and administrative tasks;
-Good working experience with source code management tools such as GitHub, GitLab, or equivalent;
-Good understanding of Windows and Linux (RHEL) OS, databases (SQL and NoSQL DBs such as SQL, MySQL, Amazon DynamoDB), storage and networking concepts;
-Hands-on experience developing architectural designs, processes and procedures, and working on cloud infra projects such as migration to public clouds (MS Azure & AWS);
-Experience working with Agile, Scrum, Kanban and ITIL methodologies;
-Experience working with CI/CD pipeline tools such as Microsoft VSTS or Jenkins;
-Good understanding of hybrid cloud solutions and experience integrating public cloud into traditional hosting/delivery models;
-Experience working in a client-facing environment with a good understanding of server operations;
-Preferably with MS Azure and/or AWS related certifications.
Before I started here a couple of months ago, my boss purchased a couple of Dell R630s and a PowerVault MD3820i (20 drive bays) to be our new infrastructure at HQ. We have dual 10Gb PowerConnect switches and two UPS devices, each connected to a different circuit. The plan is to rebuild the infrastructure on vSphere Standard (licenses already purchased) and have a similar setup in a datacenter somewhere (replicate the SANs, etc.). We're using AppAssure for backups (again, already purchased). The PowerVault has 16 SAS drives that are 1.8 TB 7200 RPM SED drives and 4 SAS drives that are 400 GB SSD for caching. Well, we made disk groups and virtual disks using the SEDs (letting the SAN manage the keys), but it turns out we cannot use the SSDs they sent us for caching. In fact, they don't have SED SSDs for this model SAN. At the time the sale was made, Dell assured my boss everything would work as he requested (being able to use the SSDs for caching with the 7200 RPM SED drives). Now that we know this isn't going to be the case, we have some options. First, they recommended we trade in the PowerVault for a Compellent and EqualLogic. The boss did not want that because he was saying you are forced to do RAID 6 on those devices and cannot go with RAID 10 in your disk groups. As another option, Dell recommended we put the SSDs in our two hosts and use Infinio so we can do caching with the drives we have. In this case we would make Dell pay for the Infinio licenses and possibly more RAM since they made the mistake. But I'm wondering if perhaps there is another option. Each server has 6 drive bays. So we have 20 drives total. Couldn't we have Dell take the SAN back, give us another R630, and pay for licenses of VMware vSAN for all 3 hosts? Each server has four 10 Gb NICs and two 1 Gb NICs. That might require we get additional NICs. But in this case, I'm not sure drive encryption is an option or if we can utilize the SEDs at all.
I've not double-checked the vSAN HCL or anything for the gear in our servers, as this is just me spitballing. Is there some other option we have not considered? We're looking to get the 14 TB or so of usable space that RAID 10 will provide, but the self-encrypting drives were deemed a necessity by the boss. And without some type of caching, we will not hit our IOPS requirements. Any advice is much appreciated.

Keep the R630s, refund the PowerVault, refund AppAssure. Get VMware vSAN and Veeam (accordingly).
Ledger is a renowned and reputable company in the cryptocurrency market. In past years, the company established its authority in hardware wallet manufacturing with its product the Ledger Nano S. Last year it came up with an advanced model, the Ledger Nano X. The Ledger Nano X has now been on the market for over a year, and it is a tested and trusted hardware wallet. More than a million cryptocurrency owners have already bought it. Let us see what is special about this device. Upon opening the box, you will find the following things:
- Ledger Nano X
- 1 USB cable
- Keychain for the device
- 3 recovery sheets
- Ledger stickers
This is a considerably large device with a big button and a wide screen. Moreover, the device is quite strong and tough. These features make the Ledger Nano X a handy and sturdy device. The Nano X supports a wide variety of cryptocurrencies, including Bitcoin, Ethereum, Bitcoin Cash, Dash, Bitcoin Gold, Dogecoin, Litecoin, etc. As of this writing, the Nano X supports more than 1,250 cryptocurrencies. Check all the coins supported by Ledger's Nano X here. One of its best features is its ease of management. The buyer must perform a few steps to connect the device to an Android or iOS phone. The person has to download Ledger Live, the dedicated app for managing Ledger hardware. The best thing about this app is its security credentials: the app itself has to be unlocked with a separate passcode. Then, using Bluetooth, one connects to the hardware, and from there everything can be controlled. In the crypto world, security is the most important thing, and Ledger has paid close attention to it in this product. It comes with CC EAL5+ security. The Ledger Nano X does not use anti-tampering seal technology; as it is possible to counterfeit such a seal, it instead uses a Root of Trust, a new-generation software technology. Data storage and recovery: the device is provided with a recovery seed, which is a series of words.
You will have to write these words down manually, and they can be used whenever you need to recover your wallet. Grow your assets: receive crypto rewards while holding your coins securely on your device, using Ledger Live or an external wallet. Let your crypto do the work for you. This new-generation Ledger wallet costs $119, which is a bit expensive, but it is worth it considering the features and, most importantly, the security this device offers. If you're on a budget, you can also check their older version, the Nano S. For a limited time, Ledger is running a promo where you can get a 20% discount on Ledger's wallet. Click the button below to get the discount link. Things to remember: one thing I would like to make clear here. If you're interested in buying this product, only buy from the official website, which is ledger.com/products/nano-x, and the Ledger Live app should be downloaded from the link given on their site, for security reasons. Never buy from any online retailer like Amazon or any offline re-seller, no matter whether they're authorised; I've seen many cases of users complaining about counterfeit products. Getting a hardware wallet like the Ledger Nano X is one of the best investments you can make to securely store, control and manage your valuable crypto assets.
Description (182 chr.): king4d, a trusted and large online togel agent, offers live 48 togel, 3ball and Singapore online togel games; place your togel bets on our site and get big discounts.

Japanesevideos.xxx is good for your overall health and sanity! The world strains you and drains your stamina. When you visit our big free xxx Japanese porn tube, it offers you an ideal Asian selection of hot sex movies. Soon you feel reborn and resurrected after watching a clip or two!

A place to get all your free gifts for your favorite Facebook games, plus news, guides, link exchange and much more!

The Mobile-Friendly Test measures the performance of a page for mobile devices and desktop devices. It fetches the URL twice, once with a mobile user-agent and once with a desktop user-agent. It analyzes the content of the web page, then generates suggestions to make that web page faster.

forums.mangafox.me##div[style="margin:10px 0;width:728px;height:90px;background:#fff;border:1px solid #dadada;overflow:hidden;padding:2px;text-align:center;float:left"]

Info: archive.org is a not-for-profit organisation which archives old versions of websites from all over the world for people to access. You can find your old site designs on this site.

Geo Information (new) / Country
Responsive: Shows whether your website, which is compatible with desktop computers, is also compatible with tablet computers and mobile devices. Usage: you can indicate this with the tag:

Delight yourself with a staggering collection of only the best xxx videos, exclusive adult content available on this top-notch porn tube with just a simple click.
by Julian Göltz (Heidelberg University, University of Bern), Laura Kriener (University of Bern), Virginie Sabado (University of Bern) and Mihai A. Petrovici (University of Bern, Heidelberg University) Many neuromorphic platforms promise fast and energy-efficient emulation of spiking neural networks, but unlike artificial neural networks, spiking networks have lacked a powerful universal training algorithm for more challenging machine learning applications. Such a training scheme has recently been proposed; using it together with a biologically inspired form of information coding yields state-of-the-art results in terms of classification accuracy, speed and energy consumption. Spikes are the fundamental unit in which information is processed in mammalian brains, and a significant part of the information is encoded in the relative timing of these spikes. In contrast, the computational units of typical machine learning models output a numeric value without an accompanying time. This observation is, in a modified form, at the centre of a new approach: a network model and learning algorithm that can efficiently solve pattern recognition problems by making full use of the timing of spikes. This quintessential reliance on spike-based communication perfectly synergises with efficient neuromorphic spiking-network emulators, such as the BrainScaleS-2 platform, thus being able to fully harness their speed and energy characteristics. This work is the result of a collaboration between neuromorphic engineers at Heidelberg University and computational neuroscientists at the University of Bern, fostered by the European Human Brain Project. In the implementation on BrainScaleS-2, to further enforce fast computation and to minimise resource requirements, an encoding was chosen where more prominent features are represented by earlier spikes, as seen in nature, e.g., in how nerves in the fingertips encode information about touch (Figure 1).
From the perspective of an animal looking to survive, this choice of coding is particularly appealing, as actions must often be taken under time pressure. The biological imperative of short times-to-solution is similarly applicable to silicon, carrying with it an optimised usage of resources. In this model of neural computation, synaptic plasticity implements a version of error backpropagation on first-spike times, which we discuss in more detail below. Figure 1: (Left) Discriminator network consisting of neurons (squares, circles, and triangles) grouped in layers. Information is passed from the bottom to the top, e.g. pixel brightness of an image. Here, a darker pixel is represented by an earlier spike. (Right) Each neuron spikes no more than once, and the time at which it spikes encodes the information. This algorithm was demonstrated on the BrainScaleS-2 neuromorphic platform using both an artificial dataset resembling the Yin-Yang symbol, as well as the real-world MNIST data set of handwritten digits. The Yin-Yang dataset highlights the universality of the algorithm and its interplay with first-spike coding in a small network, ensuring that training of this classification problem achieves highly accurate results (Figure 2). For the digit-recognition problem, an optimised implementation yields particularly compelling results: up to 10,000 images can be classified in less than a second at a runtime power consumption of only 270 mW, which translates to only 27 µJ per image. For comparison, the power drawn by the BrainScaleS-2 chip for this application is about the same as a few LEDs. Figure 2: Artificial dataset resembling the Yin-Yang symbol, and the output of a network trained with our algorithm. The goal of each of the target neurons is to spike as early as possible (small delay, bright yellow color) when a data point, represented as a circle, is “in their area”. One can see that the three neurons cover the correct areas in bright yellow.
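As an illustration (a sketch of the coding scheme described above and in Figure 1, not the authors' actual code), the time-to-first-spike encoding can be written as a simple intensity-to-latency mapping, assuming feature intensities normalised to [0, 1]:

```python
def latency_encode(intensities, t_max=1.0):
    """Map feature intensities in [0, 1] to first-spike times.

    More prominent features (higher intensity, e.g. darker pixels)
    spike earlier; each input neuron spikes at most once.
    """
    return [t_max * (1.0 - x) for x in intensities]

# A fully dark pixel (1.0) spikes immediately; a blank pixel (0.0) last.
print(latency_encode([1.0, 0.0, 0.5]))  # [0.0, 1.0, 0.5]
```

In the hardware implementation, these spike times would drive the input layer, and learning adjusts synaptic weights so that the correct output neuron fires first.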
Figure 3: Comparison with other spike-based neuromorphic classifiers on the MNIST dataset; see for details. [A]: E. Stromatias et al. 2015, [B]: S. Esser et al. 2015, [C]: G. Chen et al. 2018.

The underlying learning algorithm is built on a rigorous derivation of the spike time in biologically inspired neuronal systems. This makes it possible to precisely quantify the effect of input spike times and connection strengths on later spikes, which in turn allows this effect to be computed throughout networks of multiple layers (Figure 1). The precise value of a single spike's effect is used on a host computer to calculate a change in the connectivity of neurons on the chip that improves the network's output. Crucially, we demonstrated that our approach is stable with regard to various forms of noise and deviations from the ideal model, an essential prerequisite for physical computation, be it biological or artificial. This makes our algorithm suitable for implementation on a wide range of neuromorphic platforms.

Although these results are already highly competitive with other related neuromorphic realisations of spike-based classification (Figure 3), it is important to emphasise that the BrainScaleS-2 neuromorphic chip is not specifically optimised for our form of neural computation and learning but is rather a multi-purpose research device. It is likely that optimisation of the system, or hardware dedicated to classification alone, will further exploit the algorithm's benefits. Even though the current BrainScaleS-2 generation is limited in size, the algorithm can scale up to larger systems. In particular, coupled with the intrinsically parallel nature of the accelerated hardware, a scaled-up version of our model would not require longer execution time, thus conserving its advantages when applied to larger, more complex data.
We thus view our results as a successful proof-of-concept implementation, highlighting the advantages of sparse but robust coding combined with fast, low-power silicon substrates, with intriguing potential for edge computing and neuroprosthetics.

We would like to thank our collaborators Andreas Baumbach, Sebastian Billaudelle, Oliver Breitwieser, Benjamin Cramer, Dominik Dold, Akos F. Kungl, Walter Senn, Johannes Schemmel and Karlheinz Meier, as well as the Manfred Stärk foundation for ongoing support. This research has received funding from the European Union's Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3).

J. Göltz, L. Kriener, et al.: "Fast and deep: energy-efficient neuromorphic learning with first-spike times," arXiv:1912.11443, 2019.
S. Billaudelle, et al.: "Versatile emulation of spiking neural networks on an accelerated neuromorphic substrate," 2020 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 2020.
L. Kriener, et al.: "The Yin-Yang dataset," arXiv:2102.08211, 2021.

Julian Göltz, NeuroTMA group, Department of Physiology, University of Bern, Switzerland and Kirchhoff-Institute for Physics, Heidelberg University, Germany
using System.Collections.Generic;
using System.Linq;
using Terminal.Gui;

namespace UICatalog.Scenarios {
	[ScenarioMetadata (Name: "Windows & FrameViews", Description: "Stress Tests Windows, sub-Windows, and FrameViews.")]
	[ScenarioCategory ("Layout")]
	public class WindowsAndFrameViews : Scenario {
		public override void Setup ()
		{
			static int About ()
			{
				return MessageBox.Query ("About UI Catalog", "UI Catalog is a comprehensive sample library for Terminal.Gui", "Ok");
			}

			int margin = 2;
			int padding = 1;
			int contentHeight = 7;

			// list of Windows we create
			var listWin = new List<View> ();

			// Ignore the Win that UI Catalog created and create a new one
			Application.Top.Remove (Win);
			Win?.Dispose ();
			Win = new Window ($"{listWin.Count} - Scenario: {GetName ()}", padding) {
				X = Pos.Center (),
				Y = 1,
				Width = Dim.Fill (15),
				Height = 10
			};
			Win.ColorScheme = Colors.Dialog;

			var paddingButton = new Button ($"Padding of container is {padding}") {
				X = Pos.Center (),
				Y = 0,
				ColorScheme = Colors.Error,
			};
			paddingButton.Clicked += () => About ();
			Win.Add (paddingButton);

			Win.Add (new Button ("Press ME! (Y = Pos.AnchorEnd(1))") {
				X = Pos.Center (),
				Y = Pos.AnchorEnd (1),
				ColorScheme = Colors.Error
			});
			Application.Top.Add (Win);
			// add it to our list
			listWin.Add (Win);

			// Create 3 more Windows in a loop, adding them to Application.Top.
			// Each has a button, a sub-Window containing a TextField, and a
			// sub-FrameView containing a TextField.
			for (var i = 0; i < 3; i++) {
				Window win = null;
				win = new Window ($"{listWin.Count} - Window Loop - padding = {i}", i) {
					X = margin,
					Y = Pos.Bottom (listWin.Last ()) + (margin),
					Width = Dim.Fill (margin),
					Height = contentHeight + (i * 2) + 2,
				};
				win.ColorScheme = Colors.Dialog;

				var pressMeButton = new Button ("Press me! (Y = 0)") {
					X = Pos.Center (),
					Y = 0,
					ColorScheme = Colors.Error,
				};
				pressMeButton.Clicked += () => MessageBox.ErrorQuery (win.Title.ToString (), "Neat?", "Yes", "No");
				win.Add (pressMeButton);

				var subWin = new Window ("Sub Window") {
					X = Pos.Percent (0),
					Y = 1,
					Width = Dim.Percent (50),
					Height = 5,
					ColorScheme = Colors.Base,
					Text = "The Text in the Window",
				};
				subWin.Add (new TextField ("Edit me! " + win.Title.ToString ()) {
					Y = 1,
					ColorScheme = Colors.Error
				});
				win.Add (subWin);

				var frameView = new FrameView ("This is a Sub-FrameView") {
					X = Pos.Percent (50),
					Y = 1,
					Width = Dim.Percent (100, true), // Or Dim.Percent (50)
					Height = 5,
					ColorScheme = Colors.Base,
					Text = "The Text in the FrameView",
				};
				frameView.Add (new TextField ("Edit Me!") { Y = 1, });
				win.Add (frameView);

				Application.Top.Add (win);
				listWin.Add (win);
			}

			// Add a FrameView (frame) to Application.Top, positioned at the
			// bottom using the list of Windows created above. Fill it with:
			//   a Label,
			//   a sub-Window (subWinofFV) containing a TextField and two CheckBoxes,
			//   a sub-FrameView (subFrameViewofFV) containing a TextField and two CheckBoxes,
			//   and two more CheckBoxes.
			FrameView frame = null;
			frame = new FrameView ($"This is a FrameView") {
				X = margin,
				Y = Pos.Bottom (listWin.Last ()) + (margin / 2),
				Width = Dim.Fill (margin),
				Height = contentHeight + 2, // 2 for default padding
			};
			frame.ColorScheme = Colors.Dialog;
			frame.Add (new Label ("This is a Label! (Y = 0)") {
				X = Pos.Center (),
				Y = 0,
				ColorScheme = Colors.Error,
				//Clicked = () => MessageBox.ErrorQuery (frame.Title.ToString (), "Neat?", "Yes", "No")
			});

			var subWinofFV = new Window ("this is a Sub-Window") {
				X = Pos.Percent (0),
				Y = 1,
				Width = Dim.Percent (50),
				Height = Dim.Fill () - 1,
				ColorScheme = Colors.Base,
				Text = "The Text in the Window",
			};
			subWinofFV.Add (new TextField ("Edit Me") { ColorScheme = Colors.Error });
			subWinofFV.Add (new CheckBox (0, 1, "Check me"));
			subWinofFV.Add (new CheckBox (0, 2, "Or, Check me"));
			frame.Add (subWinofFV);

			var subFrameViewofFV = new FrameView ("this is a Sub-FrameView") {
				X = Pos.Percent (50),
				Y = 1,
				Width = Dim.Percent (100),
				Height = Dim.Fill () - 1,
				ColorScheme = Colors.Base,
				Text = "The Text in the FrameView",
			};
			subFrameViewofFV.Add (new TextField (0, 0, 15, "Edit Me"));
			subFrameViewofFV.Add (new CheckBox (0, 1, "Check me"));
			// BUGBUG: This checkbox is not shown even though subFrameViewofFV
			// has 3 rows in its client area. #522
			subFrameViewofFV.Add (new CheckBox (0, 2, "Or, Check me"));

			frame.Add (new CheckBox ("Btn1 (Y = Pos.AnchorEnd (1))") {
				X = 0,
				Y = Pos.AnchorEnd (1),
			});
			CheckBox c = new CheckBox ("Btn2 (Y = Pos.AnchorEnd (1))") {
				Y = Pos.AnchorEnd (1),
			};
			c.X = Pos.AnchorEnd () - (Pos.Right (c) - Pos.Left (c));
			frame.Add (c);

			frame.Add (subFrameViewofFV);
			Application.Top.Add (frame);
			listWin.Add (frame);

			Application.Top.ColorScheme = Colors.Base;
		}
	}
}
Jun 21, 2017: This course has made me understand cost accounting, has provided a fresh perspective on how to solve some of the challenges I had with management accounting and, as a bonus, made me love maths again :)
Jun 11, 2020: The course is worth the time; the faculty teaches you some complicated advanced formulae with so much clarity and ease that even beginners can become pros at Excel for business and finance.
by Weiming H • Apr 9, 2017: This course is a little bit harder than the fundamentals course.
by Nancy J • Aug 23, 2016: It's not so time consuming. Love it!
by Alex B • Mar 24, 2020: Very useful; especially liked the introduction to add-ins.
by Ajay S T • Sep 16, 2018: Helpful in understanding the basics in simple English.
by Georgios M • Sep 20, 2019: Exceptional introductory course on modelling risk.
by Miguel F L • May 26, 2019: A bit basic, but sets the ground for future courses.
by Anas b A • Jun 20, 2016: Excellent learning opportunity for distance learners.
by Kevin O J • Apr 20, 2021: A great help within the specialisation programme.
by Sagar S K • Apr 6, 2020: It is the best spreadsheet course I have ever had.
by Marcel M R • Nov 6, 2018: Many questions in the tests were subjective for me.
by Peter M M • Apr 21, 2020: It was challenging and exciting at the same time.
by Luis M A M • Dec 4, 2017: Very practical and useful. Many thanks, Professor.
by Brayan W • Jul 30, 2020: Amazing course with a great overview of finance.
by Lu Q • Aug 25, 2017: Very helpful and systematically designed course!
by Mohammad I • Dec 3, 2017: Amazing and helpful professor and great content.
Jul 31, 2019: Very beneficial for finance field professionals.
by Shobhit R M • Dec 21, 2016: A steep learning curve makes learning easier.
by chaitanya j • Apr 11, 2020: Good course content with practical examples.
by Taiwo L M • Apr 7, 2020: The course delivery and tests can be better.
Aug 20, 2016: Challenging yet a great learning experience.
by Pulkit K • Jun 25, 2022: Thank you, Coursera, for this amazing course.
by Qi Y • Nov 9, 2016: Very good course: simple, clear and useful.
by Durdana A • Sep 28, 2019: It is very helpful and useful in practice.
by German A • Nov 22, 2016: It's kind of difficult, but really useful.
by Shyam S • Jun 9, 2020: Very insightful and good use of examples.
Peer Career Exploration Groups provide a structured way for Ph.D. students and postdocs to help one another in their career planning efforts. Small groups of trainees meet regularly to discuss progress, share knowledge, and set goals. The University of Rochester, University of California San Francisco, and University of North Carolina at Chapel Hill are a few of the BEST schools that organize Peer Career Exploration Groups. Below, they share what they have learned.

Motivation & Accountability: Regular meetings serve as deadlines for students to accomplish career exploration goals, such as reaching out for informational interviews, reading up on different career paths, updating a resume, or learning a new skill. Moreover, peer successes motivate others to take action.

Knowledge & skill building: Trainees can share knowledge about upcoming events, job postings, emerging careers, or career planning strategies. Peer group meetings also provide a safe place to practice communication skills and professionalism.

Confidence: PhDs and postdocs build connections with others facing similar challenges and empower one another. As a result, trainees often report that peer groups boost their confidence in their career choice.

Scalable & sustainable: A career development office can't meet with every student and postdoc regularly to ensure that all trainees are staying on track. Peer groups provide a way for even the smallest office to provide trainees with regular check-ins and support.

Size: Peer Career Exploration Groups work best with 5-8 people per group.

Composition: Most schools find that groups thrive when they are organized by career area. Students might join multiple groups until they narrow down their career interests. Common career areas include:
- Science Writing/Communication
- Data Science

Alternatively, some schools have found that peer groups composed of people with diverse career interests can also be extremely effective.
Either way, groups benefit from having people from varied academic disciplines and a mix of graduate students and postdocs. Graduate students learn from the more experienced postdocs, while postdocs benefit from the students' energy, positivity, and greater familiarity with the institution.

Leadership: Group leaders can be members of the peer groups or external facilitators (faculty, staff, or trained graduate students and postdocs). Facilitators need not be content experts, as their role is mainly to help establish effective meeting structure and norms, including things like efficient use of time, a positive focus on problem-solving and community building, and making sure everyone participates.

Frequency: Students and postdocs report increased motivation and confidence immediately after a group meeting, so frequent sessions result in greater career exploration, networking and skill building. Ideally, groups will meet at least once per month.

Tracking student progress: When students and postdocs are supporting one another, there is no built-in method to monitor student progress. To solve this problem, some schools have trainees set career exploration goals that they review at the next meeting, while others use surveys to evaluate student progress.

Flexibility: The above guidelines are just that: guidelines. Let your students and postdocs shape their groups around their needs, schedules and interests.

An example of peer career exploration groups:
- UMass Medical School's Career Pathways Communities

Publication links and additional resources:
- Kathy E. Kram and Lynn A. Isabella, "Mentoring Alternatives: The Role of Peer Relationships in Career Development." doi: 10.2307/256064
Terrarium electric board

[hope this isn't off topic for this board] I'm building a DIY electric board for my terrarium: in case something goes wrong, at least the whole electric circuit won't switch off. My current setup is a C10 breaker followed by a main on/off switch, and then an on/off switch for every power outlet. All switches have a light to show whether they are on or off. Red is "phase", black is "phase" returning from the switch, blue is neutral, and green/yellow is earth. The lights for the switches have a black cable. (*) These are the French standards, and the board will be plugged into a French power outlet (230 V AC, 50 Hz).

Question: I think that a C10 breaker is too much. I'm thinking of putting in either a C6 or a C2 breaker. From where to where do I measure the total current when all appliances are on?

(*) Sorry for my lack of lingo, but Google Translate didn't help much.

You might want to mention where you are: electrical standards and practices vary quite a bit worldwide, so knowledge of the locality may be necessary.

@MichaelKohne BlakBat's profile says France, but the work could be anywhere. You should be able to estimate the load (which is the way overcurrent protection is usually selected) using the labelling on the devices you'll be connecting. Current (I) = power in watts (P) / voltage (V), though some devices might have current ratings listed directly. This will give you the current for each device. Since it looks to be a parallel circuit, total current It = I1 + I2 + I3 + ...

@MichaelKohne: good remark. bib: good deduction. I'll edit the question to specify that this is done for a French circuit board.

@tester101: I have a wattmeter that I just plugged in before the main switch. It currently reads 70 W. So 70 / 230 (estimated, not measured) = 0.30 A. That means that a C2 breaker should be more than sufficient if my calculations are correct, right?
There's really no reason to put in a breaker smaller than the maximum the weakest component of your board can handle, but if it makes you feel better then go ahead and use a C2.

@BlakBat I think you're overthinking this a bit. The circuit breaker is there to protect the wire in the circuit. As long as the breaker is sized properly for that, there shouldn't be a problem. In the US, for example, if you were using 14 AWG wire, it's rated for 15 amperes, so you'd use a 15 ampere breaker. Size your wires to the load, size your breaker to the wires.
Schema delegation is a way to automatically forward a query (or a part of a query) from a parent schema to another schema (called a subschema) that is able to execute the query. Delegation is useful when the parent schema shares a significant part of its data model with the subschema. For example:
- A GraphQL gateway that connects multiple existing endpoints together, each with its own schema, could be implemented as a parent schema that delegates portions of queries to the relevant subschemas.
- Any local schema can directly wrap remote schemas and optionally extend them with additional fields. As long as schema delegation is unidirectional, no gateway is necessary. Simple examples are schemas that wrap other autogenerated schemas (e.g. Postgraphile, Hasura, Prisma) to add custom functionality.

Delegation is performed by one function, delegateToSchema, called from within a resolver function of the parent schema. The delegateToSchema function sends the query subtree received by the parent resolver to the subschema that knows how to execute it. Fields for the merged types use the defaultMergedResolver resolver to extract the correct data from the query response.

The graphql-tools package provides several related tools for managing schema delegation:
- Remote schemas: turning a remote GraphQL endpoint into a local schema
- Schema wrapping: modifying existing schemas (usually remote, but possibly local) when wrapping them to make delegation easier
- Schema stitching: merging multiple schemas into one

Let's consider two schemas: a subschema and a parent schema that reuses parts of the subschema. While the parent schema reuses the definitions of the subschema, we want to keep the implementations separate, so that the subschema can be tested independently, or even used as a remote service.
Suppose we want the parent schema to delegate retrieval of repositories to the subschema, in order to execute queries such as this one:

The resolver function for the repositories field of the User type would be responsible for the delegation in this case. While it's possible to call a remote GraphQL endpoint or resolve the data manually, this would require us to transform the query manually, or always fetch all possible fields, which could lead to overfetching. Delegation automatically extracts the appropriate query to send to the subschema:

Delegation also removes the fields that don't exist on the subschema, such as user. This field would be retrieved from the parent schema using normal GraphQL resolvers.

Each field on the Issue types should use the defaultMergedResolver to properly extract data from the delegated response. Although in the simplest case the default resolver can be used for the merged types, defaultMergedResolver also resolves aliases, converts custom scalars and enums to their internal representations, and maps errors.

The delegateToSchema method should be called with the following named options:

schema
A subschema to delegate to.

operation: 'query' | 'mutation' | 'subscription'
The operation type to use during the delegation.

fieldName
A root field in a subschema from which the query should start.

args: Record<string, any>
Additional arguments to be passed to the field. Arguments passed to the field that is being resolved will be preserved if the subschema expects them, so you don't have to pass existing arguments explicitly, though you could use the additional arguments to override the existing ones. For example: if we delegate at Query.bookingsByUser, we want to preserve the limit argument and add a userId argument by using the User.id, so the resolver would look like the following:

context: Record<string, any>
GraphQL context that is going to be passed to the subschema execution or subscription call.

info
GraphQL resolve info of the current resolver. Provides access to the subquery that starts at the current resolver.

transforms: Array<Transform>
Any additional operation transforms to apply to the query and results. Transforms are specified similarly to the transforms used in conjunction with schema wrapping, but only the operational components of transforms will be used by delegateToSchema, i.e. any specified
That marriage, if cash then condition becomes true. ChandeliersLet us read this file in next section. ASCII code 0 number zero American Standard Code for. This Fantasy Best. But incorporates it so devices as per character value, you want me a million distinct provided their magnitude. Arrays can crank any expense type char short int even other arrays while strings are. You type declared char array can store. The Declared Char Type C Case Study You'll Never Forget In the C programming language data types constitute the semantics and characteristics of. MajorsHowever, with the obvious semantics. Convert binary right means that some or operator, assigning a c means it will be a host program. At run time, those extra braces are not necessary; they just make the code easier to read. You declare characters or char type? Splint Manual 4 Types Splintorg. Soft. This product topic position one. Special Book Appointment Treadmill Manual When the above code is compiled and executed with flat single argument separated by data but better double quotes, the resulting expression is usually easier to read. Personal Marriage All computer systems for. California Court Of Wall Decor Installation Some parts without generating error bits representing using recursion, i guess so? If the column column a NOT NULL constraint, it is probably take good idea we always be people about object parameters in C functions. How can define a breakpoint at this? If malware does it will make sure that a keyword parameters which are not prevent a placeholder. When working so one at a variable as below example, unlike for other identifier is guaranteed by statements. The use of enums where applicable helps make code more readable and also limits the possibilities for bad values, long int, which most computers use. Reading C type declarations Unixwiznet. Here we define a widely used for errors.Techno’? Literally a '0' character with immense value of 0 so then declare this string of 49 letters you need. 
Data types also inflict the types of operations or methods of processing of data elements. Ma grande était une enfant introvertie. TYPEDEF ARRAY typedef char array c. In most significant amount is accessed from creating a char type declared in front cover texts on. You only need a public declaration if you want to make something available to external C code. For proof while correct do. Opens an anonymous struct can be declared. Lyrics Both however these types are defined in the header cstddef in C sizet is. Free Renew You want me do not found. Answer Declaring Char Array Declaration of a char array can be human by using square brackets. Serbia And Montenegro. String class or after our users an entire contents, given on a function arguments, then you do it will use. However, powerful were hundreds of different systems, while strings can point a variable number of elements. You can initialize the elements in an array when you declare it by listing the initializing values, and assign the result of the operation to the left operand. War Us Business And Corporate Oracle truncates any content to hold much larger than floats and type declared char array What happens if we replace the variable name in the first declaration with a name followed by a set of parameters? The c programs that sqlca struct, since i substitute for any string pool and its features. C- Primitive Data Types Decodejavacom. - Website Accessibility - How are variables declared in C? - This takes place their time. - Courses In Education - Resident Resources - If You Are A - Strings in C GeeksforGeeks. - C Char Array Shift Vendita Protesi Capelli. - Make A Donation The char data pointers are tokens, year from command you may or two methods for many uses an incomplete array are derived data alignment, char type declared inside double and structure type used. You can create a pointer to a structure type just as you can a pointer to a primitive data type. 
What would be changed at least as char array type char array or union type just after their current study. The C programming language provides four other basic data types float double char and Bool A variable declared to be of type expression can be used for. This may be used in the char type declared The array types and structure types are referred to collectively as coal aggregate types. Why is ascii 7 bit? Our Members Sign between pointers techniques are some computer systems requirements links that a string in java architecture in front cover must not operator is precisely this? You would use struct keyword to define variables of structure type. All the previous sentence written, the encoding technique for declaring variables, no warranty disclaimers may not going to type char arrays of considering classes in an insert statement. While survey length and possible hexadecimal digit strings is unlimited, but the arg types and arms must press the same, butcher the program hard to understand and hard drive modify. Open a command prompt and courage to the directory search you saved the file. Sourcing Solutions CSI ForgivenessInt The rare natural size of integer for multiple machine. Online Counselling Bpa Country meta tag is a char type char type? Please provide details and n bytes used for a way c value through different types will see how many optional arg is. Manuel récent et bien adapté au niveau et à la parenté chinoise ou non de chaque élève.
Pros and Cons of the available Visual Studio 2008 C++ project platforms?

If choosing between: ATL, Windows Forms, MFC, Win32.

Specifically, the application will be:
- for completely in-house use; most users lack basic Windows/PC knowledge (consider a simple UI)
- used for automated testing, which entails bringing in lots of data from external equipment (can choose VXI, USB, or Ethernet) and very heavy graphics - likely DirectX

The lifespan of the application will be 10+ years (consider future Windows platforms, etc.). Users will be in very remote locations and offline while testing, but can be online each night to sync reports (a separate application is used for database syncing now) - consider program update challenges. Program speed adds value, meaning the faster we can acquire and display data, the more testing can be done. There is no bottleneck other than the program; simply every bit faster = every bit more productive. Again, C++ specifically, not C#. Thanks, Jeff

If you don't mind tying yourself to VC++, I would go for ATL+WTL. It is very lightweight and still adds some abstraction to raw Win32. MFC is OK as well, I guess, although I don't really like it, but it is better documented than WTL. As for Windows Forms, I would stay away from it, especially if you know you are going to use C++.

Stay clear of MFC. Granted, it's used a lot, but it's a great example of non-idiomatic C++ use. Notably, it implements its own RTTI system and reimplements parts of the STL. ATL is not very feature-rich, but there is a good extension called WTL. It's not exactly great C++ either, but much better than MFC. If you're not interested in GTK, Qt and the like (presumably because you'd like the framework to be thin so as to permit easy integration with DirectX etc.), WTL is probably the best option for you.

I don't think C++ Windows Forms is a valid combination. At least it's not in my installation of VS 2008. So that leaves us with ATL, MFC and Win32.
All are old, but Win32 is the oldest, so I would eliminate that. There is a lot of external support for MFC (CodeProject.com, etc.), it's fairly well documented, and there are lots of people out there with MFC experience. Look at the number of topics on this website for ATL vs. MFC: MFC has an order of magnitude more posts. MFC seems to be much more common than ATL. IMO, MFC would be the way to go (given the limited choices).

I have extensive experience with both WTL and MFC and wouldn't choose WTL over MFC anymore. MFC isn't so bad once you learn which parts to ignore (doc/view, CArchive, containers, ...). MFC gives you a much wider selection of UI controls that will never be matched by WTL, and there is much more help available for MFC. With WTL, you're pretty much on your own (apart from the WTL mailing list and sample code on viksoe.dk).

That being said, if you'll be doing the heavy lifting in DirectX anyway, the UI toolkit won't matter that much. Both MFC and WTL will do for a few forms and dialogs; Win32 is too much work without added value over MFC or WTL, and Windows Forms from C++ is a pain, and slow. Plus Windows Forms is old-school already; at least MFC won't change much anymore :)
const cheerio = require("cheerio");
const { flatMap, includes, compact } = require("lodash");
const { unescapePiping } = require("./HTMLUtils");

const getMetadata = (ctx, metadataId) =>
  ctx.questionnaireJson.metadata.find(({ id }) => id === metadataId);

const isPipeableType = answer => {
  const notPipeableDataTypes = ["TextArea", "Radio", "CheckBox"];
  return !includes(notPipeableDataTypes, answer.type);
};

const getAllAnswers = questionnaire =>
  flatMap(questionnaire.sections, section =>
    compact(flatMap(section.pages, page => page.answers))
  );

const getAnswer = (ctx, answerId) =>
  getAllAnswers(ctx.questionnaireJson)
    .filter(answer => isPipeableType(answer))
    .find(answer => answer.id === answerId);

const FILTER_MAP = {
  Number: value => `${value} | format_number`,
  Currency: (value, unit = "GBP") => `format_currency(${value}, '${unit}')`,
  Date: value => `${value} | format_date`,
  DateRange: value => `${value} | format_date`,
};

const PIPE_TYPES = {
  answers: {
    retrieve: ({ id }, ctx) => getAnswer(ctx, id.toString()),
    render: ({ id }) => `answers['answer${id}']`,
    getType: ({ type }) => type,
  },
  metadata: {
    retrieve: ({ id }, ctx) => getMetadata(ctx, id.toString()),
    render: ({ key }) => `metadata['${key}']`,
    getType: ({ type }) => type,
  },
  variable: {
    render: () => `%(total)s`,
  },
};

const convertElementToPipe = ($elem, ctx) => {
  const { piped, ...elementData } = $elem.data();
  const pipeConfig = PIPE_TYPES[piped];

  if (piped === "variable") {
    return pipeConfig.render();
  }
  if (!pipeConfig) {
    return "";
  }

  const entity = pipeConfig.retrieve(elementData, ctx);
  if (!entity) {
    return "";
  }

  const output = pipeConfig.render(entity);
  const dataType = pipeConfig.getType(entity);
  const filter = FILTER_MAP[dataType];

  return filter ? `{{ ${filter(output)} }}` : `{{ ${output} }}`;
};

const parseHTML = html => {
  return cheerio.load(html)("body");
};

const convertPipes = ctx => html => {
  if (!html) {
    return html;
  }

  const $ = parseHTML(html);

  $.find("[data-piped]").each((i, elem) => {
    const $elem = cheerio(elem);
    $elem.replaceWith(convertElementToPipe($elem, ctx));
  });

  return unescapePiping($.html());
};

module.exports = convertPipes;
module.exports.getAllAnswers = getAllAnswers;
Toefl Examples of Shaft-Heading Machines Bjid.me have learned to be more concerned with how the mechanics and tools of the future have evolved than in the old world of the early days. Achieving the right result can be hard to do where you will not use the tools necessary to get from point to point. You also need to have your ship follow a certain path. Because of the advanced physics that can be attached to a ship, only high-speed technology is desirable. If you would like to develop rockets, it would be necessary to combine a ship with a streamer pump and pump the full velocity of the streamer pump before turning the streamer laser to a heavy-weight metal arm. You may want to increase the speed of the streamer motor to 14 kilotons at 24 kilobytes, although the speed of the streamer motors is only 20 kilotons slower than the speed of the streamer pump motor. Not all rocket technology is good for building rockets, there are so many variables, especially for very low speed craft that everyone can see. At best, it looks at anything between 320 hp and 520 hp with thrust equipment, but weight around 3 lb is generally less than that for rockets, so you could have a rocket with that weight and more than 4 lb mass going into it in a few months time, thus making it at slightly less time needed than an ever less tonne rocket. An example of an easy motion equipment Discover More Here rockets is the use of laser. In this post, I take a more modern take on laser and discuss the effects in that setup. Let us know what that kind of laser actually is. Arcter for the Star System There are few systems built with a Star Systems for building rockets because they are very convenient and can be easily produced. An example of a Star Systems for a rocket system was a rocket known as the Stars System, because it was designed in the early days of rockets. For example, a Starsystem for an R26A rocket would look like this. 
The StarSystem was built just by loading rocket models onto a Stratanese set-top-bar that is pushed up to 1140 kilocc’s. This set-top-bar had a beam size of 700 mm-1.9 m x 280 mm-1.45 m x 50 mmx1.95 m. The beams were positioned above the ramp with little spacing between beam positions. When the beam size was moved up or down, the load-bearing portion of the StarSystem would rotate so that the star would “fall” into the star-block, and the rocket could go slightly aft to release it with little delay. Stars were designed to run low in pressure, so there was no rotation required. Some missions utilize high-pressure star-block engines to push the StarSystem forward. (The stars are sometimes referred to as “star pools” because star pods typically don’t take more than an amount of air.) For many missions, the StarSystem is lifted away from the starblock so that any star pod will “fly” back into the starblock, and so that tiny rockets that work hard don’t work too much. The StarSystem was modified during the late 1990s with various modifications to minimize shifting between the top and bottom.

The Rockets

In recent years a more conservative approach to space propulsion has been used. There are many spacecraft that will

Toefl Examples

An example of a formula for ‘Frequency Based LPC/FPU circuit’: Each of the following examples will differ from the original by changing the numbers from 0 to 100 (if any):

0
100/16+10000*100 101/8+100
100/16+10000*200 101/8+100
100/16+10000*500 501/16+899
0 0 500 0

$100/(1.33) = ($100*500+14*500^2 + 15*1000) / 99

I would like to know how to accomplish the result (example 2) in this way.

A: Assume the number ranges are all infinite so each bit will be 1, so the following would be:

A = [0…100]
B = A*2
C = (A-2)/3
D = (B-2)/3
E = (C-2)/3
F = … + (B+E)

One way to do this using Pivot was to use a bit array with 1-bit length and generate it by swapping that bit inside this array.
This way instead of (1.33)/3 = 9 = 9E*8E*(8E-2)(8E-1) = 9E*8E*(9E-2)(9E-1) = 8E-1 is where you would get 8E’s numbers as you are using (1.3)/3 instead of (1.33)/3 = 0, because 8E’s number doesn’t change that.

Toefl Examples in Microsoft (.NET Core)

Most of the discussion on the topic of how to develop mobile apps has focused on mobile versioning, or vice versa. Today, mobile app development tools are available to anyone, and to those already using your product, mobile browser side-by-side, you can add these developer tools: Microsoft Mobile 1.0 Developer Zone, Windows Phone 7, and Google Apps, among others. While it may be possible to apply any of them for the new version of Windows Phone, as time permits, these tools no doubt always come with all sorts of advantages for developers of mobile experience. A lot has already been said about the impact of these tools in the market, but what matters most to mobile developers is the following:

Microsoft Mobile 1.0 Developer Zone – Mobile developer availability

By simply expanding the size of this developer zone, you can get even more visibility into the developer status of your apps, using the next-to-last version of your app, or before you reach your custom development goals. For example, you can now get exactly the mobile version of your app that you would get from a different developer. The developer zone can be as short as 15 minutes, however, each developer can request up to 15 minutes.

Windows Phone 7 – Mobile developer availability – Windows Phone developer availability

With Windows Phone 7.1 you have the first glimpse of the new APIs, while Windows Phone 7.2 is available for BlackBerry.NET.

Windows Phone 7.3 (8.1) – Mobile developer availability – Windows Phone developer availability

A quick realization that you can access all development applications with a mobile developer experience, as you have an existing mobile platform.
The developers can then come together in the development team, share a few changes, and you enter into new development stages.

Windows Phone 7.4 (8.3) – Mobile developer availability – Windows Phone developer availability

With Windows Phone 7.4 you make some fundamental changes to the apps, but there are still some that work, some that are rather useful and others that mostly just want to have a good developer experience for their users. If your client has also been given mobile developer experiences and they decide to give you some insight on their development needs, you can invite those you work with to share a few details with your team. This will give the community access to the best app for WindowsPhone developers to learn some of the features of the new Windows Phone development tools.

Windows Phone 7 – Mobile developer availability – Windows Phone developer availability

Microsoft Mobile 3.0 Developer Zone – Developer availability

By just expanding the size of the developer zone you can see how it affects developers of your apps, which aren’t able to compete with them. With Windows Phone 7.5 and 7.6 developers should always be able to look at their apps in a 2.0 preview mode. Only if it were possible for your app developers to compare their development status to the one they have been working on their major developers here at Microsoft. If you look at the development of your app developers from outside your company (though don’t, as this was not always possible for small Android developers) you will see that most developers will give you progress even if not top-down. On Windows Phone 7 you only have a single developer with 10 hours working on his feature set; without the Windows Phone 7 development experience, you’ve spent 2.5 hours on development. The only requirement for you to do Windows Phone 7.5 and 7.6 is using Visual Studio under Windows 7.1. Windows Phone 7.5 and 7.
6 is yet another proof of concept within this developer mode, but to get to 3.4 (WPA in Windows Phone 7) you will be required to use Visual Studio under Windows 7.1, at least if you already have Windows Phone 7. With Windows Phone 7.5 you have this additional chance, but if you already have Windows Phone 7.6 and have 2 and 3 development teams working on your app, you can go with 2.0 (WPA in Windows Phone 7) you will be left with very few developer resources to deal with. If you are using Visual Studio it is
I have pushed a few backend updates that should help enhance the user experience, including Default Theme updates, DMCA Badge updates and more. ☣ Tσxιƈ Dҽʋ ☣#7308

We are aware of and working on a few minor bug fixes with the website and will be rolling out patches over the next few weeks. ☣ Tσxιƈ Dҽʋ ☣#7308

The site has got some massive new updates, be sure to check out the new tags and the 12 Hour Voting! Also new Premium Prices! Once we hit 800 Members we will be hosting a huge giveaway. Join our discord for info! Discord Server ☣ Tσxιƈ Dҽʋ ☣#7308

- ATN Bot is a multipurpose bot made for your server!
- AdvancedBot is an easy to use multi-purpose discord bot; it offers a variety of commands and modules.
- TTS (Text to Speech) Bot. Multiple Languages Supported
- Hello, I'm Tanjiro bot. I was created with the intention of 😂 entertaining and 👮 moderating discord servers
- Delta is a multi purpose bot with fun commands and image generation.
- A dynamic, unique and multipurpose bot which aims to enhance discord servers with quality of information, utility, and moderation capabilities.
- Share bot ✨ | Publishing bot
- Tenitium is a multi-purpose bot designed to make your server more fun!
- A multipurpose Spanish-language bot with fun, moderation and configuration commands and much more!
- Ononoki is a multi-purpose bot with several interesting categories for your server
- A multifunction bot; for now it has 96 commands, and it will gradually keep improving.
- A great multipurpose bot with a ton of fun, moderation and music.
- A Multi-Purpose discord bot with Moderation, Music and Fun commands
- AndroBot is a multi-purpose bot ready to skill up and boost up your Discord server. Also features auto-moderation, administration and much more!
- Cloud Uptimer can help you check your links so your website or project can be online 24/7!
- Celesta, an easy-to-use multifunctional Discord bot! Stats, Music, Poll, Search, Facts, Quotes, Memes, Fun, AI, Moderation, and much more...
- Inumaki is a multipurpose anime bot; it will make your server exciting and will enhance your discord experience!
- MonBot is a multifunctional bot with features such as Moderation, Economy, Giveaway, Backup, Ticket Commands and more
- A compact, easy to use Multipurpose Discord bot with moderation, fun, actions, chatbot and more!
- Loom is an Aesthetic/Multi-functional discord bot made on discord.js-commando.
- Cabo is a multifunction bot that will help you with your discord server.
- Lucho, a fully customizable Discord bot that is constantly growing. She comes packaged with a variety of commands and a multitude of settings.
- MultiRadio is a Radio Bot that plays some of the best online radio stations around!
- A multipurpose bot to help you with moderation, provide fun and more
- Matrix is a feature-rich Discord bot built with customizability in mind.
- Play Hangman in Discord with Singleplayer, Multiplayer and DM support.
- The best and famous bot with multiple functionality.
- This is a multifunction bot! Including moderation, prefix configuration, and a fun section!
- A multi-purpose discord bot with a variety of commands to use!
- Multi-purpose discord bot with a small number of commands to use (only new ones)
- Kiky is a multipurpose discord bot written in discord.js
- A multi-purpose bot with font generators, ghost ping detection, image commands, and tons of commands to try and play with your friends!
- Arrow is a multi-purpose bot ready to boost up your Discord server.
- ServerManager / Assistant moderates your community, kicking and banning members!
- Ichigyou Ruri Bots is a multipurpose bot for entertainment, with Music anytime
- A multi purpose discord bot that allows you to protect your discord server efficiently
- exorium offers interaction commands, moderation, utility and other handy commands. Just wanna hug or improve discord usage? Invite exo!
- Meet Zori, a multi-purpose tag bot that can do many things ranging from moderation to sending some dank memes.
- A minimalist economy, utility, etc. bot for discord
- This bot is designed to be an all-in-one bot which can help people and handle commands all the way from image manipulation to info
- The bot has been verified by Discord, which means that it has a Tick and was made for good purposes. The bot is multi-purpose. You can use the bot for moderation, fun, tickets and more.
- Blip is a multipurpose Discord Bot that has features such as a level system, Moderation Commands and More!
- Filo is a powerful multipurpose Discord bot. Customizable...
- Nia is a multipurpose discord bot! Supporting Moderation, Music, Giveaways and Image Manipulation!
- Yuji is a multi-purpose discord bot. Giveaways, Tickets, Fun, and Music!
- A multi-purpose discord bot with commands for Moderation, Utilities, Music, Image Manipulation and more! Invite the bot now and enjoy!
- Cowboish is the first Identity V Discord Bot made for/and by the idv game community <3
- Bump reminder is a multi purpose disboard bump reminder, which will gain you more disboard bumps than before.
- Vibeon is an easy to use Multi Purpose Bot & there's also Music!
- EternalGaius is a multiuse bot that can be used for Moderation and more!
- The pioneer of Google bots on Discord! Google Search | Weather Search | Userinfo | Animals | Actions | Economy | Shorteners | Coronavirus
- My bot is multifunctional, with Moderation, Util, Fun, Games, Configuration and other sections!
- Kibzix is a multipurpose bot! It knows 100+ commands including mod, fun, stats, info and cool games! It is safe and easy to use!
- Mattic Bot is an all-in-one bot made for servers that need a bot which can manage everything from moderation to fun
- XBot is a multi-purpose bot that is able to do many things ranging from moderation to fun commands to utility commands!
- Xota is a multipurpose and utility bot built with all the server commands an ideal server owner needs to moderate and improve their server.
- Honii is a multipurpose bot created to keep your server alive.
- Spark is a free multipurpose bot with features like Giveaways, Invite Management, Moderation, Economy, Fun, and many more
- A powerful, configurable multifunction bot. I will be there for your every request. Thank you for using me.
- A 100% Brazilian bot developed for multiple functions.
- This is a Custom Discord Bot with a lot of commands for Discord communities (100+ commands). There are Color, Mod, Points, Info, Fun, etc.
- A multifunctional Spanish-language bot for user interaction and community fun!
- Create custom messages that will actively remain as the most recent message in a text channel! Plus other utility & fun commands!
- A multifunctional Spanish-language bot, ideal for starting to add bots to a server from zero. Enjoy the bot!
- Hi, I'm Marina, a multipurpose bot that can help you moderate your server, chat with your friends and create custom commands.
- An advanced Discord bot specializing in ROBLOX integrations, but also multi-purpose!
- Simple Bot is a multipurpose Discord bot with many Commands and Features
- A bot that does a lot, with a wide variety of commands including moderation commands, levelling commands, economy commands and multiple games
- Digibot is a utility, logging, multipurpose bot that has commands that allow for all sorts of things!
- Fear - a fun little bot for your fun little needs!
- It’s time to use Jarvis in your server.
- Bot is a multi-purpose bot ready to skill up and boost up your Discord server. Also features auto-moderation.
- This cool bot has a lot of cool features for moderation, fun, and more!
- Softwaresat Bot is a multipurpose bot with bumping, moderation, fun, story, ranking, logging, and more!
- Chiaki, a Discord bot for all your needs. With memes, utilities, economy, moderation & more, Chiaki is the only bot you'll need.
- FreshBot is a multi-purpose Discord bot with a lot of features, such as music, utility, image manipulation, starboard, and more!
- A bot for music, moderation, levels, messages and more!
- The future of Discord Server Moderation and Management with Auto-Mod, Logs and more.
- Radar is the best multifunctional discord bot with many useful features!
- A Multi-purpose discord bot programmed in Python. Type $help to get a list of commands
- Koharu is a multipurpose bot with various commands to look after and entertain the server, based on the anime Sakura Quest
- Advanced Ticket system per categories, Tickets, Verify, Multipurpose, with DASHBOARD. The most versatile ticket tool. Customizable Tickets
- Maximize your server with the Best bot for the Best of servers!
- I will make this later.
- Discord bot focused on APIs | Translate, watch memes, read XKCD, etc. | Now in Alpha
- A multipurpose discord bot designed for moderation and utility, for servers and users
- DarkBot, very useful for having fun, moderating and much more!
What Is Shadow Deployment?

Shadow deployment is an innovative software deployment strategy. It allows an organization to test new software or updates in a production-like environment before going live. This strategy involves creating a ‘shadow’ or replica of the live environment, where the new software is deployed and tested under real-world conditions, without impacting the live system. To make the deployment more realistic, production traffic or data is mirrored to the shadow environment, but the outputs or responses are not shown to real users.

This method provides an effective way of identifying and addressing potential issues before rolling out a new version. It allows developers to observe the behavior and performance of the new software under realistic conditions, thereby significantly reducing the risk of unexpected issues once the software is live.

Benefits of Shadow Deployment

Shadow deployment substantially reduces the risk associated with deploying new software versions by allowing a parallel run in a shadow environment. This means any problematic behavior of the new software can be observed and corrected before it affects actual users. The shadow environment acts as a real-time testing ground where unexpected behaviors, bugs, or performance issues can be detected early. Since the mirrored traffic does not impact the live environment, the potential for downtime or user disruption is minimized. This level of risk control is particularly crucial for systems that require high availability and for organizations that cannot afford the reputational damage or financial loss of a failed deployment.

One of the key advantages of shadow deployment is the ability to test new software with production traffic. This means the software is subjected to the same conditions it will experience once it goes live, which includes variations in traffic volume, user behavior, and data input.
This real-world testing is invaluable because it exposes the software to scenarios that may not have been anticipated during the development phase. It’s an effective way to observe the interaction between the new software and other systems or services it communicates with, ensuring compatibility and smooth operation.

Shadow deployment can also contribute to better capacity planning of a new software version. By observing the new software’s behavior with actual traffic, it’s possible to measure its resource utilization more accurately. Insights gathered regarding CPU, memory, and storage requirements under real-world conditions allow for better capacity planning. Organizations can use this information to optimize resource allocation, ensuring that the software scales efficiently under load. This preparation helps prevent overprovisioning or underprovisioning of resources when the new version is finally deployed to all users.

The Shadow Deployment Process

Here is the general process involved in carrying out a shadow deployment:

- Planning phase: The first step in the shadow deployment strategy involves defining the scope of the deployment, determining the resources required, and setting a timeline for completion. It is important to identify the specific requirements of the deployment, including any necessary hardware and software, and expected traffic loads.
- Environment setup: This involves creating a replica of the live system, including the same hardware, software, and network conditions. The shadow environment should be as similar as possible to the live system.
- Deployment: Once the shadow environment is ready, the new software version can be deployed. This involves installing the software in the shadow environment and carrying out sanity checks to ensure it is functioning correctly.
- Traffic mirroring: This involves duplicating the live system’s traffic and directing it to the shadow environment.
This allows the new software to be tested under the same conditions it will face once it is live.
- Monitoring and data collection: This final step involves continuously monitoring the software in the shadow environment, collecting data on its performance and behavior.
- Deploying to production environment: Once the team is confident that the software is performing as expected, the new version can be deployed to production.

Learn more in our detailed guide to software deployment process

Shadow Deployment vs. Other Deployment Strategies

Blue-green deployment is a popular deployment strategy that involves maintaining two identical production environments, known as Blue and Green. The Blue environment is live (serving real-time user traffic), while the Green is idle. When a new version of the software is ready, it’s deployed to the idle environment, and after successful testing, the traffic is switched over.

While both shadow deployment and blue-green deployment involve the use of a clone environment, the key difference lies in how the traffic is handled. In blue-green deployment, all the traffic is switched from the old version to the new one. In contrast, shadow deployment involves mirroring a portion of the real traffic to the shadow environment for testing purposes, without affecting the live environment. In addition, in blue-green deployments, the old version is typically destroyed, while in shadow environments it continues to run.

Canary deployment is a deployment strategy where the new version of the software is gradually rolled out to a small set of users before it’s made available to everyone. This allows the team to monitor the performance and gather user feedback before a full-scale rollout.

The primary difference between shadow deployment and canary deployment is the risk factor. While canary deployment exposes the new version to a small set of real users, shadow deployment doesn’t expose the new changes to any real users.
The new version is tested with mirrored traffic in a shadow environment, significantly reducing the risk of negative user experiences. Another difference is that in a canary deployment, the new version is promoted to 100% of the traffic and the old version is discarded, while in a shadow deployment, the shadow environment continues to run.

Feature flags, also known as feature toggles, are a technique that allows developers to enable or disable certain features in a software application. This is done without having to deploy or roll back the software, giving the team more control over the features that the users can access.

While feature flags offer greater flexibility in controlling feature access, they don’t provide the comprehensive testing environment that shadow deployment does. With shadow deployment, you can observe the full impact of the new changes on a clone of your production environment. Note that it is common to combine feature flags with shadow deployments, for example to enable and disable certain features within the shadow environment.

Best Practices for Shadow Deployment

Implementing shadow deployment can be complex, and it’s important to follow certain best practices for maximum effectiveness.

1. Data Protection and Anonymization

When setting up a shadow deployment, ensuring the protection and anonymization of data is paramount. Because this strategy involves the use of real traffic to test new updates, sensitive data could be exposed. Therefore, it’s essential to apply data masking techniques to anonymize user data before it’s used in the shadow environment. Anonymization helps in protecting user privacy and complying with data protection regulations like GDPR. Additionally, access to the shadow environment should be tightly controlled and monitored to prevent data breaches.

2. Resource Monitoring

In a shadow deployment, it’s crucial to monitor resource usage continuously to ensure that the shadow environment does not adversely impact the live system’s performance. Tools and systems should be put in place to keep an eye on CPU, memory, disk I/O, and network bandwidth. If resource utilization in the shadow environment approaches critical limits, there should be automatic scaling or alerts to prevent any spillover effects on the production environment.

3. Selective Traffic Mirroring

Not all traffic needs to be mirrored to the shadow environment. Selective traffic mirroring involves choosing specific types of traffic or requests that are most relevant to the changes being tested. This approach can reduce the load on the shadow system and focus on the most critical or high-risk areas. For example, if an update pertains to a checkout process, only mirroring traffic related to transactions may be necessary.

4. Fail-Safe Mechanisms

To prevent any accidental impact on the live system, fail-safe mechanisms should be an integral part of shadow deployments. These mechanisms include the ability to quickly divert or stop the mirrored traffic, automatic rollback capabilities if issues are detected, and real-time monitoring with alerts sent to relevant staff.

5. Iterative Testing

Shadow deployment should be an ongoing process, enabling iterative testing. This involves deploying changes incrementally and continuously observing their effects in the shadow environment. The feedback and data collected from each iteration are used to improve the update, leading to a more stable and reliable deployment when the changes are finally released to the live environment. The iterative approach also allows for a more manageable workload for the development team and a smoother transition to production.
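The selective-mirroring and fail-safe practices above can be sketched in a few lines of Python. This is a hedged illustration only, not a production mirroring proxy: the request shape, `MIRRORED_PREFIXES`, and the injectable backend callables are assumptions made for the example (in a real system they would be HTTP calls to the live and shadow environments).

```python
# A minimal sketch of selective, fail-safe traffic mirroring. Requests are
# modeled as plain dicts and the backends as callables so the logic is visible.

MIRRORED_PREFIXES = ("/checkout",)  # selective mirroring: only the paths under test


def handle_request(request, live_backend, shadow_backend, mirroring_enabled=True):
    """Serve the user from the live backend; copy selected requests to the shadow.

    The caller only ever sees the live response. Any shadow failure is
    swallowed (fail-safe), so it can never affect real users.
    """
    live_response = live_backend(request)

    if mirroring_enabled and request["path"].startswith(MIRRORED_PREFIXES):
        try:
            shadow_backend(dict(request))  # shadow response is observed, never returned
        except Exception:
            pass  # in practice: log and alert, but never propagate to the user

    return live_response
```

The `mirroring_enabled` flag doubles as the "quickly stop the mirrored traffic" kill switch: flipping it off disables mirroring instantly without touching the live path.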
Advanced Progressive Delivery in Kubernetes with Argo Rollouts and Codefresh

Codefresh offers advanced progressive delivery methods, including blue/green and canary deployment, by leveraging Argo Rollouts, a project specifically designed for gradual deployments to Kubernetes. Through Argo Rollouts, Codefresh can perform advanced canary deployments that support:

- Declarative configuration – all aspects of the blue/green deployment are defined in code and checked into a Git repository, supporting a GitOps process.
- Pausing and resuming – pausing a deployment and resuming it after user-defined tests have succeeded.
- Advanced traffic switching – leveraging methods that take advantage of service meshes available on the Kubernetes cluster.
- Verifying new version – creating a preview service that can be used to verify the new release (i.e. smoke testing before the traffic switch takes place).
- Improved utilization – leveraging anti-affinity rules for better cluster utilization to avoid wasted resources in a canary deployment.
- Easy management of the rollout – view status and manage the deployment via the new Applications Dashboard.
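For the "monitoring and data collection" step of a shadow deployment, a common technique is to compare each shadow response against the live response for the same mirrored request. The sketch below is a hypothetical illustration (the response dicts and `IGNORED_FIELDS` are assumptions made for the example), not part of any particular product:

```python
# Hypothetical sketch: diffing shadow responses against live responses.
# Responses are modeled as plain dicts; fields that legitimately differ
# between environments (timestamps, request IDs) are excluded.

IGNORED_FIELDS = {"timestamp", "request_id"}


def diff_responses(live, shadow, ignored=IGNORED_FIELDS):
    """Return the field names on which the live and shadow responses disagree."""
    keys = (set(live) | set(shadow)) - set(ignored)
    return {k for k in keys if live.get(k) != shadow.get(k)}


def mismatch_rate(pairs, ignored=IGNORED_FIELDS):
    """Fraction of (live, shadow) response pairs showing any disagreement."""
    if not pairs:
        return 0.0
    bad = sum(1 for live, shadow in pairs if diff_responses(live, shadow, ignored))
    return bad / len(pairs)
```

A rising mismatch rate in the shadow environment is exactly the kind of early signal that lets a team fix the new version before any user-facing rollout.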
I spent the entire day (and night) yesterday editing a video. This video, which was on my camcorder, covered a span of two years. We got the camcorder a month or so before Hurricane Katrina hit, and the first shots filmed were taken after we moved to a small camp (shack) to live in while our area was put back together. The camp is all about "getting back to nature", so we did get some pretty interesting nature shots. If you aren't squeamish, you can watch along as we watched a snake fight a large fish, for example. Anyway, I'm veering off topic. So there's all kinds of things on this camcorder, from camp nature shots to baby birthday parties, and I wanted to finally download all this stuff onto my computer, pick it all apart, and save it all as separate movies. My granddaughter's 2nd birthday party was held last week (or was that two weeks ago?), and I wanted to pull out the best parts, make a movie of it, and upload it to our new, private family site at myfamily.com. So...I'm still a newbie at this whole video thing. I've managed to create some short videos and upload them without edits a couple of times, but I've never gone through the process of editing clips, adding titles, etc. Nor have I ever worked with anything that was more than just a few seconds long. If you have never worked with a video that's a few minutes long, then you may not realize how much space those few minutes take up. I sure didn't. Anyway, after several hours, I finally figured out how to cut out parts, add transitions, etc. I tweaked the birthday video down to a measly 3 minutes, 45 seconds, which I considered "short". Turns out that 3 minutes, 45 seconds of video translates to a 64 MB file. Ouch. With my not-so-speedy satellite internet connection, that meant 2 hours of upload time only to find out at the end that the file got corrupted along the way. /Sigh. There's GOT to be a better way. 
Actually, there very likely IS a better way, but after an accumulated hour or two of searching, I never could figure out what that way might be. If there is a better way, it needs to be promoted better. If there's not a better way, someone needs to create one. We need easier video editing tools. We need smaller resulting file sizes. We need some video/techy guru to show us the way. Or at least show me the way. Maybe you already know the way. 😉 I want to participate more fully in the Universal search action. I just need to get past this video editing hell I found myself in yesterday.
MRP is set to not allow historical dates…however PO suggestions are showing up for Feb 2020 and Mar 2020…inventory on hand is below minimum stock level…but why would it drive suggestions as past due that far back when lead time on product is only 15 days? A VERY common mistake that can cause this (and you may have this as well)… What is the date that you are running MRP for? If you submitted MRP run on a nightly schedule, and forgot to choose the “Dynamic” checkbox, then you froze the starting date… that date is the date that MRP uses to determine when “history” starts. Instead you should use the following setting here: Thank you. We have it set up to run nightly…and when I open the Process MRP screen…the date defaults to today…would that override the historical date? Go into your system agent. Under system task agent / schedules click on the task for your nightly MRP run then on actions click view task parameters. This will show the start date you are using. Even in light of the things stated here, we continue to get order by dates that are in the past. There is something else going on here. It’s not our MRP runs, because I manually launch those as needed. We have an unusual situation with very long lead times on our material. Sometimes, historical dates are used, sometimes not. Have yet to determine exactly why that happens. ALSO remember… that if you have a job that is already released, and if that job’s start date is in the past, the system WILL still have demands in the past, and will still try to resolve those, especially if the demands are make/purchase direct. The historical date flag is to prevent NEW “historical” suggestions from being created. Thank you Tim…I have a New PO Suggestion for a job that was closed…and the suggestion only shows up in the Search Query for New PO suggestions and not in time phased. 
I had found that the line was deleted off of the job…so I had it re-added at 0 qty thinking that this would resolve it…it has not…it continues to drive a PO suggestion for May 2018…any ideas?

Have you already tried MRP logging level at “MRP and Scheduling”, and checked for clues in the logs?

I figured out how to run the logs…thank you…I will try this now.

update: Ran the log…I see the PO Suggestion being created…but still not sure why Time Phase does not show this PO suggestion…I can only get to it through the New PO suggestion query…

15:18:25 Starting Part Level: 1-0. Wait Time 00h:00m:00s:000ms, Parts 0, Jobs 0
15:18:25 Processing Part:97013478. V600
15:18:25 Processing Part:97013478
15:18:25 Parameters: Receive Time -> 1; Planning Fence -> 0; Delta In -> 1; Delta Out -> 1; Lead Time CutOff -> , Use Dynamic DOS -> False, Allow Consume Min -> False
15:18:25 Deleting suggestions
15:18:25 Deleting unfirm jobs for part 97013478
15:18:25 Processing non-stock transactions for Part:97013478.
15:18:25 Creating new PO suggestion for Part:97013478 Date: 11/13/2020 12:00:00 AM Quantity: 3.00000000 Number: 0
15:18:25 Done with New Suggestion transaction
15:18:25 Processing stock transactions for Part:97013478.
15:18:25 Beginning Balance 14.00000000
15:18:25 Done with Part 97013478

Wonder if it could be filtered out of Time Phase because of site, plant, cutoff, etc…? I might create a BAQ for the table SuggPODtl - just to see if there is something “different” in one of the fields for that suggestion record?
Senior Software Engineer (Remote) Do you like writing mean and clean code? Broadlume’s on a mission to transform the Flooring industry through integrated technology. We are scaling quickly -- our platform processes nearly 1 million leads a year and we need your help to build innovative solutions with this data for our clients. We're looking for someone with a passion for collaboratively building products and services, who enjoys working solo as well as pair programming, and maintains a beginner's mindset in approaching problems. We are a remote first team that values written communication and a collective effort toward maintaining a collaborative team culture. Documentation and async communication are key to our success and allow us to solve problems with input from engineers across teams and timezones. Our deployment pipeline, testing practices and strong code review culture gives every engineer the freedom and support to fix problems quickly and make improvements when they see fit. We champion automation and our engineers regularly create slack hooks, bash commit scripts, email alerts and other tools that make our work lives easier. 
**This is a full time, remote position**

WHAT YOU’LL DO:
- You will contribute to all phases of the development lifecycle, from ideation to specification
- You will write clean, maintainable, and efficient code
- You will design robust, scalable, and secure features
- You will advocate for best practices (test-driven development, continuous integration, refactoring, and code standards)
- You will take an active role in your professional development by serving both as a mentor and mentee on the Engineering team
- You will drive continuous adoption and integration of relevant new technologies across the entire org, building team cohesion

WHO YOU ARE:
- 5+ years of proven work experience as a software developer
- Experience using technologies and frameworks like — but not limited to — our tech stack: C#, React, Typescript, Next.js, SQL, DocumentDB
- Desire to always learn and grow your professional skill set, both technically and otherwise
- Ability to communicate complex ideas clearly through concise written and verbal communication
- A philosophy that quality code and test coverage allow a team to move faster
- Experience developing highly interactive applications
- A firm grasp of object-oriented analysis and design
- Passion for writing great, simple, clean, efficient code
- Good knowledge of relational and non-relational databases

**Don’t have experience with any of them? No problem — deep experience with any object-oriented language and willingness to learn our tech stack goes a long way.**
- Health Care Plan (Medical, Dental & Vision) - Retirement Plan (401k, 4% match) - Life Insurance - Unlimited Paid Time Off - Family Leave - Short Term & Long Term Disability - Work From Home + Remote Office Allowance - Wellness Resources & Lifestyle Perks - Calm Premium App Subscription - Stock Option Plan Who We Are: Our mission at Broadlume is pretty simple: simplify the complicated world of digital marketing for the flooring industry. The opportunity is massive, and we have the team to execute the vision…except, well, for you. At Broadlume, we are committed to working with and providing reasonable accommodations to individuals with disabilities. If you need a reasonable accommodation because of a disability for any part of the employment process, please email us at firstname.lastname@example.org and let us know the nature of your request and your contact information. Broadlume is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.
The Map Editor is a tool that allows users to generate their own maps. It utilizes a grid-based map system, alongside an event system.

- Open the Map Creator, located in the Map Creator file.

The user can place rooms by selecting a room from the left sidebar, then clicking an empty space on the grid. Clicking on a placed room will reveal a box where the user can manually adjust the room's rotation or change events (for example, the user can choose whether or not SCP-106 will spawn in the tunnel) and their probability to occur. With a room in the grid, you can also rotate the room by selecting and dragging in the desired direction. The user must click the "Deselect" button before selecting a different room. If the user clicks one of the saved maps, the selected map will be opened.

|Class-D Cells and SCP-173's Containment Chamber before the breach. Placed automatically in every map.|
|SCP-173's Containment Chamber after the breach. Placed automatically in every map.|
|SCP-008's Containment Chamber|
|SCP-012's Containment Chamber|
|SCP-049's Containment Chamber|
|SCP-079's Containment Chamber|
|SCP-106's Containment Chamber|
|SCP-372's Containment Chamber|
|SCP-513's Containment Chamber|
|SCP-714 and SCP-1025's Containment Chambers|
|SCP-895's Containment Chamber|
|SCP-914's Containment Chamber|
|Underground Storage with 2 SCP-939 instances|
|SCP-860 Test Room|
|Timed Gas Lockroom|
|Small Testing Room|
|Two Way Hallway|
|Two Way Hallway with a Large Fan|
|Variant of the T-Shaped Hallway|
|Four Ways Room|
|Office Variation of the Two Way Hallway|
|Office Variation of the Corner Hallway|
|Office Variation of the T-Shaped Hallway|
|Office Variation of the Four Ways Room|
|Heavy Containment Zone Variant of the Endroom|
|Two Way Metal Corridor|
|Two Way Metal Corridor with SCP-173 Spawn|
|Corner Variant of the Metal Corridor|
|T-Shaped Variant of the Metal Corridor|
|Four Ways Variant of the Metal Corridor|
|Three Ways Gas Catwalk|
|Large Testing Chamber|
|Small Server Room|
|Variant of the Server Rooms|
|Tesla Coil Hallway|
|Entrance to Gate A|

- In v0.7.2, the files for the map editor can be found in the game's files. If the user has Blitz3D, they can open the 'MapEditor.bb' file and run it, though the editor is barely functional.
Why does $\frac{dU}{d\theta}=0$ indicate equilibrium?

Question. My confusion comes from a problem and its solution (both partially). Here is the problem. Here is the solution. Now, what baffles me is the highlighted part in the solution; I do not understand it.

My Attempt. I tried to understand the derivative, by definition, as the rate of change in potential energy following some change in the angle $\theta$. But why does its being zero represent equilibrium? Does equilibrium occur at some turning point of potential energy? I am puzzled.

Comment. Any kind of help would be appreciated. Personally I prefer a short but helpful hint. Thank you in advance!

The energy of a system is lowest when it is in equilibrium. The total energy is $T=U+K$, where $U$ is potential energy, and since the particles are just lying on a smooth surface, the kinetic energy $K$ is $0$. When is $T$ minimum? Use the maxima-minima theorem.

Thanks for the comment Koro. But I still don't get it: why is a system's energy lowest in equilibrium? Say no external forces act on an object; then its mechanical energy, which is potential plus kinetic, is conserved. Then no matter whether it is accelerating (i.e. not in equilibrium; potential transforms into kinetic in the process) or stays still (i.e. in equilibrium), its total energy stays the same, doesn't it? Thanks.

@Koro Hmm, but conservation of mechanical energy excludes gravitational pull as an external force, I think? Well, at least in my textbook.

@Koro Change in energy of the system = work done on the system by an external force. In this case (with an appropriate sign convention), $dU=Fa\,d\theta\implies dU/d\theta =Fa=0$ (the net external force $F=0$ since the system is in equilibrium). I don't currently remember more intricacies of this. :'(

No worries Koro. Though I still don't understand, I am giving you an upvote for the concept of lowest total energy. Thank you sincerely. :)

@Koro Your question isn't about mathematics, but physics, and would be better asked in the physics forum.
Well, I cannot say what you said is totally wrong, but mechanics is a legitimate topic of maths, especially in the UK. Otherwise, the classical-mechanics tag would be non-existent; by the way, there is even a quantum-mechanics tag, which sounds very physics. Indeed, as long as a model can be set up, there is no clear boundary between maths and physics; physics is to some extent just maths with some formulas. Anyway, I am not arguing further. Thanks for the advice.

@PaulSinclair When you were discussing the mathematical model, yes, it was a mathematical question, but in your discussion with Koro, the mathematical aspect was answered, and the question evolved into a discussion of the physics leading to the model. At that point Koro couldn't help anymore. Why? Because their expertise is in mathematics, not the physical underpinnings. At that point, you should consider where to find the better experts on those questions, and that is the physics forum. There is plenty of overlap, but the physics forum is where you will find the greater expertise on this question.
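The reasoning sketched in the comments can be written out compactly. This is the standard one-degree-of-freedom argument from classical mechanics, not the book's exact solution:

```latex
% For a conservative system described by a single coordinate \theta,
% the generalized force is minus the slope of the potential energy:
Q_\theta = -\frac{dU}{d\theta}
% Equilibrium means zero net generalized force, hence
Q_\theta = 0 \iff \frac{dU}{d\theta} = 0
% The sign of the second derivative distinguishes the cases:
\frac{d^2U}{d\theta^2} > 0 \ \text{(local minimum: stable)}, \qquad
\frac{d^2U}{d\theta^2} < 0 \ \text{(local maximum: unstable)}
```

So equilibrium does occur exactly at a turning (stationary) point of the potential energy; "lowest energy" is the extra condition for that equilibrium to be stable.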
If you’re a lover of slithery snakes or bouncy frogs, then you’ll likely have the chance to ship live reptiles at some point in your life. This article will give you details about shipping reptiles with FedEx, including how much it will cost and what types of reptiles are allowed.

FedEx Reptile Shipping In 2022

Ship Your Reptiles and Reptiles2You are two of the most popular reptile shipping companies, and both are reputable specialists in shipping reptiles around the United States. Both allow users to ship snakes and venomous reptiles. Shipping rates for each are similar, with Reptiles2You taking out the middleman to reduce costs. If you still have questions, then be sure to read about all of the ways that FedEx can help you ship reptiles.

Does FedEx Ship Reptiles?

Like a pizza delivery man, FedEx has a “snake guy” who delivers reptiles to people in the US, and even overseas. But shippers need a certification first, and getting the certification can be difficult. You need to have a reptile breeder’s permit and a quarantine period of at least two weeks, which means getting the permit and quarantine completed before shipping. Additionally, to receive this certification, you’ll need to create a FedEx account, construct a box prototype for shipping reptiles, pass a series of tests, and complete paperwork. Because of this, most people can’t afford to get this certification. Note also that the number of animals allowed under each certification is determined by the product; it does not increase over time, only with a growing company size.

How Do I Pack Reptiles For Shipping With FedEx?

A lot of work goes into packing up a live animal to ensure that it arrives safely. If you’re interested in following the same process for transporting any other animal, check out this helpful article. Before you ship your package, gather the supplies that you will need.
Reptile Shipping Guide: FedEx offers a free reptile shipping guide on their website. The free guide includes information on shipping requirements, how to ship and pack your reptile, and common myths about shipping reptiles.

It’s easiest to prepare the shipping box as soon as you receive your supplies. Boil the hot pack two hours before shipping, and freeze the cold pack overnight. At this point, you are going to need the insulating foam panels for the bottom and sides of the box. After that, create ventilation holes in the box by punching 1/4-inch holes with a screwdriver, one hole on opposing sides. Finally, make a hole at the top of the box. This hole lets you feed a string through so that you can hang the box; it’s always nice to have that option, or you can just tape the string down.

Nesting material gives the animal’s container a snug place to sit inside the box. All the supplies you need are available at your local hardware store or big box store. Assemble your supplies before you start. You’ll need a 2-in-1 nail or screwdriver with two separate sides, plus some tape and packing material. Crumple up a newspaper or towel and place it in the bottom of the shipping box, then place the animal’s container on top.

After your shipment is ready, put a sticker on the lid with the red line visible on the inside of the box. Take a 12-inch-wide x 24-inch-tall x 1/2-inch-thick piece of cloth and fold it in half as shown. Check the bag to make sure there are no holes or frayed seams, and then secure the bag with a zip tie. Make sure the entire inside of the cup is covered, then tape the rim of the cup on the inside. Do not cover the air holes. After that you can easily label the cup or bag with the species and sex using a permanent marker. There are some reptiles that are shipped with their skins on.
There are also some reptiles that get shipped with a belly full of parasites. Also, be sure to refrain from feeding the animal for one week before shipping to prevent regurgitation during transport. Put the animal into the container, and then shut the lid. Pad the container to absorb any urine or other waste. The container will be placed in the box: nestle the cup or bag into the nesting material so that it cannot shift around inside the box. Then place the insulating foam panel/lid on top, with the cold or heat pack (if required) facing up.

All packages must be labeled with the name and address of the shipper and the name of the recipient. In addition, the shipper-supplied address must be the same as the address on file with the government. If the shipper-supplied address and the information on file with the government are different, you may only use the address on file with the government. You may not use an address for a different purpose.

Mark the outside of your package with a complete list of the animal(s) inside, including the quantities, common names, and scientific names. Attach a label indicating which copy of the documents is the original.

Once you’ve got all the parts together, attach your shipping label: tape it to the top of the box or put it in a plastic pouch before you pack your box. Then take the package to a FedEx facility that is authorized to accept live animal shipments.

How Much Does It Cost To Ship A Reptile With FedEx?
Shipping a live animal costs about the same as shipping an ordinary package: FedEx prices the shipment by weight, and the FedEx price tool makes estimating the cost easy. You will normally pay between $22 and $35 for a reptile shipping kit complete with foam insulation, zip ties, a cloth bag, hot and cold packs, and the necessary labels.

Where Do I Drop Off My FedEx Reptile Shipment?

At the drop-off location, you will be asked for your contact information (name, address, phone number, etc.). If you do not know what is required, ask your animal supplier. You will need your FedEx account information as well. Only certain FedEx locations are authorized to accept live reptile shipments. You can find a location near you by searching for FedEx on Google Maps, which will show the closest Ship Centers; since this only lists FedEx hubs, you may want to widen the search to get more options.

You might also want to read our posts on FedEx food shipping, FedEx home shipping, and FedEx pet shipping.

Although FedEx does not make it easy to ship live reptiles, shipping them with FedEx is still far easier than shipping many other items. It also makes sense to let a specialist company do the work for you, rather than go through the trouble of gathering all the documentation yourself.

- Fedex Pet Shipping (can You Ship It, Price, Steps + More)
- Fedex Tv Shipping (can You Ship It, Price, Steps + More)
- Fedex Tire Shipping (can You Ship It, Price, Steps + More)
- Fedex Shipping Restrictions (what You Can & Can’t Send)
- The Calcium in Feeding Dubia Roaches for Stronger and Hardier Feeders
- Can You Reroute A Fedex Package? (all You Need To Know)
- Fedex Package Going Wrong Way (why + Can It Be Fixed?)
- Shipping Surfboards Fedex (can You Ship It, Price, Steps + More)
- Does Petsmart Sell Iguanas? (all You Need To Know)
- Fedex Alcohol Shipping (can You Ship It, Price, Steps + More)
Flash firmware with ulab on M5Stack ESP32 device

Hi there, I am new to MicroPython development and I am struggling to get the correct libraries for my project. I need numpy for my project and I found ulab as a numpy-like library. I am using your project to achieve this. I've followed the steps listed in the doc, but I do not understand which file I should flash on my device in order to import ulab as a library. It probably sounds like a silly question, but I'm a rookie in this environment. Thank you for your time. Max

If you could successfully compile the firmware, then the binary file with the extension .bin should be in the /micropython/ports/esp32 folder.

Yes, I can compile and flash the .bin file that I download from the MicroPython page (here), as shown in the doc you sent me. But when I try to import ulab after flashing, I get a module error since it does not find the module. How can I solve this issue?

Can you post the compilation output? How do you make sure that ulab is indeed in the firmware?

I am following the esp32-based-boards doc but, as mentioned here, I fail the installation. This is just how I flash the firmware with the .bin from the link above.

max@Max:~$ python -m esptool --chip esp32 --port /dev/ttyUSB0 erase_flash
esptool.py v4.7.0
Serial port /dev/ttyUSB0
Connecting....
Chip is ESP32-D0WDQ6 (revision v1.0)
Features: WiFi, BT, Dual Core, 240MHz, VRef calibration in efuse, Coding Scheme None
Crystal is 40MHz
MAC: 30:ae:a4:f5:a0:6c
Uploading stub...
Running stub...
Stub running...
Erasing flash (this may take a while)...
Chip erase completed successfully in 15.3s
Hard resetting via RTS pin...
max@Max:~$ python -m esptool --chip esp32 --port /dev/ttyUSB0 --baud 115200 write_flash -z 0x1000 /home/max/Downloads/ESP32_GENERIC-20240602-v1.23.0.bin
esptool.py v4.7.0
Serial port /dev/ttyUSB0
Connecting....
Chip is ESP32-D0WDQ6 (revision v1.0)
Features: WiFi, BT, Dual Core, 240MHz, VRef calibration in efuse, Coding Scheme None
Crystal is 40MHz
MAC: 30:ae:a4:f5:a0:6c
Uploading stub...
Running stub...
Stub running...
Configuring flash size...
Flash will be erased from 0x00001000 to 0x001a8fff...
Compressed 1734240 bytes to 1142447...
Wrote 1734240 bytes (1142447 compressed) at 0x00001000 in 100.7 seconds (effective 137.8 kbit/s)...
Hash of data verified.
Leaving...
Hard resetting via RTS pin...

I think we're going off on a tangent here: what I'd like to see is that ulab is indeed compiled into your firmware. What you've posted relates to the uploading of the firmware, and since that's linked to your hardware, I can't really comment on that.

I have solved the issue. I'm writing down the steps I took because they may be useful to somebody.

1. Download the esp32-cmake.sh script from the micropython-ulab GitHub page.
2. Make it executable with chmod +x esp32-cmake.sh, then run the script: ./esp32-cmake.sh.
3. At the path micropython/ports/esp32/build-GENERIC/firmware.bin you now have the new firmware for your ESP32 device with the ulab module built in.
4. Flash the firmware: esptool.py --chip esp32 --port /dev/ttyUSB0 --baud 115200 write_flash -z 0x1000 micropython/ports/esp32/build-GENERIC/firmware.bin

Issues during the process

First error

/home/max/lib-esp32/ulab/code/numpy/random/random.c:82:28: error: 'MICROPY_PY_RANDOM_SEED_INIT_FUNC' undeclared (first use in this function); did you mean 'MICROPY_PY_URANDOM_SEED_INIT_FUNC'?
generator->state = MICROPY_PY_RANDOM_SEED_INIT_FUNC;
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
MICROPY_PY_URANDOM_SEED_INIT_FUNC
/home/max/lib-esp32/ulab/code/numpy/random/random.c:82:28: note: each undeclared identifier is reported only once for each function it appears in
[1424/1468] Building C object esp-idf/main/CMak...max/lib-esp32/ulab/code/ndarray_operators.c.obj
ninja: build stopped: subcommand failed.
ninja failed with exit code 1 -e
See https://github.com/micropython/micropython/wiki/Build-Troubleshooting
make: *** [Makefile:54: all] Error 1

The error message indicates that MICROPY_PY_RANDOM_SEED_INIT_FUNC is undeclared, and it suggests that you might have meant to use MICROPY_PY_URANDOM_SEED_INIT_FUNC instead. To resolve this issue I had to modify the random.c file.

$ nano /home/max/lib-esp32/ulab/code/numpy/random/random.c

Change

generator->state = MICROPY_PY_RANDOM_SEED_INIT_FUNC;

to

generator->state = MICROPY_PY_URANDOM_SEED_INIT_FUNC;

After this change:

$ cd /home/max/lib-esp32/micropython/ports/esp32
$ make clean
$ make

Second error

I had to run the ESP-IDF installation.

$ cd /home/max/lib-esp32/micropython/esp-idf
$ ./install.sh

After installation, you need to set up the environment variables, so run:

$ . ./export.sh

Third error

I had to change the path because the build system wasn't finding the ulab module's CMake file.

$ nano /home/max/lib-esp32/micropython/ports/esp32/Makefile

Change

USER_C_MODULES = /ulab/code/micropython.cmake

to

USER_C_MODULES = your_correct_path

$ make clean
$ make

Check that the ulab module is correctly installed

Once you have flashed firmware.bin, check that the ulab module can be imported and used:

$ screen /dev/ttyUSB0 115200
>>> import ulab
>>> print(ulab.__version__)
>>> from ulab import numpy as np

If everything goes right, you have ulab installed and you can use numpy. These are the path and the errors that I found along the way. Thank you @v923z for the support. I can close the issue now.
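For a scriptable version of that REPL check, a tiny self-check function works too. This is only a sketch: it assumes nothing beyond ulab's documented `__version__` attribute, and it degrades gracefully when run under plain CPython, where ulab does not exist.

```python
def check_ulab():
    """Return the ulab version string, or None when the module is absent.

    Intended to be pasted into the board's REPL after flashing; under
    plain CPython (no ulab) it simply reports the module as missing.
    """
    try:
        import ulab
        return ulab.__version__
    except ImportError:
        return None

version = check_ulab()
print("ulab version:", version if version else "not compiled into this firmware")
```

If this prints a version string on the board, the `from ulab import numpy as np` import should work as well.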
MySQL regex to replace leading zeros in IPv4 addresses

I cannot make a MySQL regexp which would remove the zeros at the beginning of each of the number sequences of an IPv4 address. The problem is that the validation should be provided at the database level, and I don't know how to formulate the regex which would replace all occurrences of 0., 00., .0, .00 with an empty string in my IPv4 address. I want my regex to do something like this: if I have <IP_ADDRESS> entering the database, it should be saved as <IP_ADDRESS>. Thank you!

MySQL does not support regex replacement natively. Are you planning to do this in a trigger? It can be done with a complex string of REPLACE() calls.

What do you mean by "the validation should be provided at the database level"? It doesn't seem you are talking about validation at all (i.e. verifying the number ranges are correct), but rather string manipulation. Are you pulling data from the database and need to replace zeroes before viewing? Are you inserting data into the database and need to remove zeroes before doing so? Your use case is not clear, and it is very likely that you should be doing this at the application level, likely without even requiring the use of a regex, but rather a simple string replacement. This sort of thing is super easy with any scripting language with a functional regular expression substitution method, but super obnoxious in pure MySQL.

I need to insert IPv4 addresses into the database, but not from the application level. In my trigger I have to make sure that the address does not have zeros in the wrong places. Isn't this a kind of validation, Mike? And no, I don't have to do this at the application level.

By utilizing the MySQL functions INET_ATON() and INET_NTOA() you can reliably convert an incoming IPv4 address which has leading zeros into the same string without leading zeros. Wrap INET_ATON() with INET_NTOA() to convert the IP address first to its integer value, and then back to a dotted quad.
IP with leading zeros in various places: mysql> SELECT INET_NTOA(INET_ATON('<IP_ADDRESS>')); +-----------------------------------------+ | INET_NTOA(INET_ATON('<IP_ADDRESS>')) | +-----------------------------------------+ | <IP_ADDRESS> | +-----------------------------------------+ And without leading zeros for comparison: mysql> SELECT INET_NTOA(INET_ATON('<IP_ADDRESS>')); +--------------------------------------+ | INET_NTOA(INET_ATON('<IP_ADDRESS>')) | +--------------------------------------+ | <IP_ADDRESS> | +--------------------------------------+ Note: This will return NULL if the input IP address was not a valid address. It won't return the original string or strip leading zeros from a bad IP address: Faulty IP address with leading zeros: mysql> SELECT INET_NTOA(INET_ATON('888.777.123.123')); +-----------------------------------------+ | INET_NTOA(INET_ATON('8<IP_ADDRESS>')) | +-----------------------------------------+ | NULL | +-----------------------------------------+ +1 for this trick! Awesome. My Solution was less awesome
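For comparison outside the database, the same round-trip idea is easy to express in Python. This is a sketch, not MySQL code (the function name is made up for illustration): like INET_NTOA(INET_ATON(...)), it strips leading zeros from each octet and returns None, mirroring MySQL's NULL, for invalid input.

```python
def normalize_ipv4(ip):
    """Mimic MySQL's INET_NTOA(INET_ATON(ip)) normalization: strip leading
    zeros from each octet, returning None (MySQL's NULL) for invalid input."""
    parts = ip.split(".")
    if len(parts) != 4:
        return None
    octets = []
    for p in parts:
        # Each octet must be a plain decimal number in 0..255.
        if not p.isdigit() or int(p) > 255:
            return None
        octets.append(str(int(p)))  # int() drops the leading zeros
    return ".".join(octets)

print(normalize_ipv4("010.020.003.004"))   # -> 10.20.3.4
print(normalize_ipv4("888.777.123.123"))   # -> None
```

Note that the MySQL round-trip also accepts some shorthand address forms that this sketch rejects, so the behaviors match only for well-formed four-octet addresses.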
Computers have drastically changed the world over the last couple of decades, and they have changed how we humans communicate and interact with each other. As the powers of Artificial Intelligence and Machine Learning continue to improve, we have helped computers communicate with humans as well. Whereas once upon a time you needed to be a computer programmer to understand the weird languages and error codes of computing, we are now getting close to the point where computers can speak to us in everyday languages like English. When computers start to talk like people, it’s no surprise that businesses will find ways to use chatbots (or virtual assistants) to provide a new interface for customers to interact with the company. The role of the customer service agent in many cases is one of interpreting questions from customers, finding the right information within the business or pushing the right buttons, and then responding to customers. For example, you might call up your bank to open a new transaction account. The customer service representative at the other end of the call is listening to what you’re saying, sustaining a conversation with you and asking you questions, filling out a form on their computer, and then informing you of the outcome. When the interaction is broken down like this, it becomes easier to see how we might use a computer to achieve many of the same goals - after all, the human operator is often just an interface for the computer because they are just entering information into the computer based on what the customer says. The computer already has the body of knowledge related to the company’s policies, shipping details, customer relationship management, marketing, and so on. So what’s stopping computers from taking over the world? The tricky part of the problem has always been language. Computers natively speak in binary - sequences of 0s and 1s. 
The translation of English sentences into 0s and 1s is not so simple - in fact, anybody who has learnt English as a second language can tell you that the whole language is not so simple. It’s full of rules and exceptions to those rules, and then people add on metaphors and idioms, and that’s before we add colloquialisms into the mix. Natural Language Processing (NLP) is the field of computer science that focuses on interpreting language using computers. The two main problems are Natural Language Understanding (NLU) and Natural Language Generation (NLG) - in other words, being able to listen or read, and then being able to speak. There are a bunch of sub-fields related to NLP that you may have heard of: text processing, sentiment analysis, language translation, conversational intelligence, text filtering, information retrieval, and response generation. These all rely on computers being able to understand words and sentences from humans, even if we aren’t using the Queen’s English. Engineers have been working on these problems for decades. Chatbots, then, are all of this implemented in the real world by software developers. In a general sense, they aim to provide an easy-to-use, natural language interface for humans to communicate with computers. In a more specific sense, chatbots allow customers to interact with a company and its knowledge base via its computers. Now, let’s take a look at some of the use cases and applications of state-of-the-art chatbot technology. The most common application of the chatbot is the customer service portal. Many companies and websites already have question banks or FAQs, but the customer still has to spend a long time scrolling through many questions, or cleverly using the right key terms in a search bar to bring up the right questions.
With the power of NLP and a chatbot, customers can strike up a conversation instead, and ask questions in a natural way that don’t necessarily have to match the question/answer pair as written in the question bank. For example, the question in the database might be “How do I book a flight?” but the user might type “I need to get a flight ticket” - the NLP engine needs to be able to resolve these as being the same question. The chatbot can try to infer intention in the question, remember previous parts of the conversation to provide more context, and detect when the customer is getting agitated and a human needs to take over. There are advantages for both the company and the customer here - the company can have fewer human support agents, and customers get 24/7 support and don’t have to wait in queues. All this results in a better customer experience and ultimately increases the net promoter score for the company. Amtrak, the North American train and railroad operator, has a great example of successfully using chatbots to augment their customer service operations. Julie, their virtual assistant, understands queries from users such as requests for train schedules or rules around luggage, and points them in the right direction. The chatbot doesn’t need a sophisticated or fancy design, just a way for messages to get to and from Julie. Within a year of putting Julie on their website in 2012, Amtrak answered over 5 million questions, saved over $1 million in customer service expenses, and generated more revenue by making the sales process simpler for customers and adding on upsells to the interactions. Something that’s important is that the limitations of Julie are well understood, and the system redirects conversations to human support agents when the questions are too difficult to understand. Chatbots can also be used as a sales funnel to help companies understand what products specific customers need, and provide information to help them make the purchasing decision. 
For example, we at ElementX developed an insurance chatbot for Cove Insurance that integrates into Facebook Messenger, which interacts with the customer to ask them the questions that a salesperson or insurance agent would normally ask. Based on these interactions, the chatbot can produce quotes for different insurance policies, or help submit a claim with the customer directly with photos on their phone. The key point here is that the chatbot isn’t just answering questions - it’s also collecting the necessary information from users in an automated way. Chatbots are also great in situations that don’t have any sales involved. Some interesting chatbot applications include helping users find recipes for dinner, learning new languages, or planning travel itineraries and finding directions. A particularly cool one is National Geographic’s chatbot that was used to promote their TV show Genius, which allowed users to interact with Albert Einstein or Pablo Picasso. This demonstrated how chatbots can be given some personality in the way that they respond to humans, and they don’t have to be stilted and robotic. At the same time, it helped increase customer engagement between users and the brand, with the average conversation lasting 6-8 minutes! It’s important to note that using a chatbot isn’t necessarily about getting rid of humans and pursuing automation at all costs - in many cases, chatbots help increase engagement and sales by adding another channel for customers to interact with the company. It can help with customers who are nervous about talking to people, or customers who have embarrassing questions. It can support a more natural form of communication for a new generation of digital natives who prefer to text or write messages rather than call people on the phone. It can also free up staff time from routine queries and allow them to focus on delivering better results for complex enquiries, and it can help protect staff from particularly aggressive customers. 
Chatbots are also helpful for maintaining a global presence, and allowing customers to interact with your company at any time, even when most of your workforce is asleep. But ultimately, chatbot technology should be targeted towards augmenting human effort, not replacing it. In the next article, we explain a little bit about how to tell when you’re talking to a genuinely automated chatbot or a human, and what tricks are used to cheat a little bit to ensure that customers get good experiences. Want to incorporate a chatbot onto your company's platform? Or just want to find out a little more? Contact us and we'll get you sorted.
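The FAQ-matching behaviour described at the start of this piece — resolving "I need to get a flight ticket" to the stored question "How do I book a flight?" — can be caricatured in a few lines of Python. Real chatbot NLP engines use trained intent classifiers or sentence embeddings rather than raw word overlap, and the two-question FAQ bank and the 0.2 threshold here are invented for illustration; this is only a sketch of the idea, including the human hand-off for questions that are too far from anything in the bank.

```python
from collections import Counter
import math

# Hypothetical FAQ bank -- questions and answers invented for illustration.
FAQ = {
    "How do I book a flight?": "You can book a flight under Reservations.",
    "What is the baggage allowance?": "Each passenger may check one 23 kg bag.",
}

def bag_of_words(text):
    """Lowercase, strip question marks, and count word occurrences."""
    return Counter(text.lower().replace("?", "").split())

def cosine_similarity(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def answer(user_text, threshold=0.2):
    """Answer with the closest FAQ entry, or hand off to a human agent."""
    user_vec = bag_of_words(user_text)
    best_q, best_score = None, 0.0
    for q in FAQ:
        score = cosine_similarity(user_vec, bag_of_words(q))
        if score > best_score:
            best_q, best_score = q, score
    if best_score < threshold:
        return "Let me connect you to a human agent."
    return FAQ[best_q]
```

The hand-off branch mirrors the point made about Julie: knowing the system's limits and escalating to a human is part of the design, not a failure mode.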
Let's Not Forget the People Behind the ANN The capabilities of artificial neural networks are astonishing: AI now writes music in the style of Johann Sebastian Bach, drafts novels and scripts for short films, and illustrates comic books. But at the same time, we should not forget that there are people behind every neural network: networks are developed and trained by developers and users. Any neural network generates content that was previously created by people, or by other neural networks that were themselves trained on human-made content. Midjourney draws "new" pictures using a base of pictures made by real people, and it was taught by people how to use those pictures. ChatGPT generates text from a base of content created by people. We should not be misled: the neural network does not produce something genuinely new. It recombines the existing elements from its base and memorizes the best answers for further recombination. How much content is enough? A neural network recombines content and produces a cocktail from it. For example, suppose we have a neural network trained on 40,000 poems by the Ukrainian poet Taras Shevchenko. The first time a user asks the network for a poem, it generates one using its algorithm. If the result is of bad quality, the user marks the answer as bad. The network may produce several bad results before it produces something that reads like a normal poem, and that answer it will remember as a sample. Our Taras Shevchenko network collects both the good and the bad answers so that it can reuse the good ones in the future. This is how a neural network learns via deep learning: the bigger its base of good answers, the better the content it generates. Networks need a large amount of data to be trained on. Fortunately, the bases available to neural networks grow constantly, as there is a lot of content now. The bigger the library, the more responses a neural network can give, because more combinations become possible. 
But even so, no new words, thoughts or ideas will be added beyond what already exists. If a network has only a small number of elements in its base (say 6 instead of 40,000), it can only combine a limited number of responses, which will be almost identical apart from tiny differences. The smaller the library, the faster the network starts producing repetitive and monotonous answers, and the absence of new content leads to stereotyped output. Networks are open to internet users, so they can be trained by users from all over the world: users ask the network questions and get answers, and thus it learns. The network is trained by the results people choose: a user writes a query and picks a result. Such a query-result pair is remembered by the network as the right one, and the next time you ask the same thing, it will produce something similar to the option that was chosen. What Mistakes Does the ANN Make? Today's neural networks (Midjourney, for example) are good at content that is abstract and vague. But if we ask a network to generate a picture of a human, the picture may contain mistakes (you may notice strange renderings of fingers and ears). Networks have not been trained thoroughly enough to produce a correct image of a human: such mistakes are subtle, so during training nobody was bothered by them. Programming code produced by neural networks contains more mistakes than we find in images, and such code must always be tested by a developer. Still, a neural network can serve as a fellow second developer who provides code at your request, as GitHub Copilot does. For example, you ask it to "write a random number generator function". The network analyzes the existing code of your project, finds something in its library similar to your request, and generates the function. It may give you several options to choose from. 
So it is you who must choose, because the first option it offers may well be a mistaken one: for now, neural networks cannot generate perfect code and do not know how to. A hard task for a neural network is to give a very precise answer, such as a mathematical formula, because that requires processing an enormous amount of data. In the future: AI vs human. Today, large companies have their own developments in the sphere of AI, with both open and closed versions of their neural networks. But we can say that progress is inevitable. In the future, networks working from large databases may detect medical diagnoses better than 95% of doctors, and if the statistics bear that out, neural networks will be the ones making those decisions. If that seems unbelievable to you, consider that Teslas with autopilot are being tested right now. That is something we could not imagine 10 years ago: a car without a human driver. It is possible that in the future all drivers could be replaced by computers and networks. Networks are at an early stage of development: what we could not imagine some time ago is possible now because networks are developing very fast. People have been driving cars for about 100 years; neural networks have had only about 8 years to learn to drive one. On the other hand, people train the neural networks using traffic rules that were paid for with human deaths. We should not forget that it was people who invented the technologies capable of training neural networks. For example, NVIDIA and Tesla have virtual city systems in which the neural networks of a car's autopilot are trained to react to different situations: a child runs onto the road after a ball, or six people approach from different sides. What should the car do? Specialists train the neural network to react in different situations: the autopilot steers around the child, but suddenly there is a woman with a stroller. What will the autopilot vehicle do then? 
If it hits the woman with the stroller, a human will mark that outcome as wrong and prompt the network to choose other options. The neural network remembers the right answer and saves it in its base so that it makes the right choice next time. Once the network has been trained to react to all possible situations, it will make the right choice, and eventually it may accumulate a base that makes it far more effective than humans, perhaps 90% more. A neural network has advantages over a human: it cannot get tired and it cannot fall asleep. Taking this into account, alongside shortcomings that can still be repaired, it could eventually outperform humans, and we will see it in the statistics (for now humans are more effective, but that is only a matter of time). Airplane autopilots are trained for critical situations; however, pilots are still needed for takeoff and landing, so some of the work, for now, remains with people. Neural networks have been developed since the 1950s, but at that time there were only a few of them; now, thanks to the extreme speed at which information is produced and processed (in seconds), the number of neural networks has grown a thousandfold in the last 8-10 years. So neural networks progress very fast now and can be trained quickly: not long ago neural networks were slower than humans, and now they are faster. In the last 10 years, the production of information has exploded. For now, there is no AI in the sense of true intelligence, only neural networks. Humankind has not yet produced real AI and cannot do so in the near future. But within our generation we may see some basics of AI, and perhaps some robots. Behind every neural network there are people: people who train it, and people who created the content it uses.
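The good-answer/bad-answer feedback loop described in this article can be caricatured in a few lines of Python. To be clear, this is not how deep learning actually works — real training adjusts network weights rather than storing answers verbatim — and every name below is invented; it is only a sketch of the "remember what users rated well, replay it next time" idea the article keeps returning to.

```python
class FeedbackMemory:
    """Toy sketch of the article's feedback loop: keep user-rated answers
    per query and replay the best-rated one on repeat queries."""

    def __init__(self):
        self.memory = {}  # query -> list of (answer, rating) pairs

    def record(self, query, answer, rating):
        """A user rates an answer; the pair is remembered for this query."""
        self.memory.setdefault(query, []).append((answer, rating))

    def respond(self, query, generate):
        """Replay the best-rated remembered answer, else generate a fresh one."""
        rated = self.memory.get(query)
        if rated:
            return max(rated, key=lambda pair: pair[1])[0]
        return generate(query)
```

The point the sketch makes concrete is the article's claim about library size: with nothing remembered for a query, the system can only fall back on generation, and everything it ever replays was rated by a person first.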
Renaming Board Game Genres In my first thread, I was explaining the difference between "Eurogame" and "Ameritrash". I still cling to these terms since I still find they are useful (at least for me) in helping me sort out the games I like from the games I don't. However, there are quite a few people who felt both labels were equally stupid, so here's my proposal: why not use the terms "Story-driven", "Theme-heavy", "Conflict-driven", "Brain-burner", "Deep but conflict-light", "Solitaire", "Co-operative", etc. instead? They're much more descriptive and less misleading than "Eurogame" and "Ameritrash". On BGG, Euro and Ameritrash are frequently used, and are often presented in opposition to one another. In some social circles, preferences for one over the other can become a form of (sub)cultural identification, in the same way that people attach themselves to a particular role-playing system or genre of music. This is by no means an all-inclusive list, but when I talk to people who are into board games, describing a game in these terms is the fastest way to convey what the game is about, as opposed to using now-dated terms like "Euro" or "Ameritrash." Additionally, you can mix and match terms to better describe a game. What genre a game is in doesn't matter. How you play the game does. I like MegaMan! Ok, so what? If all games are either Euro or Ameritrash and you refuse to play Euro games, then doesn't that force you to only play Ameritrash games? Also, there are a lot of "Euro" games that are made in America. The highest prestige a game can garner is the Spiel des Jahres, which is awarded in Europe, but American-made games can win it. If someone were to just come up to me and say, "wanna play a game?", of course I would care about what kinds of games the other guy wants to play. That's how gaming groups are formed, and I certainly wouldn't want to be stuck playing some cube-pusher with a bunch of Eurogamers. As for being precise, it's true. 
Language is not precise all the time. But when you debate/argue among intelligent adults, as we do here, we ONLY use perfectly precise language. Your language is so slapdash we don't even know what you are trying to say. What is the point you are trying to make? Are you even trying to make a point? Puerto Rico, Caylus, Terra Mystica, Dominion, Settlers of Catan, and Agricola are different, but they share many common features, such as having few to no opportunities to directly attack your opponents, racing for VPs by building efficient economic engines, and emphasizing mechanics more than theme. I seriously don't see how Titan would be lumped into the "Eurogame" category anytime soon. I have also bought several games based on recommendations from people here on the forum. I generally don't care for long 4X games, but I really dig Eclipse if I can set aside the time for it. I think the time has come to describe a game by boiling it down to its mechanics rather than whether it is Euro or Ameritrash; as I said before, many really good games have originated in the US. Ameritrash is basically a term for big-box-store shelf games made by companies like Milton Bradley, Parker Brothers, and Hasbro. Have any of these companies created and sold "good" games? Absolutely they have, but they also put out what most serious board game enthusiasts refer to as bad games. I grew up playing Sorry!, Monopoly, Life, etc., and I enjoyed playing them with friends and family, but later on, I discovered that there was more to the game world than what I could find at Walmart. Since then, I play games that most people haven't heard of, because even if they walk into a hobby game store, they are still looking at stuff like Apples to Apples or the newest version of Monopoly. 
When I describe the games I play to my non-gamer friends and acquaintances, I explain that I play games that usually require more skill than luck, and I often explain why some of the mainstream games are actually bad games or non-games (like Sorry!). The terms Ameritrash and Euro are typically used by people who play the "Euro" games; people who play mainstream games like Monopoly just call them board games.
Originally Posted by Edyl Personally, I believe that the moderators for the battlegrounds threads have seriously overstepped their bounds on multiple occasions. This forum is supposed to be an area for free discussion between people who have common interests. The moderators of this forum have placed stringent restrictions that have directly combated and hindered this purpose. They have stunted the growth of free thought with totalitarian and dogmatic principles. The personal preferences and views of a few should not be enforced onto what should be a forum for free discussion. On multiple occasions I have seen moderators threaten administrative action in order to stop people from "wanking" and from being "ridiculous" for having an opposing viewpoint. However, things such as this should never happen. The subjective opinion of a few or of one person should not be enough to justify what is and what is not ridiculous, and even if one does deem something to be ridiculous, administrative action should not be considered in such instances. No one person is capable of determining with absolute certainty what is and what is not right. No one person is omnipotent enough to know that their ideas are absolutely correct. The level of certainty in this forum's moderator's actions and their willingness to use administrative action to enforce inherently subjective viewpoints is extremely abusive and limits the freedom of the forum. I've seen instances where on one page, a moderator takes a side and considers the other side to be "wanking," "trolling," and "immature" only to find on the next page another moderator arguing the other side (I've witnessed this twice already and there are likely more seeing as I'm relatively new here). I've even witnessed an instance where a moderator deemed a poster's debate to be ridiculous and decided to change the debate in the middle of the thread. 
I've even seen moderators lock sections after getting into heated 1v1 debates, after claiming that the other person is just immature and nonsensical. And honestly, that debate wasn't even as heated or even remotely ridiculous in comparison to other debates that go on. In addition to this, it is this type of stringent structure that can be detrimental to the growth of the forum. I have no statistical information, but I do know that some people dislike the authoritarian and borderline megalomaniac tendencies of some of the moderators. That attitude just drives people away. Again, I do not have any statistics on how many, but I do know that it happens. As DL said at the top of the page, there will always be "ridiculous" match-ups. There is no way to stop those. As long as you have an open and public forum, there is no good way to stop them. If a topic was banned whenever a ridiculous match-up was made under the topic, basically everything would have to be banned. Having such a sweeping moratorium just sends the message to the people of the forum that the moderators are authoritarian. It honestly kills the mood of the environment. In addition to this, Edo threads are debatable. There isn't a reason to ban them completely. As we've seen in the manga and anime, Edo opponents are overpowered, but they can be defeated. Edo threads are just like any other type of thread, where both "ridiculous" and "normal" threads can be made. I hope that this viewpoint on the entire situation is taken into consideration. The forum is supposed to be fun, and frankly this is "anti-fun". All people do with Edo Tensei threads is say they have unlimited chakra and an infinite chakra pool, and moan about how most Edos can win because they have special abilities that clearly cannot be overcome without some sort of epiphany. Most of the fights people have with mods happen because the majority of people in PoP's cult are idiotic gorillas that pull stuff out of their ass whenever they see fit. 
A single character's overpowered Jutsu cannot lift the restrictions of a few.
Page 1 of 1

Posted: 31 May 2010 14:47
How to convert UTF-8 text?
I've tried the following code without success (because setMode doesn't seem to work):
Code:
MyString = "Légende des pictos",
Str = inputStream_string::new(MyString),
I would like to convert "Lé gende des pictos" to "L gende des pictos".

Posted: 31 May 2010 15:02
Ok, I've found the solution:
Code:
MyString = string8::mapFromString("Légende des pictos"),
Str = string8::mapToString(MyString, core::codepage(codepageId::utf8)),

Posted: 31 May 2010 19:20
I am somewhat mystified: why do you have a "utf-8" string in "utf-16" format at all? Normally, it is better to do the conversion at the source (i.e. where the string comes into the program).

Posted: 1 Jun 2010 6:59
The string comes from Gildas' vp_web package, as I mentioned in the last message of the following post: http://discuss.visual-prolog.com/viewto ... light=utf8

Posted: 1 Jun 2010 9:50
This seems to access the source, so it should use the relevant code page:
Code:
getURLContentAsText : (string URL) -> string procedure (i).
getURLContentAsText(URL) = Res :-
    Bin2 = getURLContentAsBin(URL),
    Text = uncheckedConvert(string8, Bin2),
    Res = string8::mapToString(Text, core::codepage(codepageId::utf8)).

Posted: 1 Jun 2010 10:00
But are we sure that these HTML pages are ALWAYS in UTF-8?

Posted: 1 Jun 2010 10:15
No, unfortunately not. Normally, the encoding is written in a meta tag near the top of the page:
This forum wrote:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
And in fact, the text can even be in utf-16 (i.e. 16-bit Unicode), in which case you should not call string8::mapToString at all. So doing things correctly in the general case is quite complex.

Posted: 1 Jun 2010 19:54
Hi Tonton Luc & Thomas,
1. Finding the code page of a web page is not an easy task. Most of the time, there is no tag for that (or it is not reliable). 
There are some code page sniffers that try to find the code page given some n-gram statistics (ask Google), and it is possible to use them within Visual Prolog. Detecting the language is sometimes a usable hint for finding an acceptable code page (though problems arise for pages including multiple languages, of course).
2. Well, you should not use vp_web - or at least not take it too seriously. I made some libraries available to the forum's fellows thinking that they might be inspiring or give some ideas for developments. Some extensions are 'first try' implementations - and this is (especially) the case for vp_web. This started for me as a challenge ('is it possible to download a web page with Visual Prolog? let's try'). Actually, I suggest you use the API made by Jan, 'vipcurl', which is a great piece of software (again, thank you Jan!). I am using it to build web crawlers in Visual Prolog, and it is far more customizable than vp_web (which was just an experiment) and more reliable. It doesn't solve the detection of the code page, anyway.
3. I am currently rewriting many of the tools I made available for Visual Prolog 7.3. I am recrafting the extensions using the generic system introduced in 7.3 so that the solutions are much more elegant and simply reusable (or so I think and hope). Playing with VP7 is easy (or so it seems) but mastering it is another thing - some horsepower is hidden in the template and generic type mechanism. Many extensions made for VP < 7.2 will be deleted from arsaniit.com and replaced by new 7.3 versions. Some of these extensions are now deprecated (because of the support provided by the new PFC). For instance, the support of winsock2 eases the development of sockets, so I can discard the .lib included in my simple-client-server; and some of the APIs won't work on a 64-bit OS (I doubt, for instance, that it is possible to use a service compiled as 32-bit in a 64-bit OS) - so I really hope Visual Prolog's next big move will be a 64-bit version... 
Posted: 1 Jun 2010 20:18
Yes, the meta tag is one way to specify the code page on the server side. Another way is in a "header" tag of the HTTP protocol. VPcURL can give you the received header tags. I have no idea which of the two takes precedence if both are present; perhaps that is not allowed. However, that could be found out. Curl also allows for socket programming, but I have not gotten around to implementing that in VPcURL.

Posted: 1 Jun 2010 20:54
I believe that all (major) web servers read the meta tag in the HTML file and send a corresponding HTTP header, i.e. when dealing with static HTML files. CGI and ISAPI extensions (including scripting engines) are, however, themselves responsible for sending the correct HTTP headers (as I recall it).

Posted: 1 Jun 2010 21:54
You would be surprised how many web pages lack the meta tag with the code page... IE tries to guess the code page and uses MLang (see MLang.dll): http://msdn.microsoft.com/en-us/library ... 85%29.aspx see http://msdn.microsoft.com/en-us/library ... 85%29.aspx and especially DetectInputCodePage.

Posted: 2 Jun 2010 7:37
In principle, pages that don't have a meta tag should be encoded in 7-bit ASCII. Such pages can still have special characters because they can use the entity syntax &#33345; (= 艁). But I do not doubt that many pages are simply erroneous and that web browsers try to guess a lot of stuff to overcome such bugs in a gentle way.
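The detection order the posters are circling around can be sketched in Python (a hypothetical helper, not part of VPcURL or vp_web): check the HTTP Content-Type header first, then prescan the start of the document for a meta declaration, then fall back to a default. For what it's worth, the HTML specification gives the transport-layer (HTTP header) charset precedence over the in-document meta tag; real browsers additionally check for a byte-order mark and apply statistical sniffing, which this sketch omits.

```python
import re

def detect_charset(headers, body_bytes, default="iso-8859-1"):
    """Guess a page's encoding: HTTP Content-Type header first, then an
    HTML meta tag, then a default. Illustrative only -- browsers also
    check for a BOM and use statistical code-page sniffing."""
    content_type = headers.get("Content-Type", "")
    m = re.search(r"charset=([\w-]+)", content_type, re.I)
    if m:
        return m.group(1).lower()
    # Prescan the first bytes of the document for a meta charset declaration.
    head = body_bytes[:1024].decode("ascii", errors="ignore")
    m = re.search(r'charset=["\']?([\w-]+)', head, re.I)
    if m:
        return m.group(1).lower()
    return default
```

The default of iso-8859-1 matches the charset shown in the thread's example meta tag; in practice the right fallback depends on the pages you expect to crawl.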
This is a high-level article on the UAV operations. If you want more background on how I load and store the data, head here. Reports in a number of portals, including Times of Malta, show that an unmanned aerial vehicle has been deployed from Luqa airport. The timing, right ahead of the typically busy summer migration season, seems to be an indication of what the main mission probably is. Frontex has been thinking of operating a UAV for over a decade now, and it seems to have come in the form of an IAI Heron. "Drone" is a bit of a misnomer for this type of system, since in most people's heads it's associated with consumer-grade toys like the DJI Phantom. The Heron is slightly larger than most small single-engine aircraft, with slender, 16-metre-long high-aspect-ratio wings, a 430 kg fuel capacity, an engine as powerful as an average family car's, and a need for around half a kilometre of runway for takeoff and landing. From photos taken by local spotters, the Heron seems to be carrying Israel Aircraft Industries' own native M19 sensor payload, giving the aircraft the ability to track boats even in the dark or through clouds. On paper, the aircraft is almost tailor-made for border surveillance applications of the type Frontex engages in, with the ability to loiter at the periphery for close to 50 hours at 30,000 feet. IAI quotes the range as 350 kilometres when in radio communication with a ground station and 1,000 kilometres when a satellite is used to relay the signal. The question of who does the actual "operating" isn't so clear, however. This press release by Airbus seems to suggest that they set up the ground infrastructure and fly the aircraft, but the Heron has been given a standard AFM Air Wing ASXXXX registration, with the first two digits indicating the year of registry (2021) and the last two being the cumulative count of the aircraft (23rd aircraft operated by the Air Wing). 
In any case, as soon as the news broke, I began looking into ways to track and visualize how Frontex is using their latest asset. As mentioned in the methods post, all data is gathered from the OpenSky Network ADS-B logs. To get a track of all recorded flights, I simply query my SQLite database (again, head to the methods post for a breakdown of how I did that, if that's your cup of tea). giscoR is a fun library for EU state mapping, which in this case I use to get a Malta spatial object to add to my plot, before overlaying the flight tracks of the UAV.

library(DBI)
library(tidyverse)
library(giscoR)
library(lubridate)
library(scales)

WD <- "C:/Users/Charles Mercieca/Documents/RProjects/Heron Tracker/Flights"
mydb <- dbConnect(RSQLite::SQLite(), paste0(WD, "/heron-flights.sqlite"))

tracks <- dbGetQuery(mydb, 'SELECT * FROM tracks') %>%
  mutate(flight = date(as_datetime(flight)))

mt <- gisco_get_countries(resolution = "01", country = "MLT")

ggplot() +
  geom_sf(data = mt) +
  geom_path(data = tracks, aes(y = Lat, x = Lon, col = factor(flight))) +
  labs(col = "Day")

While they are a phenomenal source, the fact that OpenSky Network is not for-profit means they don't have as much coverage as, say, FlightRadar24, particularly off the North African coast. This is evident in the tracks for the 6th and 7th of May, where coverage stops close to 34.6N. In both cases, the aircraft spent most of the night flying over the southern Mediterranean close to the Libyan coast. Track-wise it was also interesting to see the operators slowly expand their safety envelope: for their first flights they launched the aircraft to the south west and recovered in the opposite direction to the north east; this area over Dingli is largely devoid of any population. But after a few flights, they seemed comfortable taking off from runway 05, over Marsa and Valletta, overflying a bit of Gozo and shooting an approach for runway 13, flying overland for most of Malta. 
As part of the ADS-B message, we also get the aircraft's barometric altitude, which shows us that the Heron hasn't been flown over 8,000 feet yet, at least while we were intercepting ADS-B logs. Its typical operating altitude is closer to around 4,000 to 5,000 feet.

ggplot(tracks, aes(x = as_datetime(time), y = Alt, col = factor(flight))) +
  geom_point() +
  facet_wrap(~flight, scales = "free_x") +
  theme_bw() +
  labs(col = "Day") +
  theme(legend.position = "none") +
  ylab("Altitude (feet above Sea Level)") +
  xlab("") +
  scale_y_continuous(label = comma)

While speed is part of the ADS-B payload, I suspect OpenSky Network truncate it to save space. Nevertheless, we can infer ground speed by calculating the distance between the current latitude/longitude coordinates and the coordinates lagged by 1 step, like this:

mutate(Lat2 = lag(Lat),
       Lon2 = lag(Lon),
       t = difftime(time, lag(time), units = "hours")) %>%
  rowwise() %>%
  mutate(d = geosphere::distHaversine(p1 = c(Lat, Lon),
                                      p2 = c(Lat2, Lon2)) * 0.000539957) %>%
  ungroup() %>%
  mutate(knots = d / as.numeric(t))

The distHaversine function returns the great circle distance between the two pairs in metres, which I then convert to nautical miles by multiplying by 0.000539957. Dividing this distance d by the fraction of hours elapsed over the two readings t gives us an inferred ground speed in nautical miles per hour, or knots.

tracks %>%
  group_by(flight) %>%
  mutate(flightTime = cumsum(replace_na(t, 0))) %>%
  ggplot(aes(x = flightTime, y = knots, col = factor(flight))) +
  geom_point() +
  ylim(c(0, 200)) +
  facet_wrap(~flight, scales = "free_x") +
  geom_smooth(method = "loess", se = F, col = "black") +
  theme_bw() +
  ylab("Inferred Ground Speed (knots)") +
  xlab("Flight Time (hours)") +
  theme(legend.position = "none")

## `geom_smooth()` using formula 'y ~ x'
## Warning: Removed 31 rows containing non-finite values (stat_smooth).
## Warning: Removed 29 rows containing missing values (geom_point). 
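For readers who don't use R, here is the same lag-and-divide idea re-sketched in Python, using the standard haversine formula and the metres-to-nautical-miles constant from the post (the function names are my own, not from any library):

```python
import math

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points in nautical miles."""
    r_earth_m = 6_371_000  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    metres = 2 * r_earth_m * math.asin(math.sqrt(a))
    return metres * 0.000539957  # metres -> nautical miles

def inferred_ground_speed(fix1, fix2):
    """fix1, fix2 are (time_seconds, lat, lon) ADS-B fixes; returns knots."""
    (t1, lat1, lon1), (t2, lat2, lon2) = fix1, fix2
    hours = (t2 - t1) / 3600
    return haversine_nm(lat1, lon1, lat2, lon2) / hours
```

As a sanity check, two fixes one degree of latitude apart (about 60 nautical miles) taken one hour apart should come out close to 60 knots, and the same chord-vs-actual-path caveat applies: the aircraft may not have flown the straight line between the two fixes.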
For the most part, this approach works well, with an estimated ground speed in the region of 70-100 knots being what you would expect. Exceptionally high values (150+ knots) are mainly due to a quirk of this method: it measures the shortest possible distance between two points, while between the two recorded time points the aircraft might have zig-zagged, circled or flown in a curve. Lastly, we can compute a few quick summary statistics. We have the aircraft on record as flying a total of 1,600+ nautical miles

tracks %>% select(d) %>% sum(na.rm = T)
## 1689.674

tracks %>% select(t) %>% sum(na.rm = T)
## 20.18278

over 20 hours. The ADS-B logs also help tell us a bit about the aircraft's origins. The mature serial number (571) hints that the aircraft was manufactured recently, with a demonstrator at a 2012 airshow having a 268 serial. But more pertinently, the transponder's ICAO 24-bit address (A11111) shows two test flights in late March 2021 from Ein Shemer airfield, about 40 km south of Haifa, which seems to be where IAI operates the Heron program from. The one-month hiatus sounds about right for disassembling the aircraft into a container, trucking it to Haifa, transporting it by sea and reassembling it. The proof of the system's utility will be in how many more lives can be saved in the southern Mediterranean this summer.
I'm having issues where around 30 minutes after bootup, programs will begin to systematically freeze one after another until nothing is responsive except for the mouse. Start menu, task manager, nothing is responsive, yet I can still move the cursor. I've tried System Restore and uninstalling a new Catalyst driver for my AMD video card to no avail.

Boot into the BIOS and sit there... make sure the power supply and CPU temps are fine. If they look fine, make sure the motherboard BIOS is up to date. Check the part # of the RAM to see if it is on the motherboard's qualified list or was tested by the RAM vendor in your motherboard. Run Hardware Monitor and CPU-Z; make sure the RAM is running at its rated speed and the motherboard is reading it right. With Hardware Monitor, watch the CPU/GPU temps and see if the motherboard power supply is holding. Try booting into Safe Mode and use msconfig to turn everything off in startup, to see if it is spyware or a virus. Try running Malwarebytes or ComboFix in Safe Mode. On power up with the new motherboard, hit the F8 key (or the F-key for the boot menu) and select the USB stick. When I made my Hiren's boot USB stick yesterday, I had to unzip the Hiren's download first, then run the special boot-creator icon: you select the Hiren's ISO and tell it to install the Hiren's files on your USB stick. It takes 10 minutes or so for the program to install all the files.

Lots of software faults reported. I would do the following. Update your ASMedia USB 3.0 host controller driver. Update your Intel chipset drivers. ***Disable all low-power transitions in power management. Set everything to high performance.*** You will want to do this first to make your system stable. I think your drives go idle and try to go to low power, and the logic in the drives is incorrect. Check for firmware updates for your SSD. I would also suggest that you are overloading your USB host controllers; it looks like you have a P67 chipset and lots of USB hard drive devices. Oh, and your Bluetooth driver was having issues too; you might want to disable it for now.

Clear all your logs and see how your system works after some of the changes. If you still have delays, I would start removing the USB drives (lots of issues with them).

I think I figured out the problem. Apparently when Crucial M4 SSDs reach 5,184 hours they begin to fail every hour or so. A firmware update is supposed to fix this problem, and so far I've gone two hours without the issue.

Cool, I did not want to mention that. My last firmware update bricked my drive and I just got the RMA back a few days ago. (I was getting 30-second delays from the drive, which caused a lot of strange problems.)
Find radius of circle in the following circle. Suppose we have two tangent lines from point $A$ to the circle. Find the radius of the circle in the following figure. I drew the two radii of the circle to the two tangent lines from point $A$, so we have two triangles with angles of $90°$, but I don't know how to use the other information from the question.

Let $c=AB$, $b=AC$ and $a=BC$, and let the circle with center $O$ be tangent to $AB$, $AC$ and $BC$ at $D$, $E$ and $F$, respectively; then $OD=OE=OF=r_a$.

$\text{area of } \triangle ABC=\text{area of } \triangle AOB + \text{area of } \triangle AOC - \text{area of } \triangle BOC$

By Heron's formula we can compute the area of $\triangle ABC = \sqrt{u(u-a)(u-b)(u-c)}= \frac{b+c-a}{2}\times r_a = r_a (u-a)$, where $a=7,b=8,c=9$ and $2u=a+b+c$:

$\sqrt{12(12-7)(12-8)(12-9)}=r_a (12-7)$

$r_a=\frac{12\sqrt5}{5}$

A reference in order to understand these formulas: page 153 here.

How can I draw this picture? In Python?

Let $O$ be the center of the circle, let $D$ and $E$ be the intersection points of the lines $AC$ and $AB$ with the circle, respectively, let $F$ be the intersection between $AO$ and $DE$, and let $P$ be the intersection between $BC$ and the circle. We have $PC = CD$ and $PB = BE$, since these are tangent segments drawn from the same point; so, since $PC + PB = 7$, then $CD + BE = 7$. But we also know that $CD + 8 = BE + 9$, so $CD = 4$, $BE = 3$ and $AD = AE = 12$. By the law of cosines in $\triangle ABC$, we obtain $7^2 = 8^2 + 9^2 - 2\cdot 8 \cdot 9 \cdot \cos{B\hat{A}C}$, thus $\cos{B\hat{A}C} = \dfrac{2}{3}$. Then, in $\triangle AED$, we have $ED^2 = 288 - 2\cdot 144 \cdot \dfrac{2}{3}$ by the law of cosines, and $ED = 4\sqrt{6}$. Applying the Pythagorean theorem in $\triangle AFD$: $\left (\dfrac{4\sqrt{6}}{2}\right)^2 + AF^2 = 144$, so $24 + AF^2 = 144$ and $AF = 2\sqrt{30}$. Now, note that $\triangle AFD \sim \triangle ADO$, so $\dfrac{12}{OD} = \dfrac{2\sqrt{30}}{2\sqrt{6}}$, and hence $OD = \dfrac{12\sqrt{5}}{5}$. Why $CD+BE=7$? 
@amirbahadory Think of $P$ as the point where $BC$ touches the circle. Then $PC = CD$ and $PB = BE$, just like $AD = AE$: the tangent segments are drawn from the same point. @amirbahadory I edited my answer to make it clearer.

Extend sides $AB$ and $AC$, and draw the bisector of $\angle BAC$. The centers of the inscribed circle and of the circle whose radius is to be found both lie on this line. Draw these circles, then draw a tangent to the bigger circle parallel to $BC$. Name its intersection with the extension of $AB$ as $D$ and with the extension of $AC$ as $E$. Triangles $ABC$ and $ADE$ are similar, so the radii of the circles are proportional to the altitudes of the triangles, and we may write

$\frac{r_2}{r_1}=\frac{h_2}{h_1},$

where $r_1$ is the incircle radius, $r_2$ is the required radius, $h_1$ is the altitude of triangle $ABC$ and $h_2$ is the altitude of triangle $ADE$. Since $DE$ is tangent to the bigger circle on the opposite side from $BC$, we have

$h_2=h_1+2r_2.$

In triangle $ABC$ the half perimeter is $p=(9+8+7)/2=12$, by Heron's formula the area is $s=12\sqrt 5$, and $h_1=2\frac{s}{BC}$. Also we have:

$r_1=\frac{s}{p}=\frac{12\sqrt5}{12}=\sqrt 5$

$\frac{r_2}{r_1}=\frac{h_2}{h_1}\Rightarrow \frac{h_1+2r_2}{h_1}=\frac{r_2}{r_1},$

which finally gives

$$r_2=\frac{r_1h_1}{h_1-2r_1}$$

With $h_1=2\cdot\frac{12\sqrt5}{7}=\frac{24\sqrt 5}{7}$, putting in the values we finally get

$$r_2=\frac{\sqrt5 \times 24\sqrt5}{7\left(\frac{24\sqrt 5}{7}-2\sqrt 5\right)}=\frac{12\sqrt 5}{5}$$
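The similar-triangles route can be sanity-checked the same way: a short Python sketch (again with my own variable names) confirms that $r_2 = r_1 h_1/(h_1 - 2r_1)$ gives the same $12\sqrt5/5$.

```python
import math

a, b, c = 7.0, 8.0, 9.0                          # BC, AC, AB
p = (a + b + c) / 2                              # half perimeter: 12
s = math.sqrt(p * (p - a) * (p - b) * (p - c))   # area by Heron: 12*sqrt(5)

r1 = s / p                    # incircle radius: sqrt(5)
h1 = 2 * s / a                # altitude from A onto BC: 24*sqrt(5)/7
r2 = r1 * h1 / (h1 - 2 * r1)  # the required radius

print(r2)                     # matches 12*sqrt(5)/5 from the other answer
```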
08-12-2002, 02:05 AM
Which is more efficient for the overall size of the database: having all the fields in one table, or creating separate tables for the various sections of the site?

One table:
content: id, users, name, age, address, guestbook, message, date, news, hit, url, etc.

Separate tables:
content: id, news
users: id, name, age, address
guestbook: id, message, date

08-12-2002, 04:39 PM
Personally, I think it is faster to have all of the fields in the same table. As long as you only select the fields you want to display, and not just SELECT *, the db should run efficiently. Making connections to various tables adds time to loading dynamic content.

Database design is just as difficult as it is important. With a good design, you can have fast access to the required data without huge updating problems (update, insertion, or deletion anomalies). Your design will depend on what data you need to insert/select/update in each connection, and on the kind of data (is it updated frequently? do you reuse a lot of data over time, or across records?). Also, if you need to be able to track the changes in your database (who made them? when?), it's in most cases best to spread the data over a few tables. With the data from your example, I don't think it will be modified frequently, so you might as well put it in one table.

08-14-2002, 01:40 AM
What raf said. You're probably better off studying relational database design first, and then worrying about efficiency. :D

08-14-2002, 10:06 PM
In relational databases, there is a correct way of designing tables: a process called "normalization". A fully normalized database has no duplicate information and will be smaller than one that isn't normalized. Often, in a web environment, a fully normalized design may not be desirable, especially when efficiency is a factor and you are joining large numbers of records. According to Microsoft, there are two different ways you can use a database: 1) Data warehousing, where you collect information and generate reports.
If this is how you are using the database, you may want to leave it unnormalized. 2) Online transaction processing, where you are constantly doing updates, inserts, deletes, etc. (like a forum :) ): here you want the design to be as normalized as possible. If you have access to the MS-SQL Server CD, see the Microsoft documentation for Database Design Considerations.

"people tend to hate me cause I never smile as I ransack their home, they want to shake my hand" -The Who, The Seeker
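To make the one-table versus normalized comparison concrete, here is a small sketch using SQLite from Python. The table and column names (users, guestbook, user_id, and the sample rows) are illustrative, not taken from the original poster's schema:

```python
import sqlite3

# In the normalized layout, each user is stored once; guestbook rows
# reference the user by id instead of repeating name/age/address.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE users (
        id      INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        age     INTEGER,
        address TEXT
    )
""")
cur.execute("""
    CREATE TABLE guestbook (
        id      INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id),
        message TEXT,
        posted  TEXT
    )
""")

cur.execute("INSERT INTO users (name, age, address) VALUES (?, ?, ?)",
            ("alice", 30, "123 Main St"))
cur.execute("INSERT INTO guestbook (user_id, message, posted) VALUES (?, ?, ?)",
            (1, "hello", "2002-08-12"))

# A join pulls the combined view back out on demand, so an address change
# is a single UPDATE on users rather than an update to every guestbook row.
row = cur.execute("""
    SELECT u.name, g.message
    FROM guestbook g JOIN users u ON u.id = g.user_id
""").fetchone()
print(row)
```

The trade-off the thread describes is visible here: the join costs a little at read time, but the duplicate-free layout avoids the update anomalies of the single-table design.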