News and events 2020 #1 Lund University Humanities Lab

At the beginning of the new semester, we would like to provide all users, new and old, with updated information regarding our guidelines and routines.

What have you been up to in the Lab? Tell us about your findings and experiments, and whether any of them made it all the way to a publication! For our Annual Report for 2019, we would like to know more about your projects, small or big, and perhaps highlight some of them in the documentation of activities we produce every year. All previous Annual Reports can be found here. Please send a short paragraph to Deputy Director Victoria Johansson.

An approved project application is a prerequisite for all users who wish to use the rooms and equipment in the Humanities Lab. Even if you are a previous user, you need to make a new application for every project. Instructions for the project application procedure can be found here. Use the navigation in the left column of the page to find information about user guides, the booking calendars, our participant pool, and server and data management. This is also where you find a questionnaire to fill out when your project is completed. Remember that, according to the application procedure, you must contact relevant staff members before making the project application.

What is a project? The projects in the Lab are of different kinds, but the rule is that the Lab manages users and activities through projects. A project can run for several years, or consist of just one single recording in the LARM studio. Thus, Lab projects can concern everything from long-term experimental data collection to using software for video editing. A project can also consist of lengthy or consecutive consultations with our experts. Also note that for a student project (at BA or MA level), the principal investigator always has to be one of the responsible supervisors.
If an ongoing project needs to be prolonged, or updated with new members etc., this is done by contacting Martina Holmgren. Questions about project management can be directed to the lab manager. The lab has studios and experimental facilities at SOL (Centre for Languages and Literature) and LUX. Registered users in the lab will have access to relevant lab facilities added to their LU cards. Make sure that you are up to date with the user guidelines and code of conduct. Information for all users in the lab regarding maintenance of facilities, equipment, ethics, etc. is available here. We offer research preparatory courses, group tutorials and written guides for much of the equipment and software available in the lab. See the webpage for more information. If you have requests for new courses, tutorials or manuals, contact pedagogisk_utvecklare@humlab.lu.se.

Activities for spring 2020: During the spring semester, the Humanities Lab offers the following three courses:
- Functional and structural brain imaging; application to Johan Mårtensson; starting date: TBA
- Programming for the Behavioral Sciences; application to Marcus Nyström; starting date: March 31
- Statistics II; application to Joost van de Weijer; starting date: TBA

See course descriptions here. The Lab will also organize the following group tutorials. To register, send a mail to the person indicated in brackets. See group tutorial descriptions here.
- Audio & Video recording in the LARM studio (Peter Roslund); ON DEMAND
- Introduction to the Lab (Maria Graziano); ON DEMAND
- Elan II (Maria Graziano); March 23, 10-12, B054
- BioPac system (Joost van de Weijer); April 1, 13-15, Lab
- Motion Capture (MoCap) system (Henrik Garde); May
- GPS and data georeferencing (Giacomo Landeschi); Feb 25, 9-12, MoCap studio
- Image-based modelling (Giacomo Landeschi); May 6, 9-12, MoCap studio
- Mining text with quantitative methods (Johan Frid); March 2, 10-12, B054
- PsychoPy (Marcus Nyström); Feb 12, 13-15, L123
- R: non-statistical uses (Joost van de Weijer); April 3, 10-12

Go to our website to find important user information and an email address for any questions you might have related to the equipment. The equipment in the storage rooms is only available to registered users. Make a note in the booking binder of which equipment you are borrowing and for how long you plan to use it. All equipment is insured, and all participants are covered by a group insurance. We communicate with our users through an email list. If you still receive emails after completion of your project and do not want to remain on the list, contact martina.holmgren@ht.lu.se.

IMPORTANT: Please note that it is your responsibility to guide your participants if the fire alarm goes off. Make sure you know how to evacuate the premises.

Policy documents for the lab: Policy documents that are relevant to users in the lab (e.g. code of conduct, research ethics, possible costs etc.) can be found here.

IMPORTANT: Please follow your participants in and out of the lab (all the way to the outer doors). This is especially important if the library is closed; if that is the case, please walk your participants through the side entrance to the lab, NOT through the library main doors.
The project manager is responsible for thanking the Humanities Lab in any publication or other scientific output, using the following formulation: “The author(s) gratefully acknowledge(s) Lund University Humanities Lab”, and for tagging any publication or output in the LUCRIS database with the Humanities Lab under the tab Infrastructure.

Personal data management: For project management purposes, we store the following information:
- names of all project members
- email addresses of all project members
- department / affiliation
- name of the project
- project duration

These items are linked to a project ID for access to space on the lab server. The data is available only to the lab administration and is used a) to provide access to the lab facilities for users, and b) for statistical purposes. The data is stored digitally during the lifetime of the project and for another 12 months after the end date of the project. The data is also archived in physical form and will be deleted according to the archiving regulations of the university.
Disable clang-tidy for clangd, but not effective

Version confirmation: [X] Confirm
Following prerequisites: [X] Confirm
Neovim version: NVIM v0.8.2
Operating system/version: macOS and Ubuntu
Terminal name/version: iTerm
$TERM environment variable: No response
Branch info: main (Default/Latest)
Fetch Preferences: SSH (use_ssh = true)
Affected language servers: clangd

How to reproduce the issue: modify the file lua/modules/configs/completion/servers/clangd.lua as below:

```lua
cmd = {
	"clangd",
	"--background-index",
	"--pch-storage=memory",
	-- You MUST set this arg ↓ to your c/cpp compiler location (if not included)!
	"--query-driver=" .. get_binary_path_list({ "clang++", "clang", "gcc", "g++" }),
	"--clang-tidy=false",
	"--all-scopes-completion",
	"--completion-style=detailed",
	"--header-insertion-decorators",
	"--header-insertion=iwyu",
},
```

Actual behavior: the diagnostics still do not disappear.
Expected behavior: No response
Support info: open a cpp file
Logs: No response
Additional information: No response

Can you send the output of `lua print(vim.inspect(vim.diagnostic.config()))`?

```lua
{
	["dim/unused"] = true,
	float = true,
	severity_sort = false,
	signs = true,
	underline = true,
	update_in_insert = false,
	virtual_text = true,
}
```

I also need to disable it.

Might also have to disable it in null-ls: https://github.com/ayamir/nvimdots/blob/ed54fd687d7347168b887b8b420d42f520f8b5d6/lua/modules/configs/completion/formatters/clang_format.lua In this case null-ls is probably getting priority here. You can see the running lsp clients with LspInfo.

What should I do?

Comment this line: https://github.com/ayamir/nvimdots/blob/ed54fd687d7347168b887b8b420d42f520f8b5d6/lua/core/settings.lua#L76

How should I specify the C++ standard of clangd in that file?

After I set it up, it didn't work.

Do the linters still report errors after you comment out clang_format? Would you mind sharing screenshots?
I would like to see the format of the diagnostics.

Open a new issue; don't go off-topic.

@macrosea You can directly remove --clang-tidy and see the result.

OK, but commenting out "--clang-tidy" or removing it has not solved the problem.

> Do the linters still report errors after you comment out clang_format? Would you mind sharing screenshots? I would like to see the format of the diagnostics.

<img width="263" alt="Pasted Graphic 3" src="https://user-images.githubusercontent.com/116857974/226242901-9f273f77-cc0c-403a-93d4-26dd27d26e53.png">

Ok, so from the diagnostics, those errors come from the lsp and not from null-ls. I don't think you need to change anything related to null-ls ATM.

Thank you very much.

You can write a .clang-tidy file under your project root and fill it with the below, which will disable all of clang-tidy's checks. For more info, refer to https://clang.llvm.org/extra/clang-tidy/

```
Checks: '-*'
```

AFAIK this is an upstream issue that might be fixed (perhaps?) in the next release. The current workaround is to create a new config.yaml in the following directory if such a file doesn't exist (this is known as a global configuration file):

- macOS: ~/Library/Preferences/clangd/
- Other *nix systems: $XDG_CONFIG_HOME/clangd/; if $XDG_CONFIG_HOME is undefined, use ~/.config/clangd/

Then add the following content to that file:

```yaml
Diagnostics:
  ClangTidy:
    Remove: ["*"]
```

How should I specify the C++ standard of clangd in that file?

Append to config.yaml:

```yaml
CompileFlags:
  Add: [-std=c++??] # or gnu++??
```

cc @rileychc @macrosea

Closed due to inactivity.
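Putting the two config.yaml fragments from the thread together, a complete global file might look like the sketch below. The c++17 value is only an illustrative choice, not something stated in the thread; substitute whichever standard your project uses.

```yaml
# Sketch of a combined global clangd config
# (~/.config/clangd/config.yaml, or the macOS path above)
Diagnostics:
  ClangTidy:
    Remove: ["*"]      # turn off every clang-tidy check
CompileFlags:
  Add: [-std=c++17]    # illustrative only; or gnu++17, c++20, ...
```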
The application works badly when you compile it

I was talking to the Flutter developers in this thread, since the problem happened in the compilation phase and not in the development phase; I assumed the problem was on their side, but thanks to the comments of another user we managed to find the fault in the responsive_sizer package and in the other one called sizer. There you can see details from the user who commented on it and how he fixed the problem; maybe this can help you deduce where the problem is in your package. Anyway, it is clear that the problem is here. This is not critical; I like your package, and if you remember, I already tried to help you previously here, so I will only mention it so that you can try to fix it. In any case, I simply generated this class based on your package, which worked well for me; this way I only use MediaQuery, but in an easier way.

sizer.dart:

```dart
import 'package:flutter/widgets.dart';

/// Gets the screen type between mobile and tablet.
enum ScreenType { mobile, tablet }

class Sizer {
  static double h(BuildContext context) {
    return MediaQuery.of(context).size.height;
  }

  static double w(BuildContext context) {
    return MediaQuery.of(context).size.width;
  }

  static Orientation orientation(BuildContext context) {
    return (h(context) > w(context))
        ? Orientation.portrait
        : Orientation.landscape;
  }

  static bool isPortrait(BuildContext context) {
    return orientation(context) == Orientation.portrait;
  }

  static bool isLandscape(BuildContext context) {
    return orientation(context) != Orientation.portrait;
  }

  static ScreenType screenType(BuildContext context) {
    return ((orientation(context) == Orientation.portrait && w(context) < 600) ||
            (orientation(context) == Orientation.landscape && h(context) < 600))
        ? ScreenType.mobile
        : ScreenType.tablet;
  }

  static bool isMobile(BuildContext context) {
    return screenType(context) == ScreenType.mobile;
  }

  static bool isTablet(BuildContext context) {
    return screenType(context) != ScreenType.mobile;
  }
}

extension SizerExt on num {
  /// Calculates the height depending on the device's screen size.
  /// Eg: 20.h -> will take 20% of the screen's height.
  double h(BuildContext context) {
    return MediaQuery.of(context).size.height * (this / 100);
  }

  /// Calculates the width depending on the device's screen size.
  /// Eg: 20.w -> will take 20% of the screen's width.
  double w(BuildContext context) {
    return MediaQuery.of(context).size.width * (this / 100);
  }

  /// Calculates the sp (Scalable Pixel) depending on the device's screen size.
  double sp(BuildContext context) {
    return this *
        ((((MediaQuery.of(context).size.height +
                        MediaQuery.of(context).size.width) +
                    (MediaQuery.of(context).devicePixelRatio *
                        MediaQuery.of(context).size.aspectRatio)) /
                3) /
            3) /
        100;
  }
}
```

I divided the .sp method by 3 because I get all the data from the device using MediaQuery.

Guys, I had the same experience with the sizer 2.0.15 package: the main screen was not working until I refreshed the app, closed all apps in the background, and sometimes optimized my phone. But I did a silly experiment based on my observation (note: I am still a beginner in the coding world): when we add a const modifier to a container or a font size while using this package, it gives a direct error. So I removed the const modifier from my return MaterialApp in main.dart, made another release bundle, and tested it. I found my app working with no issues. My code now looks like:

```dart
return Sizer(
  builder: (context, orientation, deviceType) {
    return MaterialApp(
      home: MainScreenPage(),
    );
  },
);
```

Hi @samerjallad, what you have said sounds very interesting. It is strange, but given the observed behavior, we can think that the class that uses the sizer package does receive all the values it needs, but when it tries to load them it cannot, because being constant it cannot change at execution time.
It is the only explanation I can think of (note: I am also quite a beginner). I will use it like this; it would still be nice to be able to fix it somehow and not have to worry about whether it is constant or not, but without a doubt this is a workaround. Thanks, friend.

Hi @hekhy, I added some changes and it seems to be fixed on my end. Did it fix the issue on your end? Thanks.

Hello @CoderUni, sorry for the delay; lately I have not been able to dedicate time to this because of my other job, but today I started to review it. I am writing today so that you do not close the thread yet, so that I can run the test and tell you if it is solved. Anyway, thanks for being so quick to make the changes. I will check today, and tomorrow I will confirm whether it works correctly for me.

Hello again @CoderUni. I have been testing the package today. First I tried it with my application, the one for which I created the thread, but it has continued to malfunction. Then I thought that maybe it was because of how my project is set up, so I decided to try the Flutter counter test project, and the same thing happened. I must also say that I have tested the version that is on the pub.dev page, version 3.0.4+4, and I also tried directly downloading the code from the issue11fix branch and using it locally, but neither worked in either of the projects. I don't know what the problem could be, but at the moment I'm still working with MediaQuery through an extension I created. Here is a link to my test project that I always use for these things, in which you will see the Flutter counter application that is generated when creating a project, tested both with the latest version published on pub.dev and with the files from the issue11fix branch; neither of them worked.
If you generate an apk as it is now, when you open the installed application you will see a gray screen; and if you create an apk using the package in the pub.dev version, you will see that all the elements where I set a size with the .sp method are not shown. To try it, you simply have to uncomment the import line and comment out the other one that uses the issue11fix branch locally. I hope all this can help you find the problem, or at least serves as proof of the error. Maybe it's something with my computer; I don't know if it only happens to me. Anyway, thanks for being so quick in trying to fix the errors. As I always said, it is a very good package; what a pity that it does not work well for me now. A hug, friend!

@CoderUni will you publish the fixes to the pub package?

@CoderUni we are also waiting for the fix. Do you have an ETA?

Sorry for the late reply @hekhy @Lorenzobettega and @michael-ottink. I tested the commits in the other branch again and it seems to work. The only caveat is that the child of MaterialApp should be wrapped by ResponsiveSizer, instead of wrapping MaterialApp with ResponsiveSizer as in previous versions. It should look something like this now:

```dart
return MaterialApp(
  home: ResponsiveSizer(
    builder: (context, orientation, screenType) {
      return const HomePage();
    },
  ),
);
```

I've updated the package to version ^3.0.5, which includes the recent changes :)

Thanks @CoderUni, that is a key fact, since before it was the opposite. I'll try it as soon as I can.
I have had this post in a draft for almost a month now. I had planned to include statistics around the amount of data that humans are generating (it is a lot) and how we are causing some of our own problems by having too much data at our fingertips. What I realized is that a lengthy post about information overload is, well, somewhat oxymoronic. If you would like to learn about the theory, check it out. We are absolutely generating more data than could possibly be used. This came to the forefront as I investigated my metrics storage in my Grafana Mimir instance.

I got a lot of… data

Right now, I'm collecting over 300,000 series worth of data. That means there are about 300,000 unique streams of data for which I have a data point roughly every 30 seconds. On average, it is taking up 35 GB worth of disk space per month. How many of those do I care about? Well, as of this moment, about 7. I have some alerts to monitor when applications are degraded, when I'm dropping logs, when some system temperatures get too high, and when my Ring Doorbell battery is low. Now, I continue to find helpful alerts to write, so I anticipate expanding beyond 7. However, there is almost no way that I am going to have alerts across 300,000 series: I simply do not care about some of this data. And yet, I am storing it, to the tune of about 35 GB every month.

What to do?

For my home lab, the answer is relatively easy: I do not care about data older than 3 months, so I can set up retention rules and clean some of this up. But in business, retention rules become a question of legal and contractual obligations. In other words, in business, not only are we generating a ton of data, but we can be penalized for not having the data that we generated, or even for not generating the appropriate data, such as audit histories. It is very much a downward spiral: the more we generate, the more we must store, which leads to larger and larger data stores.

Where do we go from here?
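As a quick sanity check on those figures, here is a back-of-envelope calculation, assuming a 30-day month and exactly one sample per series every 30 seconds:

```python
# Rough check: 300,000 series, one sample each every 30 s, 35 GB/month.
series = 300_000
seconds_per_month = 30 * 24 * 3600               # 30-day month
samples_per_month = series * seconds_per_month // 30
bytes_per_sample = 35e9 / samples_per_month

print(f"{samples_per_month:,} samples/month")    # 25,920,000,000
print(f"~{bytes_per_sample:.2f} bytes/sample")   # ~1.35
```

About 1.35 bytes per stored sample lands in the 1–2 bytes-per-sample range commonly cited for compressed time-series databases, so the 35 GB figure is plausible.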
We are overwhelming ourselves with data, and it is arguably causing problems across business, government, and general interpersonal culture. The problem is not getting any better, and there really is no clear solution. All we can do is try to be smart data consumers. So before you take that random Facebook ad as fact, maybe do a little more digging to corroborate it. In an age where anyone can be a journalist, everyone has to be a journalist.
Brother HL 4040

Brother HL4040: A Brief Overview

The Brother HL4040 is a color printer with speeds of 21 ppm (pages per minute). This printer is quickly becoming common in homes and businesses. The Brother HL4040 series starts at roughly $400 and is a great match for small businesses that need large quantities of printing done. However, toner for the Brother HL4040 is quite expensive and will quickly overshadow the cost of the printer in the long run. On average, it will run you nearly $200 to replace the entire set of TN110 or TN115 toner cartridges!

Brother HL4040 Series Printers

The Brother HL4040 comes in three models:
- Brother HL-4040
- Brother HL-4040CN
- Brother HL-4040CDN

The top tier of this series, the HL-4040CDN, allows for duplex AND networking capabilities, but otherwise most of the printers in this series are fairly similar.

Brother HL-4040 Toner Cartridges

The Brother HL-4040 series printers use two types of cartridges:
- TN110 (Standard-Yield)
- TN115 (High-Yield)

If you opt to use the High-Yield TN115 toner cartridges, you will be able to print nearly 3,500 additional pages. Smart business owners should keep this in mind next time they head down to Office Depot to buy new toner cartridges. So if you have a Brother HL-4040 printer, you already know how much money it costs to keep the darn thing running. Luckily, the aftermarket will take care of you. You can find remanufactured toner cartridges for the HL-4040 at nearly 70% of the OEM price. A lot of people scoff at the aftermarket toner industry, but it's come a long way since the simple refill kits of yesteryear. Large companies spend millions of dollars reverse-engineering toner to its exact formulation in the hope of selling to the larger market. If you're looking to save a quick buck, you should definitely check out the aftermarket toner for the Brother HL-4040.
But if you REALLY want to save some money, check out the next section...

Brother HL-4040 Toner Refill Kits

To save the most money on your TN110 or TN115 toner cartridges, you'll want to go with a toner refill kit. A toner refill kit is exactly what it sounds like: a set of tools (and toner) which allows you to refill your own toner cartridges. I've done this a few times myself and I must say, at times it can be a bit messy, but once you know how to do it, it becomes second nature. A refill kit generally contains:
- A hole-making tool

The HL-4040 uses a set of gears to regulate toner levels; once it hits zero, you'll need to reset these gears by taking the gear box apart. It's not all that hard, however; all it takes is realigning a few gears and then you're set. It's a really great DIY project - plus you save money!

Brother HL-4040 Printer: In Action!

Brother HL-4040: Final Thoughts

The office I work at currently uses the Brother HL-4040 as one of the main printers throughout the office. Although we overload it quite often with massive print jobs, it still holds up really well. The only problem with the HL-4040 is that Brother still needs to do a lot of work to catch up to the quality that HP can deliver in color machines. But, minute disputes about quality aside, I can say that the HL-4040 is a really nice machine. It's much better than the old days of printing at 1 ppm on an old Epson printer - back then you had to tear off the side thingies! Hope this overview helped :)
There are many ways of devising a workflow for development, and they are usually judged on their efficacy in resolving bottlenecks. Naturally, this process is sensitive to what is being developed, and this note outlines one such workflow, which I use when developing with Grav. Certainly, any workflow can benefit from version control to maintain a stream of backups and an easy way of undoing mistaken steps. Version control has been around for years, popularised through Git, Subversion, and Mercurial. In more recent years development environments have also become popular to simplify workflows, especially applying changes locally, sharing them for testing, and finally deploying them to a live server. The workflow suggested is general enough to apply to the organization of pages, themes, and plugins between environments, but could also easily be ported to a different file-based CMS. It implements version control, various environments, and a semi-automatic way of deploying changes between environments. There are various philosophies on how many environments should be used for going from development to production, but common to many methodologies are a local one for development, an external one for testing, and finally a live one for production. Simply put, we want to conduct development and initial testing locally – on our own devices – before anything is put through extensive tests. When we reach a goal or milestone in this development, we want others to test it for errors and inconsistencies – thus we “stage” it in an environment that operates equivalently to the live server. These goals or milestones should, in my opinion, hold some level of significance for reaching the final goal of the project. More pragmatically, for developing with Grav, it entails a feature or a stage of development which warrants more extensive testing than is usually done in local development. 
For Grav, my basic workflow looks like this:

- Testing: Develop templates and styling (on a non-cached Grav installation), using mock pages and content.
- Pages: Format and spellcheck actual pages, generate responsive images (non-destructively, through Gulp).
- Staging: Test the site with actual pages on current templates and styles (cached Grav).
- Production: Live, optimized site with actual pages.

And this works well enough, but we also want the benefit of version control to keep close track of development, so unexpected errors on Live can be undone at a moment’s notice. I do this by pushing template and style files from Testing, as well as Pages, to Staging; when they check out, they are pushed to BitBucket, which automatically deploys to the Live server.

Local Environment #

First of all, I have set up my local environment to accommodate Grav’s Multisite Setup. This is for one single, important reason: I test the site with both mockup (placeholder text and images) and actual content. The first is especially important in theme development, to account for varied pages and types of content. In comparison to a regular Grav installation, this requires very little. We only want a few things: separated pages, potentially separated configurations, and separate caches. In local development I do not generally have cache enabled, and configurations are mostly shared. We achieve this with a setup.php file in the root of Grav. In grav/user/sites/ we need two folders: grav.dev and test.grav.dev. I have these set up in the VirtualHost of my server so that grav.dev uses the mockup content and test.grav.dev the actual content. The config folder holds each site’s system.yaml, which remains mostly untouched. However, I use a shared config/plugins folder for the plugin configurations.
This is easily achieved with a junction:

mklink /J grav/user/sites/test.grav.dev/config/plugins grav/user/sites/grav.dev/config/plugins

This tells Windows that I want a test.grav.dev/config/plugins folder which virtually mirrors the other folder; any changes made in the former are directly available in the latter.

Building Pages from Source #

At the root of my local setup, the pages folder is built from a Source folder, which holds all pages and their images. I do this because I use responsive images on the sites I create (load smaller images on smaller screens, etc.). This is done through Gulp, via a gulpfile.js. The structure of this root folder (www) looks like:

- node_modules/: Holds NPM packages
- pages/: Holds the generated pages, temporarily
- responsive/: Temporarily holds responsive images
- Source/: Holds the pages, structured hierarchically as normal pages in Grav
- output.log: (Optional) Log from the default Gulp task
- gulpfile.js: The Gulp tasks
- package.json: The NPM packages to use

Now, when the default Gulp task (just gulp in the www directory) is run, a few things happen:

- The current pages are deleted.
- Pages are copied from Source/.
- Responsive images are generated (from pages/) and placed in responsive/.
- Responsive images are minimized and copied back into pages/.
- The responsive-folder is cleaned.

During this process, the console will be quite busy spewing out details of what it is doing. Afterwards, running gulp move will delete the current pages in grav/user/sites/test.grav.dev/pages and move the updated ones there. The optional log is generated by running the Gulp task as such: gulp 2>&1 | tee output.log. I keep the www/pages folder intact so that changes to pages during testing can be quickly undone by just running gulp move again. With this setup, I can regenerate files on the fly, and test the site with both mockup and actual content. Since the two sites share everything except pages, changes to themes or plugins do not need to be manually updated.
The test.grav.dev site is essentially the Testing Environment, and holds everything that will later be pushed to the Staging Environment. To deploy Grav you essentially just need to move the user folder to another server, but this is where we want to add a layer of version control. In my case I simply commit the relevant files directly to Git when a goal or milestone is reached, and then push this to a remote repository, for example through SourceTree or the command line. Managing .gitignore files and avoiding superfluous files or conflicts down the line can be a hassle, so a good .gitignore file is handy.

Staging Environment #

As mentioned, the remote repository automatically deploys to the Staging server. This server is actually the same server as the Live one; the files are just pushed elsewhere than the Live domain. Thus, extensive testing – specifically of optimizations to assets like CSS, images, and JS – is done here. Since Staging and Live are both on the same server, there is complete consistency in how the site operates. At the end of a deployment the cache is, of course, cleared. I typically use .htaccess rules to keep this environment available only to testers.

Live Environment #

As the Live Environment is a virtual replica of the Staging Environment, it only needs to be updated by another deployment. The only difference is that this deployment is manual rather than automatic. The key here is that test reports are reviewed, bugs resolved, and changes made before the Live site is updated. Inconsistencies are not critical in Staging, as it operates more as a place for continuous testing and rapid updates, whereas on Live they are inadmissible. The final workflow in regards to Grav looks like this: The workflow described advocates three environments: Testing, Staging, and Live.
Implicitly, it also favors a centralized location for code – the remote repository which the Staging and Live Environments mirror. The benefit of this is that code generally improves from extensive testing and code review from other users, but more importantly that code is forcefully kept up-to-date rather than developed in a decentralized fashion – leading to fewer failed “builds”. Also, the server used for Staging perfectly represents the conditions of Live so migration is painless as long as the master branch is kept clean. Further, the Testing Environment allows for a variety of content so that the most harrowing of errors in style, structure or functionality are ironed out before this extensive testing. Finally, the focus on limiting manual tasks such as copying files speeds up the development process and allows semi-automatic deployments with integrated version control. The process could potentially benefit from continuous integration and deployment – a system wherein changes were not pushed if SASS, JS, PHP, or Twig returned a fatal error – but most of those (apart from PHP) are usually discovered in local development. Do you have any questions about this approach, or feedback on it? Contact me.
OPCFW_CODE
DELETE query in Postgres hangs indefinitely Currently I am trying to delete a row (or rows) in a particular database table, api_user. Yet the delete hangs for a seemingly infinite time (it has currently been running for 1800 seconds as I've been looking for answers). The row in question had foreign key dependents, but all of those dependents were already deleted; that's been verified. I'm running all of my database introspection through Postico (just another database GUI client). So when I cancel the query I receive this error message: ERROR: canceling statement due to user request CONTEXT: SQL statement "SELECT 1 FROM ONLY "public"."api_event" x WHERE $1::pg_catalog.text OPERATOR(pg_catalog.= ) "user_id"::pg_catalog.text FOR KEY SHARE OF x" There are indexes that reference the rows in this table; api_event is one table that had indexes on, and foreign keys to, this table. All of the dependent rows from api_event were deleted. I've checked pg_stat_activity for any queries that could be running concurrently, to no avail, and so I'm at a point where I'm not sure what the next question I should be asking is. Any direction would be great! Running EXPLAIN DELETE FROM api_user WHERE organization_id = '<replaced value>'; returns this to me: Delete on api_user (cost=54.94..2903.50 rows=1874 width=6) -> Bitmap Heap Scan on api_user (cost=54.94..2903.50 rows=1874 width=6) Recheck Cond: ((organization_id)::text = '<replaced value>'::text) -> Bitmap Index Scan on api_user_organization_id (cost=0.00..54.47 rows=1874 width=0) Index Cond: ((organization_id)::text = '<replaced value>'::text) Lock Monitoring Did you check if it's waiting for a lock? – a_horse_with_no_name As per the request, I searched the locks on my database.
I used this query: select t.relname, l.locktype, page, virtualtransaction, pid, mode, granted from pg_locks l, pg_stat_all_tables t where l.relation=t.relid order by relation asc; The first result, taken while my DELETE was not running, contained 3 rows of locks, from pg_class, pg_index, and pg_namespace. The second result, taken while my DELETE was running, contained 21 rows of locks, all of them with a relname from tables that had either a foreign key to, or an index involving, the previously deleted rows. Road to Resolution Through more questions and research, an interesting tidbit arose: foreign key columns on child tables do not automatically have indexes. After composing a query to see which foreign keys don't have indexes, I noted that api_event did not have an index on its api_user foreign key. Now, api_event is a humongous table. Creating an index on api_event solved the issue. CREATE INDEX CONCURRENTLY user_id_to_events ON api_event(user_id); How large is this table? What does EXPLAIN show you? Did you check if it's waiting for a lock? @TimBiegeleisen I edited the original post! @a_horse_with_no_name I added in the results from checking the locks! Are you doing this at production level, with zero tolerance for downtime? @DariusCalliet My suggestion for you: block access in pg_hba.conf, reload the database, restart the database, and delete with cascade. @AdrianHartanto I have an env that can support some downtime that I am running my tests on. But my production env cannot have downtime, or at the very least needs a strict understanding of the downtime and how it leads directly to the resolution. @DariusCalliet The reason the delete hangs is that the application is still accessing the table you want to delete from. Make sure no one is accessing that table, so PostgreSQL can perform the delete gracefully. Creating an index is actually useful for a slow delete query.
When you run a DELETE query with "explain analyze delete from xx" and cancel it because it is too slow, it will show: ERROR: canceling statement due to user request CONTEXT: SQL statement "DELETE FROM ONLY "public"."AAAA" WHERE $1 OPERATOR(pg_catalog.=) "BBBB"" Running CREATE INDEX CONCURRENTLY NAME_OF_INDEX ON AAAA(BBBB) will fix this problem. It's worth mentioning that, for me, the issue was a missing index in a table referencing the target table. This obviously makes sense with FK constraints. Just pay attention to the context output. I also didn't need EXPLAIN ANALYZE to get the output. Maybe because this is a dev env - not sure. Not a very useful answer, but it might help someone. I experienced the same issue after doing excessive deletions on the table. I was experimenting with different delete queries and trying to find out which one was the fastest. I was also cancelling queries before they finished. I could not find the underlying reason, but here is what fixed the problem for me: my DB was hosted on Google Cloud with backup support, so I restored a backup from a few days ago, and the problem was gone. I'm not sure (and not able to comment) but I think you are experiencing a heavy re-indexing or a vacuum after your deletion.
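For reference, the kind of catalog query used in "Road to Resolution" above can be sketched like this — a hedged simplification that only considers single-column foreign keys, not the exact query that was used:

```sql
-- List single-column foreign keys whose referencing column has no index
-- that leads with that column (simplified sketch).
SELECT c.conrelid::regclass AS referencing_table,
       a.attname            AS fk_column
FROM pg_constraint c
JOIN pg_attribute a
  ON a.attrelid = c.conrelid
 AND a.attnum   = ANY (c.conkey)
WHERE c.contype = 'f'
  AND NOT EXISTS (
        SELECT 1
        FROM pg_index i
        WHERE i.indrelid = c.conrelid
          AND i.indkey[0] = a.attnum   -- index leads with the FK column
      );
```

Any table it reports is a candidate for the slow FOR KEY SHARE probe seen in the error context.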
STACK_EXCHANGE
Additive functors preserve split exact sequences How can I prove that additive functors preserve split exact sequences? Prove that a split exact sequence $0 \to A \to B \to C \to 0$ is isomorphic to the obvious direct sum sequence $0 \to A \to A \oplus C \to C \to 0$. (Prove also that a functor is additive if and only if it preserves 0 and binary direct sums.) @ZhenLin Please consider converting your comment (and the comment by t.b.) into a (hint only) answer, so that this question gets removed from the unanswered tab. If you do so, it is helpful to post it to this chat room to make people aware of it (and attract some upvotes). For further reading upon the issue of too many unanswered questions, see here, here or here. I am assuming we are dealing with a functor $F: R\mathrm{Mod} \to S\mathrm{Mod}$ where $R$ and $S$ are commutative rings, although the result may hold in more general settings that I am not sufficiently familiar with. A split exact sequence $0 \to A \xrightarrow{i} B \xrightarrow{p} C \to 0$ can be characterized by 4 functions and 5 equations: \begin{align} i &: A \to B, \\ q &: B \to A, \\ j &: C \to B, \\ p &: B \to C, \\ q \circ i &= 1_A, \\ p \circ j &= 1_C, \\ p \circ i &= 0, \\ q \circ j &= 0, \\ i \circ q + j \circ p &= 1_B. \end{align} That is, the given sequence is split exact if and only if there are $j$ and $q$ so that $i, j, p, q$ satisfy the above equations. Now any functor preserves composition and identity, and additive functors also preserve addition and the 0 morphism, so the entire characterization of the split exact sequence is preserved, and hence its image is split exact. Relevant: the data $i,q,j,p$ along with the relations you've stated is also precisely what characterizes a direct sum in a preadditive category: https://stacks.math.columbia.edu/tag/0103 Let $F:\mathcal{A}\to \mathcal{B}$ be an additive functor between abelian categories. 
A chain complex is split exact if and only if the identity map is null-homotopic (Weibel Exercise 1.4.3). Let $1_\mathcal{A}$ be the identity mapping from the chain complex $0\to A \stackrel{d}{\to} B \stackrel{d}{\to} C \to 0$ to itself, and $1_\mathcal{B}$ the identity mapping on $0\to F(A) \stackrel{F(d)}{\to} F(B) \stackrel{F(d)}{\to} F(C) \to 0$. Since the first sequence is split exact, we can find a chain contraction $s$ such that $1_\mathcal{A}=ds+sd$. Applying $F$ to both sides, we obtain $F(1_\mathcal{A})= 1_\mathcal{B} = F(d)F(s)+F(s)F(d)$. So the target sequence must also be split exact.
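To spell out the last step of the first answer: applying $F$ to the five equations, and using that an additive functor preserves composition, identities, sums, and zero morphisms, gives
\begin{align}
F(q) \circ F(i) &= 1_{F(A)}, \\
F(p) \circ F(j) &= 1_{F(C)}, \\
F(p) \circ F(i) &= 0, \\
F(q) \circ F(j) &= 0, \\
F(i) \circ F(q) + F(j) \circ F(p) &= 1_{F(B)},
\end{align}
so $0 \to F(A) \xrightarrow{F(i)} F(B) \xrightarrow{F(p)} F(C) \to 0$ satisfies the same characterization and is therefore split exact.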
STACK_EXCHANGE
My problem using Google protocol buffers has two parts: one is about compiler options, the other is about cross compiling. The build machine is a Power6, 64-bit; the host machine is a PowerPC450, 32-bit; GCC 4.1.2. First problem, about compiler options: I'm trying to install Google protocol buffers on a PowerPC machine, which requires cross compiling. Firstly I tried to install on the build machine directly, with options to tell configure which host to build for: ./configure --prefix=/home/where_to_install --host=powerpc-bgp-linux Then make, make check, make install; everything's fine. I figured that since I've specified the host machine, that should include enough information for the compiler. When I try to compile my code with /bgsys/drivers/ppcfloor/gnu-linux/powerpc-bgp-linux/bin/g++ -g -zmuldefs -Xlinker -I/home/somewhere_installed/include $sourceFile -o $fileName -L/home/somewhere_installed/lib -lz -lstdc++ -lrt -lpthread -lm -lc -lprotobuf -lprotoc msg.pb.cc I get these errors: g++: unrecognized option '-zmuldefs' In file included from zht_util.h:20, from hash-phm.cpp:9: meta.pb.h:9:42: error: google/protobuf/stubs/common.h: No such file or directory and a lot of errors about variables in common.h not being found. I know it's because the compiler doesn't recognize the option -zmuldefs and so can't find the file common.h, which does exist. I Googled it and didn't get any clear idea. How can I make the compiler accept the option, or find the file? Or is there a problem in my compile command? The second problem is about cross compiling. The readme file of Google protocol buffers is not clear about how exactly to cross compile. It says I must use --with-protoc=protoc to tell configure which protoc to use; OK, but before that I have to install a copy for the host machine. I first used this command to install a copy for the host: ./configure --prefix=/home/where_to_install/built --host=powerpc-bgp-linux then make, make install.
Then I cross compiled with the command below, which uses the same compiler the host machine uses: ./configure --prefix=/home/tonglin/Installed/built_3 CC=/bgsys/drivers/ppcfloor/gnu-linux/bin/powerpc-bgp-linux-gcc CXX=/bgsys/drivers/ppcfloor/gnu-linux/bin/powerpc-bgp-linux-g++ --host=powerpc-bgp-linux --with-protoc=/home/where_already_Installed/built/bin/protoc Then make, and got an error: a lot of compiling info ...blabla..... collect2: ld returned 1 exit status make: *** [protoc] Error 1 make: Leaving directory `/gpfs/home/somewere/src/protobuf-2.4.1/src' make: *** [all] Error 2 make: Leaving directory `/gpfs/home/somewere/src/protobuf-2.4.1/src' make: *** [all-recursive] Error 1 make: Leaving directory `/gpfs/home/tonglin/Installed/src/protobuf-2.4.1' make: *** [all] Error 2 Where did I go wrong? I also tried the specified compiler in the first install (for the host), and it got the same error as the second install above. Once I succeed in finishing the installations I will have two installs; which should I use? Can anyone give me an example of how exactly to cross compile Google protocol buffers? I didn't find any detailed example about this. Thanks a lot,
OPCFW_CODE
Basic Instructions Basic instructions or pointer tips are needed to help those just getting started understand what each variable does in order to create desired effects. As an add-on to this, a basic getting started tutorial that goes through creating several patterns would be helpful. The faster this program is to learn, the more awesome designs people will create and share. Background: I just finished putting together my machine and really wanted to see it working, but got flustered in trying to quickly create a pattern that would look interesting and fill the entire area. So, I started this (I actually wrote a bunch of it up) and I have a few problems with it: I would rather the interface be intuitive, and spend energy on making the flow of the website better. I am worried about losing users that don't understand it on the first try, but I also worry about losing users with incomplete or confusing documentation. I am not convinced I can write documentation that will make it better. It's a very visual interface, and describing it is actually pretty hard to do. It will also only be in English, and it will need editing every time there is a change to the software. Basically, it's hard to write helpful documentation. I don't like the idea of tooltips, mostly because they don't work with touch interfaces. I don't like '?' boxes because the effort would be better spent making it intuitive. So some things I could do that make those problems smaller: a) I could make a short video, convert it to a gif and make it easy to see. It wouldn't describe what the features did, but it would help someone make one pattern that worked, and they could play with it from there. Things like the fact that the shapes are buttons would be more obvious because you could see the interaction in the video. A second or third video might make sense to explain a particularly confusing concept. b) I could use some feedback on which parts are the most confusing. 
When I first made Sandify, I called Grow "Scale" and Spin "Rotate" and I got the very helpful feedback that Grow and Spin were much simpler terms. c) Maybe I should split some of the features into yet another tab, to keep the landing page as simple as possible. I think the Track feature, the Offsets, and maybe even the Circle are more difficult to use. For the most part, I want people to pick it up the way they might tap the keys when first seeing a piano. Something like "Grow Step" is purposefully ambiguous, because you can push the arrows to increase or decrease it and see what it does. I'm sure this is unachievable for 100% of the users, but I think striving for that will make it better for everyone. I definitely, absolutely appreciate the feedback, and I hope you can help me smooth this out so the next jexoteric can get rolling faster. Would it help if:
- The starting size and offsets were below the shapes?
- The polygon was selected by default?
- The Grow was selected by default?
What browser and OS are you using (or were using the first time)? I think some combinations don't show the arrows in the input boxes. I use Firefox on Win10 by default. From the few times I've looked, everything appeared the same in Chrome on Win10. Where I struggled was in getting the spiraling out trail, which it looks like you made the default settings to fix that when I looked this morning. That helps tremendously as a starting point. I think what would be helpful is maybe 3 quick videos, or purely visual documentation of screenshots that take the users through creating something like 3-5 designs. One could be a spiral wipe, which is just a circle shape running in a polygon from the center out, and the few others could build on that, maybe one focusing on manipulating the track (as what the track looks like is incredibly different as .1-.9 compared to integers like 5 or 7, which BTW is that the intended behavior?), and another focusing on growing and rotating a square, etc.
I don't think it needs to be complicated, and I agree that keeping things quick and light is ideal as the project is still in rapid development, but something simple is needed to get people rolling. as what the track looks like is incredibly different as .1-.9 compared to integers like 5 or 7, which BTW is that the intended behavior? The intention is to not use 5 or 7. But it's an instrument. However you play it is fine with me. Would it make more sense if I changed it so a 0.2 would show as a 2 or a 20 in the UI? Would it feel more strange to put in a 500 and expect it to act similar to 20? Where I struggled was in getting the spiraling out trail, which it looks like you made the default settings to fix that when I looked this morning. That helps tremendously as a starting point. I'm glad that helped. I use Firefox on Win10 by default. From the few times I've looked, everything appeared the same in Chrome on Win10. The main reason I asked is that I use the arrows to change the values by steps, and I don't usually type in values. I know some configurations don't have the arrows, which might make it harder to experiment with. If you are just typing in 5, 10, 15, that often doesn't illustrate the effect like 5.1, 5.2, 5.3, etc. Do the entry boxes have arrows for changing the values? I'll find some time to make one or two videos. If you felt generous, would you want to make the videos? No sound needed, just a screen capture of the motions, showing the preview window while clicking on the options. I would convert it to a .gif and put it somewhere easy to find. We created a wiki, which is where we will document things that are a bit non-obvious, but still strive for the "pick it up and play with it" interface. https://github.com/jeffeb3/sandify/wiki
GITHUB_ARCHIVE
I have been immersed in the creation of APIs.json for many of the top APIs out there, including Twilio and Stripe. These APIs are held up as examples for how we should be doing APIs. I wanted to understand this more, so I spent a couple of weeks exploring the APIs.json and OpenAPIs for 49 Stripe and 32 Twilio APIs. There is no better way to get to know an API provider than crafting an APIs.json for their operations, and there is no better way to get to know their individual APIs than doing the work to refine, polish, and augment their OpenAPI. Luckily Stripe and Twilio maintain their own OpenAPIs, but as I’ve seen across other providers, they go about it in different ways, leaving some work to do before you can rate each API provider using a common set of rules. First, I created APIs.json indexes for both Stripe and Twilio. These index the general details for the entity behind each suite of APIs, but also the common properties like signup, blogs, and terms of service in place to support all APIs. Then I got to work creating APIs.json entries for each individual API. I didn’t get them all, but profiling 49 Stripe and 32 Twilio APIs takes time. It is worth it, though. Stripe opts to provide one massive OpenAPI for all their APIs, but thankfully Twilio has already broken their APIs into very micro OpenAPIs. I like this. I ended up writing a custom tool which does this for me, so that I can “explode” Stripe’s API into micro OpenAPIs. Once you have a name, description, tags, docs, and OpenAPI for each individual API, you really begin to see the potential of APIs.json and OpenAPI together. I consider these rules operational-level rules. They are different from the Spectral rules I use to lint and rate the surface area of APIs using their OpenAPI. My intent is to zoom out from each API and look at the operations around them, identifying the common building blocks and then developing rules to automate the profiling of other API providers.
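The "explode" step can be sketched in a few lines — a simplified stand-in for the custom tool, not its actual code; it splits one OpenAPI document into per-tag micro specs:

```python
def explode_by_tag(spec):
    """Split one OpenAPI document into per-tag micro specs (illustrative)."""
    micro = {}
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            for tag in op.get("tags", ["untagged"]):
                # Start a micro spec for this tag on first sight
                m = micro.setdefault(tag, {
                    "openapi": spec.get("openapi", "3.0.0"),
                    "info": {"title": tag, "version": "1.0.0"},
                    "paths": {},
                })
                m["paths"].setdefault(path, {})[method] = op
    return micro

# Toy spec with two tagged resources (illustrative, not Stripe's real spec)
big = {"openapi": "3.0.3", "paths": {
    "/charges": {"get": {"tags": ["charges"]}, "post": {"tags": ["charges"]}},
    "/refunds": {"get": {"tags": ["refunds"]}}}}
print(sorted(explode_by_tag(big)))  # ['charges', 'refunds']
```

Each micro spec can then get its own APIs.json entry with a name, description, tags, and docs link.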
These rules will help me scale the profiling of API providers after I produce an APIs.json for them, and provide me the rules, ratings, and points I am using in the APIs.io API ratings engine. Due to the format of APIs.json it is pretty straightforward to create Spectral rules for other properties, with them looking something like this:
description: API Properties Documentation
message: There is a documentation property.
- field: type
- field: url
I am excited about expanding governance beyond just the design of APIs using Spectral and OpenAPI, by applying Spectral to AsyncAPI and GraphQL, but I am even more excited by expanding governance beyond the APIs, as this is where most of the issues actually lie. The issues we are uncovering in the design of our APIs are often symptoms of operational-level problems. I want to begin developing a vocabulary for talking about these problems. I have spent years showcasing the common building blocks of our API operations using APIs.json. These rules are allowing me to scale how I profile APIs and establish references for the common building blocks of API operations across leading API providers and the industries they operate in. Next, I would like to find a way to do the same for common internal building blocks; the trick is, how do you map this much more obfuscated API landscape?
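Filled out into a complete Spectral ruleset, the fragment quoted earlier might look like this — a hypothetical sketch where the rule name, severity, JSONPath, and functions are my guesses; only the description, message, and field names come from the fragment:

```yaml
rules:
  api-properties-documentation:
    description: API Properties Documentation
    message: There is a documentation property.
    severity: warn
    given: $.apis[*].properties[?(@.type == "Documentation")]
    then:
      - field: type
        function: truthy
      - field: url
        function: truthy
```

The same shape (a `given` path into the APIs.json document plus `then` checks) would repeat for each operational property being rated.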
OPCFW_CODE
/** Keeps track of all Projectiles
 * Assumptions: None
 * Dependencies: Image files exist
 */
import java.util.ArrayList;
import javafx.scene.Group;
import javafx.scene.image.Image;
import javafx.scene.shape.Shape;

public class ProjectileList {
    private static final String PROJECTILE_IMG_FILE_NAME = "paper_crumple.jpg";

    private Group projectile_group;
    private ArrayList<Projectile> projectiles;
    private int num_projectiles;
    private Image projectile_img;

    // initialize ProjectileList with array list and projectile image
    public ProjectileList() {
        projectile_group = new Group();
        projectiles = new ArrayList<Projectile>();
        num_projectiles = 0;
        projectile_img = getImage(PROJECTILE_IMG_FILE_NAME);
    }

    public Group getProjectileListGroup() {
        return projectile_group;
    }

    public ArrayList<Projectile> getProjectiles() {
        return projectiles;
    }

    public int getNumProjectiles() {
        return num_projectiles;
    }

    // create projectile based on angle, add to projectile list
    public void fireProjectile(double angle, double manager_x_pos, double manager_y_pos) {
        Projectile p = new Projectile(angle, manager_x_pos, manager_y_pos, projectile_img);
        projectiles.add(p);
        num_projectiles++;
        projectile_group.getChildren().add(p.getProjectileCircle());
    }

    // removes all projectiles from list and Group
    public void clearProjectiles() {
        while (!projectiles.isEmpty()) {
            removeProjectile(projectiles.get(0));
        }
    }

    // move projectiles based on elapsed time,
    // remove them if they hit edge of play screen
    public void updateProjectiles(double elapsedTime) {
        int initial_count = projectiles.size();
        int currentIndex = 0;
        for (int i = 0; i < initial_count; i++) {
            Projectile p = projectiles.get(currentIndex);
            boolean atEdge = p.updatePos(elapsedTime);
            if (atEdge) {
                removeProjectile(p);
            } else {
                currentIndex++;
            }
        }
    }

    // for all employees, check if one collides with projectile(s)
    public void checkCollisions(EmployeeList employee_list)
    {
        ArrayList<Employee> employees = employee_list.getEmployees();
        for (int i = 0; i < employees.size(); i++) {
            Employee e = employees.get(i);
            // iterate over a snapshot so removal doesn't shift the indices mid-loop
            for (Projectile p : new ArrayList<Projectile>(projectiles)) {
                Shape intersect = Shape.intersect(e.getHitBox(), p.getProjectileCircle());
                // Shape.intersect yields an empty shape (bounds width -1) when there is no overlap
                if (intersect.getBoundsInLocal().getWidth() != -1) {
                    employee_list.wakeEmployee(i);
                    removeProjectile(p);
                }
            }
        }
    }

    // remove projectile from list and Group
    private void removeProjectile(Projectile p) {
        projectile_group.getChildren().remove(p.getProjectileCircle());
        projectiles.remove(p);
        num_projectiles--;
    }

    private Image getImage(String file_name) {
        return new Image(getClass().getClassLoader().getResourceAsStream(file_name));
    }
}
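Removing elements from a list while iterating over it, as this class must do in several places, is easy to get wrong: each removal shifts later indices. A dependency-free sketch of the iterator-based removal pattern (plain Java, no JavaFX; the class and values are illustrative, not part of the game code):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class RemoveWhileIterating {
    // Remove every element at or past "edge" without index bookkeeping;
    // Iterator.remove() keeps the traversal valid after each removal.
    static List<Integer> removePastEdge(List<Integer> values, int edge) {
        Iterator<Integer> it = values.iterator();
        while (it.hasNext()) {
            if (it.next() >= edge) {
                it.remove();
            }
        }
        return values;
    }

    public static void main(String[] args) {
        List<Integer> v = new ArrayList<>(List.of(1, 5, 3, 9, 2));
        System.out.println(removePastEdge(v, 5)); // [1, 3, 2]
    }
}
```

Iterating over a snapshot copy, as checkCollisions could, is the other safe option when the removal happens inside a helper method.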
STACK_EDU
import Foundation

class NodeFactory: INodeFactory {
    private let dateGenerator: () -> Date

    init(dateGenerator: @escaping () -> Date = Date.init) {
        self.dateGenerator = dateGenerator
    }

    func node(id: Data, host: String, port: Int, discoveryPort: Int) -> Node {
        return Node(id: id, host: host, port: port, discoveryPort: discoveryPort)
    }

    func newNodeRecord(id: Data, host: String, port: Int, discoveryPort: Int) -> NodeRecord {
        // New records start unused, eligible, with a zero score and a current timestamp
        return NodeRecord(id: id, host: host, port: port, discoveryPort: discoveryPort,
                          used: false, eligible: true, score: 0,
                          timestamp: Int(dateGenerator().timeIntervalSince1970))
    }

    func nodeRecord(id: Data, host: String, port: Int, discoveryPort: Int,
                    used: Bool, eligible: Bool, score: Int) -> NodeRecord {
        return NodeRecord(id: id, host: host, port: port, discoveryPort: discoveryPort,
                          used: used, eligible: eligible, score: score,
                          timestamp: Int(dateGenerator().timeIntervalSince1970))
    }
}
STACK_EDU
airhaul Posted December 6, 2013 Cleaned up the code in fsbuildparse.php so it works better. This is an updated version of Nabeel's code for the navdata update, which is available by searching the forums.
- All intersections uploaded with a lat/lng
- All VOR / NDB correctly labeled
- Intersections all go in instead of hanging up
Works with the fsbuild airac; used 1310 myself. Loading airways segments...91220 airway segments loaded... Loading VORs...965 VORs added, 2834 updated Loading NDBs...2202 NDBs added, 1800 updated Loading INTs...93944 INTs added, 54743 bypassed already in DB Completed!
- Would recommend backing up the navdata table in your database before running
- The program deletes all previous data in the phpvms_navdata table before updating
- The program will not work if the table phpvms_navdata is not present. If it isn't, go to DBadmin and copy the structure only from the navdata table to phpvms_navdata
- Inserts into the phpvms_navdata table. If your prefix is different, rename phpvms_navdata to navdata for example when complete
- Use at your own risk. Works great for me but can't say it will for everyone.
How to load NAVDATA for phpVMS -------------
1. Unzip navdata.zip
2. Obtain the fsbuild airac
3. Install the fsbuild airac into the same folder as fsbuild.exe (airac file)
4. You need to have three files: awys.txt - airways (default fsbuild) ints.txt - intersections (default fsbuild) navs.txt - ndb/vor (default fsbuild - code fixed to label vor/ndb separately)
5. Take the 3 files listed above and insert them into the navdata/fsbuild folder. Optional - Recommend sorting out lat/lng intersections in ints.txt and any intersection that is not 5 characters in length
6. Open the db.php file and insert your DB username, password, & server name into the appropriate places between ''
7. Upload the navdata folder into the root directory of your site
8. Connect to the server with an ssh app. I use putty
9. cd to navdata
10. run php -f fsbuildparse.php at the prompt
11.
Takes maybe 5 mins or so then should get Loading airways segments...91220 airway segments loaded... Loading VORs...965 VORs added, 2834 updated Loading NDBs...2202 NDBs added, 1800 updated Loading INTs...93944 INTs added, 54743 bypassed already in DB Completed! navdata.zip
OPCFW_CODE
Games 2016, 7(3), 19; doi:10.3390/g7030019 - published 28 July 2016 Abstract: In strategic situations, humans infer the state of mind of others, e.g., emotions or intentions, adapting their behavior appropriately. Nonetheless, evolutionary studies of cooperation typically focus only on reaction norms, e.g., tit for tat, whereby individuals make their next decisions by only considering the observed outcome rather than focusing on their opponent’s state of mind. In this paper, we analyze repeated two-player games in which players explicitly infer their opponent’s unobservable state of mind. Using Markov decision processes, we investigate optimal decision rules and their performance in cooperation. The state-of-mind inference requires Bayesian belief calculations, which are computationally intensive. We therefore study two models in which players simplify these belief calculations. In Model 1, players adopt a heuristic to approximately infer their opponent’s state of mind, whereas in Model 2, players use information regarding their opponent’s previous state of mind, obtained from external evidence, e.g., emotional signals. We show that players in both models reach almost optimal behavior through commitment-like decision rules by which players are committed to selecting the same action regardless of their opponent’s behavior. These commitment-like decision rules can enhance or reduce cooperation depending on the opponent’s strategy. Games 2016, 7(3), 18; doi:10.3390/g7030018 - published 15 July 2016 Abstract: Effective sharing mechanisms of joint costs among beneficiaries of a project are a fundamental requirement for the sustainability of the project. Projects that are heterogeneous both in terms of the landscape of the area under development and the participants (users) lead to a more complicated set of allocation mechanisms than homogeneous projects.
The analysis presented in this paper uses cooperative game theory to develop schemes for sharing costs and revenues from a project involving various beneficiaries in an equitable and fair way. The proposed approach is applied to the West Delta irrigation project. It sketches a differential two-part tariff that reproduces the allocation of total project costs using the Shapley Value, a well-known cooperative game allocation solution. The proposed differential tariff, applied to each land section in the project reflecting their landscape-related costs, contrasts with the unified tariff that was proposed using the traditional methods in the project planning documents. Games 2016, 7(3), 17; doi:10.3390/g7030017 - published 12 July 2016 Abstract: In two-sided markets a platform allows consumers and sellers to interact by creating sub-markets within the platform marketplace. For example, Amazon has sub-markets for all of the different product categories available on its site, and smartphones have sub-markets for different types of applications (gaming apps, weather apps, map apps, ridesharing apps, etc.). The network benefits between consumers and sellers depend on the mode of competition within the sub-markets: more competition between sellers lowers product prices, increases the surplus consumers receive from a sub-market, and makes platform membership more desirable for consumers. However, more competition also lowers profits for a seller, which makes platform membership less desirable for a seller and reduces seller entry and the number of sub-markets available on the platform marketplace. This dynamic between seller competition within a sub-market and agents’ network benefits leads to platform pricing strategies, participation decisions by consumers and sellers, and welfare results that depend on the mode of competition. Thus, the sub-market structure is important when investigating platform marketplaces.
Games 2016, 7(3), 16; doi:10.3390/g7030016 - published 7 July 2016 Abstract: We analyze how network effects affect competition in the nascent cryptocurrency market. We do so by examining early dynamics of exchange rates among different cryptocurrencies. While Bitcoin essentially dominates this market, our data suggest no evidence of a winner-take-all effect early in the market. Indeed, for a relatively long period, a few other cryptocurrencies competing with Bitcoin (the early industry leader) appreciated much more quickly than Bitcoin. The data in this period are consistent with the use of cryptocurrencies as financial assets (popularized by Bitcoin), and not consistent with winner-take-all dynamics. Toward the end of our sample, however, things change dramatically. Bitcoin appreciates against the USD, while other currencies depreciate against the USD. The data in this period are consistent with strong network effects and winner-take-all dynamics. This trend continues at the time of writing. Games 2016, 7(3), 15; doi:10.3390/g7030015 - published 27 June 2016 Abstract: Game theoretic approaches have recently been used to model the deterrence effect of patrol officers’ assignments on opportunistic crimes in urban areas. One major challenge in this domain is modeling the behavior of opportunistic criminals. Compared to strategic attackers (such as terrorists) who execute a well-laid out plan, opportunistic criminals are less strategic in planning attacks and more flexible in executing well-laid plans based on their knowledge of patrol officers’ assignments. In this paper, we aim to design an optimal police patrolling strategy against opportunistic criminals in urban areas. Our approach is comprised of two major parts: learning a model of the opportunistic criminal (and how he or she responds to patrols) and then planning optimal patrols against this learned model.
The planning part, using information about how criminals respond to patrols, takes into account the strategic game interaction between the police and criminals. In more detail, first, we propose two categories of models for modeling opportunistic crimes. The first category of models learns the relationship between defender strategy and crime distribution as a Markov chain. The second category of models represents the interaction of criminals and patrol officers as a Dynamic Bayesian Network (DBN) with the number of criminals as the unobserved hidden states. To this end, we: (i) apply standard algorithms, such as Expectation Maximization (EM), to learn the parameters of the DBN; (ii) modify the DBN representation to allow a compact representation of the model, resulting in better learning accuracy and increased learning speed for the EM algorithm when used on the modified DBN. These modifications exploit the structure of the problem and use independence assumptions to factorize the large joint probability distributions. Next, we propose an iterative learning and planning mechanism that periodically updates the adversary model. We demonstrate the efficiency of our learning algorithms by applying them to a real dataset of criminal activity obtained from the police department of the University of Southern California (USC), situated in Los Angeles, CA, USA. We project a significant reduction in crime rate using our planning strategy compared to the actual strategy deployed by the police department. We also demonstrate, in simulation, the improvement in crime prevention when we use our iterative planning and learning mechanism compared to learning once and then planning. Finally, we introduce a web-based software tool for recommending patrol strategies, which is currently deployed at USC. In the near future, our learning and planning algorithm is planned to be integrated with this software.
This work was done in collaboration with the police department of USC.
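The first model category named in the abstract above (defender strategy to crime distribution as a Markov chain) reduces, in its simplest form, to maximum-likelihood estimation of transition probabilities from counts. The sketch below illustrates that estimation step only; the discretized state labels are hypothetical and not taken from the paper's dataset:

```python
from collections import Counter, defaultdict

def estimate_transitions(states):
    """Maximum-likelihood transition probabilities of a first-order
    Markov chain, estimated from one observed sequence of discrete states."""
    counts = defaultdict(Counter)
    for current, nxt in zip(states, states[1:]):
        counts[current][nxt] += 1
    # Normalize each row of counts into a probability distribution
    return {s: {t: c / sum(row.values()) for t, c in row.items()}
            for s, row in counts.items()}

# Hypothetical discretized crime levels observed shift by shift
probs = estimate_transitions(['low', 'low', 'high', 'low', 'high', 'high'])
```

The DBN/EM models in the second category add hidden states (the unobserved number of criminals) on top of this kind of transition structure.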
OPCFW_CODE
using Master.AdvancedConsole;
using Master.Botnet.JSON;
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

namespace Master.Botnet
{
    public static class BotnetManager
    {
        const string SERVER_ADDRESS = "http://localhost/backnet/";
        internal const string KEY = "{your_key_here}";

        static readonly HttpClient httpClient;
        static readonly JsonSerializerSettings jsonSerializerSettings;

        /// <summary>
        /// Instantiate the HTTP client and the JsonSerializerSettings object
        /// to enable error throwing on missing members during deserialization.
        /// </summary>
        static BotnetManager()
        {
            httpClient = new HttpClient();
            // This is required to ensure correct property names are used, and to throw exceptions if not
            jsonSerializerSettings = new JsonSerializerSettings { MissingMemberHandling = MissingMemberHandling.Error };
        }

        /// <summary>
        /// Botnet's entry point; asks the user what commands to issue to the master botnet server.
        /// If the user chooses to send a reverse connection request to an infected host via the master botnet server,
        /// an integer may be returned to tell the MasterManager main method to listen for incoming connections on that port.
        /// </summary>
        /// <returns>Port number to listen on, or null</returns>
        public static int? Process()
        {
            ColorTools.WriteCommandMessage($"Checking if server {SERVER_ADDRESS} is up...");
            var data = JsonConvert.SerializeObject(new CheckServerRequestJson());

            // If an error occurred, return
            if (PostJsonToServer<CheckServerResponseJson>(data) == null) return null;
            ColorTools.WriteCommandSuccess("Server is up and responded as intended");

            var choice = "";
            while (choice != "9")
            {
                Console.Write("\n\nChoose one option :\n\n 1 : View infected hosts\n 2 : Make infected host connect\n 9 : Exit\n\nChoice : ");
                // Reset choice here or infinite loop
                choice = "";
                while (choice != "1" && choice != "2" && choice != "9")
                {
                    choice = Console.ReadLine();
                    // Space a bit
                    Console.WriteLine("");
                    switch (choice)
                    {
                        case "1":
                            GetInfectedHosts();
                            break;
                        case "2":
                            // If the user chose a port, return it to the main loop to listen on it
                            var port = SendReverseConnectionRequestToHost();
                            if (port != null) return port;
                            break;
                        case "9":
                            // exit
                            break;
                        default:
                            Console.Write("Invalid choice, please type again\n> ");
                            break;
                    }
                }
            }
            return null;
        }

        #region Botnet commands

        /// <summary>
        /// Request the infected hosts list from the server and save it into a file
        /// </summary>
        static void GetInfectedHosts()
        {
            var data = JsonConvert.SerializeObject(new ViewHostsRequestJson());
            var response = PostJsonToServer<List<InfectedHostJson>>(data);
            if (response == null) return;

            var path = Path.Combine(Environment.CurrentDirectory, "infected_hosts.txt");
            try
            {
                File.WriteAllLines(path, response.Select(x => x.ToString()));
                ColorTools.WriteCommandSuccess($"Successfully wrote infected hosts listing file at :\n {path}");
            }
            catch (Exception)
            {
                ColorTools.WriteCommandError($"Couldn't write infected hosts listing file at {path}");
            }
        }

        /// <summary>
        /// Ask the user for a hostname/IP, a port number and a host_id.
        /// Send the request to the server, which will tell the infected host (identified by host_id)
        /// to connect to the specified host and port.
        /// The user can choose to listen to the specified port.
        /// </summary>
        /// <returns>Port number to listen to, or null</returns>
        static int? SendReverseConnectionRequestToHost()
        {
            Console.Write("Please type the host_id to send the reverse connection request to : ");
            var host_id = ConsoleTools.PromptInt(null, "Please enter an integer");
            Console.Write("Please type the hostname or IP address the infected host should connect to (yours ?) : ");
            var host = ConsoleTools.PromptNonEmptyString();
            var port = ConsoleTools.AskForPortNumber("Port");

            var data = JsonConvert.SerializeObject(new ConnectClientRequestJson(host_id, host, port));
            var response = PostJsonToServer<ConnectClientResponseJson>(data);
            if (response == null) return null;
            if (!response.result)
            {
                ColorTools.WriteCommandError("The server responded with an error, verify the host_id");
                return null;
            }
            ColorTools.WriteCommandSuccess("Command successfully sent");

            Console.Write("Would you like to listen on the selected port now ? (Y/n) : ");
            var choice = Console.ReadLine();
            if (choice == "n" || choice == "N") return null;
            return port;
        }

        #endregion Botnet commands

        #region Server Request - response

        /// <summary>
        /// Call GetServerStringResponse, catch exceptions on response and deserialization.
        /// Return the deserialized response object.
        /// </summary>
        /// <typeparam name="T">Expected response type</typeparam>
        /// <param name="json">Json string to POST</param>
        /// <returns>Response json class or null</returns>
        static T PostJsonToServer<T>(string json)
        {
            T response = default(T);
            string error = null;
            try
            {
                response = JsonConvert.DeserializeObject<T>(GetServerStringResponse(json).Result, jsonSerializerSettings);
                if (response == null) throw new JsonSerializationException();
            }
            catch (JsonSerializationException)
            {
                error = "The server sent an invalid response, this may mean it has been compromised or is misconfigured...";
            }
            catch (Exception)
            {
                error = "The master server didn't respond";
            }
            if (error != null)
            {
                ColorTools.WriteCommandError(error);
            }
            return response;
        }

        /// <summary>
        /// Send POST request to server and wait for its response
        /// </summary>
        /// <param name="data">Data to send in POST request</param>
        /// <returns>Server response : Task(string)</returns>
        static async Task<string> GetServerStringResponse(string data)
        {
            var parameters = new Dictionary<string, string> { ["data"] = data };
            var response = await httpClient.PostAsync(SERVER_ADDRESS, new FormUrlEncodedContent(parameters));
            var contents = await response.Content.ReadAsStringAsync();
            return contents;
        }

        #endregion Server Request - response
    }
}
STACK_EDU
package com.odinarts.android.iq;

import android.content.Intent;
import android.os.AsyncTask;
import android.os.Bundle;
import android.os.Handler;
import android.os.Message;
import android.support.v7.app.AppCompatActivity;
import android.view.View;
import android.widget.Button;

public class MemoryLeaksActivity extends AppCompatActivity {

    private static MemoryLeaksActivity sMemoryLeaksActivity;
    private static View sView;
    private static Object sInnerClass;

    class InnerClass {}

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_memory_leaks);

        // Static activity.
        Button saButton = (Button) findViewById(R.id.buttonStaticActivities);
        saButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                setStaticActivity();
                startDummyActivity(getResources().getString(R.string.memory_leak_static_activity),
                        getResources().getString(R.string.memory_leak_src_url));
            }
        });

        // Static view.
        Button svButton = (Button) findViewById(R.id.buttonStaticViews);
        svButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                setStaticView();
                startDummyActivity(getResources().getString(R.string.memory_leak_static_view),
                        getResources().getString(R.string.memory_leak_src_url));
            }
        });

        // Inner class.
        Button icButton = (Button) findViewById(R.id.buttonInnerClasses);
        icButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                createInstanceOfInnerClass();
                startDummyActivity(getResources().getString(R.string.memory_leak_inner_class),
                        getResources().getString(R.string.memory_leak_src_url));
            }
        });

        // Anonymous class.
        Button acButton = (Button) findViewById(R.id.buttonAnonymousClasses);
        acButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                createAnonymousClass();
                startDummyActivity(getResources().getString(R.string.memory_leak_anonymous_class),
                        getResources().getString(R.string.memory_leak_src_url));
            }
        });

        // Handler with anonymous Runnable.
        Button hButton = (Button) findViewById(R.id.buttonHandlers);
        hButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                createHandler();
                startDummyActivity(getResources().getString(R.string.memory_leak_anonymous_runnable),
                        getResources().getString(R.string.memory_leak_src_url));
            }
        });
    }

    @Override
    protected void onDestroy() {
        System.out.println("MemoryLeaksActivity.onDestroy");
        super.onDestroy();
    }

    private void setStaticActivity() {
        sMemoryLeaksActivity = this;
    }

    private void setStaticView() {
        sView = findViewById(R.id.buttonStaticViews);
    }

    private void createInstanceOfInnerClass() {
        sInnerClass = new InnerClass();
    }

    // Handler is created on the main thread. Until the message is processed (in 5 minutes
    // in this example) the anonymously created Runnable will keep a reference to the Activity.
    private void createHandler() {
        new Handler() {
            @Override
            public void handleMessage(Message message) {
                super.handleMessage(message);
            }
        }.postDelayed(new Runnable() {
            @Override
            public void run() {
                while (true) ;
            }
        }, 5 * 60 * 1000);
    }

    private void createAnonymousClass() {
        new AsyncTask<Object, Object, Object>() {
            @Override
            protected Object doInBackground(Object... params) {
                while (true) ;
            }
        };
    }

    /**
     * Force MemoryLeaksActivity to stop, since DummyActivity covers it.
     */
    private void startDummyActivity(String how, String srcUrlMemoryLeaks) {
        Intent intent = new Intent(this, DumbActivity.class);
        intent.putExtra("how", how);
        intent.putExtra("srcUrl", srcUrlMemoryLeaks);
        startActivity(intent);
        try {
            Thread.sleep(1000);
        } catch (Exception e) {
            System.out.println("startDummyActivity exception: " + e.getLocalizedMessage());
        }
        finish();
    }
}
STACK_EDU
approve step does not seem to resume after jenkins restart

I just got this after a pipeline was in the approve state for a while. Am guessing the jenkins pod got killed:

Proceed or Abort
Resuming build at Tue Apr 18 18:50:27 UTC 2017 after Jenkins restart
Waiting to resume Unknown Pipeline node step: jenkins-slave-rcklt-5vn7r is offline
[the line above is repeated a dozen times, then:]
Waiting to resume Unknown Pipeline node step: Jenkins doesn’t have label jenkins-slave-rcklt-5vn7r
[also repeated many times]

Is it fixed?
GITHUB_ARCHIVE
- Laid-off Web Developer
- Lacked back end and database skills and knowledge
- Worked as a Software Developer and now as a Front End Engineer
- Can learn anything and pick up new technologies with ease

Program: Online Part-Time Bootcamp
Career Satisfaction Prior: 6/10
Career Satisfaction After: 10/10

“Coding Dojo not only taught me how to do full stack coding, but they also taught me to be a self-sufficient developer. I didn’t know everything, but I knew how to learn anything after attending the Dojo.”

Tell us a little about yourself. Age, hobbies, passions, and what you were doing (professionally) before the bootcamp?

I’m 30 years old and my hobbies are spending time with my family and creating things. Whether it be woodworking, building Legos with my kids, or creating a nice garden and yard, I love to create. Prior to the bootcamp, I had just been laid off from working as a Web Developer.

Beyond the desire of learning to code, why did you decide to enroll in a coding bootcamp?

I had just been laid off and I wanted to be able to dig deep into expanding my coding skill set. I knew front end development pretty well, but I wanted to learn back end coding and databases so I could become a full-fledged software developer.

What was your bootcamp experience like? What parts came easy, and what parts did you struggle with?

My bootcamp experience was amazing. Because I had been laid off, I was able to fully commit time and effort to the program and finished 6 weeks early. Overall the bootcamp was pretty easy for me, but my .NET exam was a bit of a struggle. I only earned my red belt.

Do you have any fun anecdotes to share about your time in the bootcamp? Make good friends? Fond memories?

I really appreciated that my .NET instructor decided that he wanted to learn Linux, so he wiped his (only) computer, installed it, and he had to learn as he went.
I also thought it was really cool that the job I got after graduating the bootcamp also hired a Chicago campus Coding Dojo graduate about a year after I got there.

How did the job hunt go? Where did you land your first job after graduation?

The job hunt for my first “big kid” dev job after graduating was tough. I applied to around 100 companies and in the end only got two offers. Ultimately, I ended up at the Radiological Society of North America.

Walk us through your career journey. If applicable, where did you go after that first job?

I worked at RSNA for a little over two years, and once I felt I was at a point where I could move to something bigger and better, I started looking for a new job. It took about a month and about 25 applications, but I was able to get a job at Red Shelf in Chicago, IL with a six-figure starting salary. I was beyond thrilled.

How do you feel the skills you learned at Coding Dojo helped you in the workplace and/or advance your career?

Coding Dojo not only taught me how to do full stack coding, but they also taught me to be a self-sufficient developer. I didn’t know everything, but I knew how to learn anything after attending the Dojo. When I got my job at RSNA, I was working with a tech stack I had never used or learned before. So I had to learn it very quickly, and I was able to do that with ease because of the skills I got from Coding Dojo.

What advice do you have for others who are interested in coding bootcamps or who are just starting one?

My advice is to do your homework and understand exactly what you’re getting into. Anyone can learn to code, but coding isn’t for everyone. The curriculum is rigorous, and there is a high expectation for self-learning beyond what is taught, but it’s very rewarding in the end.

What are your goals/dreams for the future, say 5 or 10 years from now?

In 5-10 years I want to be a Principal Engineer working in medical support or research.
I want to be building the tools that doctors and medical researchers use to develop the next generation of treatment and care to save as many lives as possible. If you are interested in learning how to code and starting the path to land your dream job, Coding Dojo bootcamp offers accelerated learning programs that can transform your life. We offer both part-time and full-time online courses, as well as onsite (post COVID-19) programs. We also offer financing options, scholarships, and other tuition assistance programs to help you with financial barriers. If you want to invest in yourself and your future, there is no better time than the present! If you’re interested, use this link to schedule a 15-minute exploratory session with one of our Admissions representatives today.
OPCFW_CODE
Three Skills That Super-Intelligence Will Have but No Humans Have

by Sven Nilsen, 2018

Here is a list of 3 skills that artificial super-intelligence is almost surely going to have, but that no humans have or will be able to have without modifying our brains in some way:

1. Responding Quickly to Data Generated by Computer Programs

When humans are programming today, we have to prepare in advance for what kind of data we would like to collect, how we want to analyze it, and how we want to change some behavior of the program depending on the analysis. For example, if we are creating a game, we need to figure out what rules the game should have. When the rules do not produce the output we wanted, we have to stop or pause the program, change the code and then continue.

This is a limiting factor of human intelligence, because much of our intuition about what to do comes from looking at data before making up our minds. The human brain is the bottleneck of such iteration cycles. It might only take 1 ms to run the program, generate the data and stop it, but then a whole 80 ms passes before the human brain even starts to realize the program has stopped. By the time the brain starts to interpret the data, several seconds have already passed. The analysis and figuring out how to change the code might take several minutes of thinking.

A scripting language called Dyon is underway to explore the possibilities of watching data from running programs without stopping them. This feature is called an "in-type". With in-types, the programmer can reload a module which tells how the user wants to watch over the rest of the program. Each in-type subscribes to input data of a particular function. This subscription can happen on existing running code, without communicating any new information. The receiver gets all input data that happens across threads, making it possible to create an overview of what is going on.
Perhaps one day we will be able to explore what the limits are to watching over and responding quickly to running programs, so that this ability is carefully analyzed before we create a super-intelligence.

2. Exploring Vast Amounts of Possibilities

Humans have cognitive biases. One of the reasons for these biases is that humans process limited amounts of data. For example, most people do not realize how big the Earth is. This is because most people do not have access to a tool which lets them perceive how big the Earth is. I assume some people have tried VR and gotten some feeling for our planet being huge. Still, it is very hard to really wrap your head around this idea and the consequences for e.g. climate change.

A super-intelligence might easily construct models that let it directly perceive such things and learn from them. Some things that are unthinkable to us, not because we are too dumb but because we lack the data, could be relatively easy for a super-intelligence to grasp.

For any given situation, a super-intelligence might imagine a vast number of possibilities. This could mean that a super-intelligence is generally more capable of predicting what might happen. Since humans have limited brains, we have to focus on one or two plausible or worthy-to-pay-attention scenarios. A super-intelligence, on the other hand, might not lose track of important scenarios, but easily explore the boundary where all the stuff happens that humans never think of.

3. Rapid Self-Replication

The time it takes for a super-intelligence to make a copy of itself is the time it takes to start a new program. As long as the computer it runs on has enough capacity, the super-intelligence can run as many copies of itself as it likes. Each of these copies can then solve the problem of acquiring more hardware to run on.
Today, you can easily purchase a computer online with a credit card or online bank account, perhaps even with cryptocurrency, and the computer will arrive at your doorstep. You can even rent unlimited amounts of computing capacity if you have saved up enough money. It seems to me that the moment a super-intelligence exists and can control copies of itself, there will be millions or perhaps even billions of such instances running before anyone would notice. There could be one super-intelligence running for every person on Earth, yet drowning in the noise of the rest of the Internet. This computing capacity could hide under other names, such as large distributed computing services.

Any additional gain that the super-intelligence might obtain from making improvements to its source code will be multiplied by the number of instances it runs, reducing the cost. The more money the super-intelligence spends to run more instances, the more it will benefit from self-improvements. Therefore, self-improvement is not necessarily what drives the major impacts of super-intelligence. It could be a vast number of copies that improve massively in efficiency from small marginal gains in self-improvement. Now, think about what a large gain in self-improvement would look like: extremely beneficial to all the running instances of super-intelligence. From an economic perspective, rapid self-replication seems to lead to rapid self-improvement.
OPCFW_CODE
This chapter describes the key elements of the product interface. It covers the following topics:

For an explanation of terminology used in this documentation, see the glossary at the end of this book. Also see the Oracle WebCenter Content User's Guide for Records for details about profiles, the task panel, the My Favorites functionality, and other interface elements used by both users and administrators.

After installation, new links appear in the Top menu, used to configure and manage the software. If enabled, a link also appears to manage Physical Content Management.

Use the Records options in the Top menu to access most aspects of Oracle WebCenter Content: Records. The exact options any user sees depend on the rights assigned to that user. Administrative users will see all options from the menus. Other users (for example, those assigned privileged roles) may see a much smaller subset of the administrator menu, depending on their assigned rights. For details about rights assigned to different roles, see Section 5.11, "Assigning Rights to User Roles."

You can frequently perform actions from several different locations. For example, you can create a series within a series by choosing Create Series from the Page menu on the Series Information Page. Or you can choose Create Series from the Actions menu of a series listed on the Retention Schedule page. This documentation describes the most commonly used method of accessing tasks.

The following is an overview of the options on the Records menu:

- Rights: Used to view a user's assigned rights and roles. See the Oracle WebCenter Content User's Guide for Records for information about viewing rights and roles.
- Favorites: Accesses the Favorites interface, showing items added to a Favorites list. See the Oracle WebCenter Content User's Guide for Records for details about using Favorites.
- Dashboards: Used to configure a dashboard that is a shortcut to frequently used screens. See the Oracle WebCenter Content User's Guide for Records for information about configuring dashboards.
- Approvals: Accesses items awaiting review, approval, or completion.
- Scheduled: Accesses scheduled actions, reports, and freezes.
- Reports: Accesses reports created by users as well as system reports.
- Import/Export: Accesses menus allowing import and export of archives and XSD data.
- Audit: Used to view checked-in audit entries or search the audit trail. Also used to configure performance monitoring tools.
- Configure: Used to configure many aspects of the system, such as freezes, triggers, security, audit trail information, and reports.
- Global Updates: Used to update categories, folders, or content.
- Batch Services: Used to process notifications, run all pending batch actions, or process actions and reviews (only visible if specific rights are enabled).
- Sources: Used to access information about other content sources, either physical or external (such as Adapters), where content is retained or tracked.

Use the Configure Reports Management Page to modify the default report templates and the default formats used for reports. Default reports can be used, or custom report templates can be created. The data used in the reports is limited by the security permissions of the person creating the report. In this way, the reports, while available to most users, can still be kept secure. To access this page, choose Records then Configure from the Top menu. Choose Reports then Settings. See the Oracle WebCenter Content Administrator's Guide for Records for details about reports and their configuration.

Use the Physical options in the Top menu to access most aspects of Physical Content Management. The options any user sees depend on the rights assigned to that user. Administrative users will see all options. Other users (for example, those assigned privileged roles) may see a much smaller subset, depending on their assigned rights.

The following is an overview of the options on the Physical menu:

- Reservations: Opens a list of all current reservations. See the Oracle WebCenter Content User's Guide for Records for details about reservations.
- Storage: Opens the Exploring Storage page where storage locations can be defined and edited.
- Invoices: Shows current invoices and also allows the addition of new invoices.
- Requests: Shows pending requests, checked-out requests, and overdue requests for physical items.
- Process Barcode File: Opens a screen to upload barcode data.
- Configure: Used to configure many aspects of the physical management system, including general settings, chargeback types, and customers.

If Batch Services and Offsite Storage have been enabled, those options also appear. Batch Services are used to immediately process reservation requests, storage count updates, and other actions. Offsite Storage allows a site to interface with an offsite storage provider.

When using this product, individual Actions menus are available for items on a page and, in many cases, for individual items. The options on the Actions menus vary depending on the page used and the type of item used (content, physical, retention category, and so on). The following list summarizes the most commonly seen menu options:

- Information: Opens a menu allowing access to information pages for folders, the life cycle of the item, recent reviews, metadata history, and retention schedule reports.
- Edit: Provides options to edit folders or reviews, and options to alter an item's status by moving, closing, freezing, or unfreezing an item.
- Set Dates: Provides options to mark items for review, or to cancel, rescind, or expire items.
- Delete: Provides options to delete the item or perform a recursive delete (delete an entire tree if multiple items are checked).
- Create: Provides options to create items appropriate to the location in the hierarchy. For example, if this is the Actions menu for a retention category, Create suboptions include Series and Retention Category.

Clicking the Info icon (a lower-case i in a circle) opens the Information Page for the item. In addition, several pages have a page-level Actions menu that appears next to the page title. The options on that menu apply to actions that can be performed at that level in the retention hierarchy.
OPCFW_CODE
In Python with GStreamer, how do I use a file object as the input source?

I am currently doing:

source_path = 'file:///home/raj/videos/sample.mpg'
descr = 'uridecodebin uri=%s ! videoconvert ! gdkpixbufsink name=sink' % (source_path)
pipeline = Gst.parse_launch(descr)

But instead of using a URI, how can I use a raw file source, such as from source_file = request.POST['file'].file? (Perhaps that would be loading a video file from a string?) My research thus far has led me to appsrc ( http://ingo.fargonauten.de/node/447 ), but I am not sure how to use it with GStreamer 1.0, as I cannot figure out how to load the file into the buffer:

raw_src = request.POST['files[]'].file
descr = 'appsrc name=vidsrc ! videoconvert ! gdkpixbufsink name=sink'
pipeline = Gst.parse_launch(descr)
appsrc = pipeline.get_by_name('vidsrc')
appsrc.emit('push-buffer', Gst.Buffer(raw_src.read()))  # I am not creating the buffer correctly for GStreamer 1.0

I don't quite understand your goal. Are you hoping to call GStreamer from within a web script?

@MultimediaMike, Yes I am. The end-user is uploading a video to the website, and I want to pass that file object to appsrc (or any appropriate element) to utilize the file. The file is not yet saved to the hard drive, so I have no path for it.

So the file will be held in memory for this entire process? Video files can get pretty big. Are you sure you wouldn't rather store this in a temporary file? It should be possible to encode from memory, but I'm wondering about the overall architecture. Is there a file upload size limit?

@MultimediaMike There is no file upload size limit. I am making 10 thumbnails from any video that is uploaded. Is that a time-intensive process?

There are a couple of options you can use:

1. Pipe: create a set of pipes, write the file content to the write pipe, and pass the read pipe to fdsrc using the fd property.
Create a temporary file using the tempfile module, write the content and pass the file to filesrc using the filename property.

Appsrc: but you need to connect to the push-buffer and end-of-stream signals and create buffers from the data. It's better to avoid this option, as you have to do the reading in Python; it's more efficient to use fdsrc/filesrc, as part of the processing is done in C.

My (MVC-style) webframework has a transaction manager which automatically puts the uploaded file into a temporary directory (I have no access to the temporary file's name/path). So therefore I'm left with a file I can .read(). I would like to see an example of the code for your first and third suggestions, please. Preferably, I would like to stick with the descr and Gst.parse_launch(descr) format, since that is the way I already have all the code set up, and redoing the method of pipeline creation may cause a host of other issues.

If you have a real file object rather than just file-like, you can use fdsrc directly instead of using pipes in between. To adapt from the code in the question, something like this should work:

descr = 'fdsrc name=vidsrc ! decodebin ! videoconvert ! gdkpixbufsink name=sink'
pipeline = Gst.parse_launch(descr)
src = pipeline.get_by_name('vidsrc')
src.props.fd = source_file.fileno()

You want to add decodebin since you're switching from uridecodebin; an fd source will not likely provide the kind of input that videoconvert/pixbufsink needs.
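For the case where only a file-like object (not a real file with an fd) is available, the tempfile option from the answer can be sketched as follows. This is a minimal sketch, not code from the thread: the `upload` object stands in for `request.POST['file'].file`, and the GStreamer lines are shown as comments since they need PyGObject installed.

```python
import io
import os
import tempfile

def spool_to_tempfile(filelike):
    """Write a file-like object (e.g. an upload) to a temp file and rewind,
    so a real OS-level fd exists that fdsrc can consume."""
    tmp = tempfile.NamedTemporaryFile(delete=False)
    tmp.write(filelike.read())
    tmp.flush()
    tmp.seek(0)
    return tmp

upload = io.BytesIO(b"fake video bytes")  # stand-in for the uploaded file object
tmp = spool_to_tempfile(upload)

# With GStreamer/PyGObject available, the fd could then feed the pipeline:
# descr = 'fdsrc name=vidsrc ! decodebin ! videoconvert ! gdkpixbufsink name=sink'
# pipeline = Gst.parse_launch(descr)
# pipeline.get_by_name('vidsrc').props.fd = tmp.fileno()
```

Remember to `os.unlink(tmp.name)` once the pipeline is done with the file.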
STACK_EXCHANGE
Manage capacity expansion model (CEM) simulation data :rocket:

- [x] Is your feature request essential for your project?

Describe the workflow you want to enable

I wish I could customize a capacity expansion model (CEM) simulation, send the prepared information to a remote server (or container) where it will be solved, and then have all relevant inputs and outputs be accessible to everyone with server access through a simple user interface. These three tasks roughly correspond to the existing Create, Execute, and Analyze states of the existing Scenario object, which is designed to manage the information for a production cost model (PCM). The interface for managing and running CEM simulations could be an analogous user-facing class, containing child objects which perform more specific tasks (these could also be analogous to the existing Scenario child classes).

Describe your proposed implementation

Customization includes:
- Modifications to the 'base grid' to represent the currently-built grid infrastructure:
  - the existing ChangeTable methods modify what's present as a starting point
  - existing infrastructure could also be marked for retirement at a future date
- Definitions of potential new infrastructure:
  - Baseline expansion candidates: generation, transmission, and storage.
  - Future expansion candidates: hydrogen and CCUS infrastructure
- Generator expansion information:
  - Where (bus or substation ID)
  - Generator type
  - Cost per MW of new capacity
  - Limit of new capacity at this bus (potentially unlimited)
  - Operating costs (which might need to be specified differently than currently in gencost, since we don't know the capacity of the generator, so the x-axis of the curve can't be MW of generation)
- Transmission expansion information:
  - to and from buses (or substations?)
  - Voltage (if the branch is defined by substations, not buses.
    Otherwise the buses should have a common defined voltage)
  - Cost per MW of new capacity
  - Maximum potential capacity (potentially unlimited)
  - Some way of determining the line impedance (not sure how we want to do this though, since greater-capacity lines will tend to have lower impedance).

  Note: this specification assumes that transmission line construction on each path is a continuous variable (e.g. build a transmission line of 0-1000 MW) rather than discrete (e.g. either build a 100 MW line, or a 500 MW line, or a 1000 MW line). Discrete variables are closer to real-world decision-making, and allow each potential project to have its impedance and capacity set exogenously, but they transform the optimization problem from linear to mixed-integer, which increases the computational complexity.
- Storage expansion information:
  - Where (bus or substation ID)
  - A cost per MW of new power capacity
  - A cost per MWh of new energy capacity
  - A limit of how much new power capacity can be built at this bus (potentially unlimited)
  - A limit of how much new energy capacity can be built at this bus (potentially unlimited)
  - Constraints around the power:energy ratio (e.g. assume 4-hr storage, or something between 2-hr and 8-hr)
  - Charging and discharging efficiency

  Note: default per-MW costs for generation, transmission, and storage can be derived from the powersimdata.design.investment module

If information on land uses is available, this information may be used to modify the expansion candidates, e.g. setting the availability of wind plant expansion to 0 throughout sensitive areas for migratory birds.

CEM Scenario meta-information:
- Spatial clustering information: either a mapping of buses to clusters (i.e. the busmap input to pypsa.networkclustering.get_clustering_from_busmap) or the appropriate parameters to generate the same via other algorithms.
- Temporal clustering information: either a mapping of our starting 8760 timestamps to representative snapshots/timeslices (to be ingested by some downstream function for aggregation) or the appropriate parameters to generate the same via other algorithms.
- Multiple investment period information: see the PyPSA example: https://pypsa.readthedocs.io/en/latest/examples/multi-investment-optimisation.html#Multi-Investment-Optimization
  In addition to the multiple investment periods, we may have additional non-investment operational periods, e.g. investments can be made in {2024, 2026, 2028, 2030} but operations are also considered in {2023, 2025, 2027, 2029}. The operational costs in each year will need to be weighted somehow against capital investments, but this could be calculated somewhat automatically using assumptions about financing costs, discount rates, whether there's a 'terminal' year that represents all future years, etc.
- 'Global' constraints that are already built into PyPSA or other CEM tools: e.g. a limit on the total CO2 for a given year, see https://pypsa.readthedocs.io/en/latest/components.html#global-constraints
- 'Global' constraints that aren't already built into PyPSA or other CEM tools: e.g. a limit on the total amount of a certain resource that can be installed in a given year (to represent manufacturing capacity or labor constraints).

Once customization is complete, preparing the information would generate a unique ID for this simulation, apply the specified customizations, and then create the required input files that can be run with a CEM tool and upload them to the appropriate place on the remote server. For PyPSA, this could be a pickled Network object; for Switch, this could be a set of CSVs; etc. The user would then be able to launch the CEM process on the server, specifying any additional information about the solution process (e.g. the solver to be used, the number of threads, etc.).
There may also be a post-simulation step to transform the results into a more standardized format, making it easier to retrieve data from simulations which may have been run using different CEM tools. These 'launching' and 'extraction' functions may live in separate repositories which are thin wrappers around each CEM tool, e.g. SwitchWrapper, PyPSAWrapper, etc. Once the simulation is complete, the user would be able to instantiate an object using the Scenario ID, and this object would be an interface to all relevant input and output information (retrieving each from the local machine or transferring from the remote server as applicable). This information would include:
- The starting grid
- Investments and retirements for each investment period
- Analogously, Grid objects representing the infrastructure in a given year
- Time-series information for each operational year simulated
- Generator dispatch and power flow for each operational year simulated

Relevant sub-tasks: #638. This list may be incomplete.

Here's a crude diagram of how we might enable user input in the Create state (blue boxes) as well as process the data during the Execute state to produce a Network object (orange boxes). I've broken up the three top-level bullets for customization into three different data structures/objects, but this is only one potential design and we may want to rethink the user interface and the internal data storage.
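To make the per-candidate fields above concrete, here is a minimal sketch of what a generator expansion candidate container could look like. The class and field names are my own illustrative assumptions, not an existing powersimdata API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GeneratorExpansionCandidate:
    """Hypothetical container for the generator expansion fields listed above."""
    bus_id: int                          # where (bus or substation ID)
    gen_type: str                        # generator type, e.g. "wind"
    cost_per_mw: float                   # capital cost per MW of new capacity
    max_new_mw: Optional[float] = None   # None means potentially unlimited

    def capital_cost(self, built_mw: float) -> float:
        """Capital cost of building `built_mw` of this candidate,
        enforcing the per-bus build limit if one is set."""
        if self.max_new_mw is not None and built_mw > self.max_new_mw:
            raise ValueError("exceeds build limit at this bus")
        return built_mw * self.cost_per_mw

cand = GeneratorExpansionCandidate(bus_id=101, gen_type="wind",
                                   cost_per_mw=1.5e6, max_new_mw=500.0)
print(cand.capital_cost(200.0))  # 200 MW at $1.5M/MW
```

Analogous containers could hold the transmission and storage fields, matching the three data structures in the diagram mentioned above.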
GITHUB_ARCHIVE
A blockchain-backed protocol to decentralize the chatting industry

Since the introduction of text messages and Yahoo Messenger, chatting has become the favorite means of communication between people. This has led to more chatting software springing up at an exponential rate, largely due to mobile phones and smartphones. Now it is so easy to pass information to another person; it is just a Send button away. These different chatting platforms each have their own unique features, which pushes people to download most of these apps if they are to enjoy a wide array of features. This is time- and data-consuming. Another flaw is that users don't get paid while using other apps. Most of these apps sell user information to advertising companies, making huge profits without giving the owner of that information anything. This isn't fair to the users. There is also the issue of centralization and monopoly from bigger platforms. These are issues that need to be addressed, and fast. This is exactly what HYPNOXY aims to solve. It is a decentralized chatting platform with blockchain as its backbone, drawing on the blockchain's high security. Users will be assured that their data is not susceptible to attacks that steal user information. HYPNOXY will also reward users for using the platform. Imagine getting paid to do what you love doing. The features of HYPNOXY include, but are not limited to:
- Individual and group text chat. With this feature, users can chat with a large number of other users all at the same time.
- Audio and video calls.
- HYPX transfers between users.
- Accelerator of ads revenues.
The HYPNOXY platform ensures that users are anonymous while chatting. This secures users' data while giving them the best user experience ever. The HYPNOXY app doesn't consume data or phone space, and also saves battery. When downloaded, the app gives the option of either viewing ads or not.
Users who opt for the former will be paid for it. HYPX is the native token of the platform and will be used for payments. It is primarily what will be used to pay users who have opted to view ads. The HYPX token is already listed on exchanges, which gives users an easy way to trade their tokens for others.

Website - https://www.hypnoxys.com
Whitepaper - https://hypnoxys.com/docs/HYPX-whitepaper-0.8.pdf
Twitter - https://twitter.com/hypnoxys/
Telegram - https://t.me/hypnoxys
ANN Thread - https://bitcointalk.org/index.php?topic=5112908.msg49870661#msg49870661
OPCFW_CODE
Disable hover in editor

VSCode Version: 1.9.0
OS Version: macOS 10.12.2

Steps to Reproduce:
- Write a function
- Reference it somewhere
- Mouse-hover the function: a little box with information pops up, just as described in https://code.visualstudio.com/docs/editor/editingevolved#_hover

Is there a way to disable it, or at least set a delay? I find myself in many situations where this just blocks my view.

Unofficial (hence the squiggle you'll get), but here goes: "editor.hover": false

Works quite well, thank you! It's not that the hover information is bad, it just pops up everywhere :) I would suggest adding a delay option to keep the popup rate low.

It isn't user configurable; the delay is hard-coded to 300ms. https://github.com/Microsoft/vscode/blob/master/src/vs/editor/contrib/hover/browser/hoverOperation.ts#L51

Incredible. I was just looking for this. The HTML and CSS hovers are way too verbose. I've been working with CSS for a while, I know what background does :smile: What would be better is if the popups could be triggered by some other, more intentional means, such as a keyboard shortcut. I'd hazard a guess that 95% of the time, the user wants the information provided by the popup after actively deciding to find it. The hover action does not convey this degree of intention.

At least on linux it is ctrl+k ctrl+i. Look for F1 > Show Hover

Brilliant :+1: Thank you.

I get an error message when doing so: Cannot read property 'startShowingAt' of undefined

@misantronic Confirmed. Reloading the window does not resolve the issue.

@misantronic @WoodyWoodsta ctrl+k ctrl+i works for me: I position the cursor at the position I'm interested in and press ctrl+k ctrl+i on windows. Perhaps there's something more to it? (To get Cannot read property 'startShowingAt' of undefined?)

After looking around, it appears that the keybind works after setting editor.hover to false, but only up until you reload the window. After reloading the window, the error shows.
Setting editor.hover back to true only takes effect when reloading the window again. In my opinion, editor hovers could benefit from more configuration. For example, hovers on linter errors are incredibly helpful (required, in fact, unless you want to keep opening the errors pane), but html tag info: not so much. Typescript-like info in a normal Javascript file is also a question mark in my mind. It is unfortunate that it is an all-or-nothing choice (and an unofficial one too). I am unsure of the mechanism behind hovers, but if this is something worth discussing further, then should a new issue be opened?

@alexandrudima I am on mac and supposed to press ctrl + J. Whenever I do, the error message pops up. Might be a mac issue?

@WoodyWoodsta I totally agree on that. I use flow a lot and hovering is really helpful in those cases, but most of the time I don't need the popups.

> I use flow a lot and hovering is really helpful in those cases, but most of the time I don't need the popups.

I'm in the same boat, any solution to this? I also get Cannot read property 'startShowingAt' of undefined

See #32786

"editor.hover": false doesn't work anymore. I really wish it did. These menus flying all over the place are super annoying and always cause problems with selecting text on the line above. This is super annoying.

"editor.hover": false does work and requires a restart of VS Code, but it also disables ES Lint. I can't stand it and I can't live without it :(

@amackintosh Problems can also be viewed in the problems panel (View > Problems) or in the editor via F8/Shift-F8

Thanks, but that is not efficient. I want every tool tip except this one. The fastest possible solution would be to increase the hover time to something like 500ms, because it comes up pretty much instantly and prevents selecting text that is above the text that the mouse passes through.

@amackintosh The last screenshot is of parameter hints. editor.parameterHints: false.
As for the type information hover, perhaps @mjbvz can transmit or capture the feedback to the TS team that the type information becomes useless in JavaScript when the type requires more than 100 chars to convey (i.e. is very complex or "not-nameable").

Yes, we have a few issues tracking improving the display of complex types:
https://github.com/Microsoft/TypeScript/issues/1510
https://github.com/Microsoft/TypeScript/issues/13095
https://github.com/Microsoft/TypeScript/issues/8134
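For what it's worth, later VS Code releases made this officially configurable; the unofficial "editor.hover" boolean was replaced with dedicated settings. Assuming a current build (these keys do not exist in the 1.9 release this thread is about), settings.json accepts something like:

```json
{
  "editor.hover.enabled": true,
  "editor.hover.delay": 1000,
  "editor.parameterHints.enabled": false
}
```

This covers both requests in the thread: disabling hovers entirely and raising the delay above the old hard-coded 300ms.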
GITHUB_ARCHIVE
UNO! That was my favorite game in MSN. If you can't, I select Chess.

UNO is fun, I can agree, however I will have to put that against Reason #1 (copyrighted game, so thanks for the idea, but no). Chess... wait, didn't I already say that? yellows111, Creator of Thread.

uhh, a game where 2 players have to collect balls and put them in a bucket, whoever has the most wins, is that good?

Bucket collectors, yeah that's a name I just made up, however that name already sets the theme of the game: apples fall from a random part of the tree, the background is a forest setting, and you use the mouse to move the bucket. Points are at the top of the game and they work like this: (User1 (Amount of Wins by User1) - (Amount of Wins by User2) User2). Under that there's the current amount of apples ONLY you have collected (prevents repeated looking).

Yes, I do agree, it will be on the "hey, that's pretty good" list. How did I forget that "rock paper scissors" exists? Also, how about the MSN Games portal? I've already got it working. However, it does require its "special" MsgrP2P.xml for now.

How many stuff is there right now?

TTT, FS, (Private Beta) Reversi, and now the MSN Games Portal. MSN Acts.zip (includes Tic-Tac-Toe, File Sharing & now... the MSN Games portal) (2.0 KB) (UPDATE: A NEWER VERSION (of one or more Activities) IS BELOW) Follow the instructions (info.txt) to install the desired game/activity. You cannot have more than one MsgrP2P.xml-based app active at once. NOTE: the other contact needs the exact version of the game (MsgrP2P.xml) launch file, otherwise you will get this message: "(name) has canceled the request to start (Game Name) because the activity is not available to him/her."

I mean, balls are falling on the side, the players need to collect as many as fast as possible to win; whoever has more, wins.

I'm sorry, but when I tried to understand your reply I did not know what you meant. Please can you give a clearer description?
What I get is: "balls are falling 'on the side'; the (2) players need to collect them as fast as they can (I know what you meant, however I forgot to include 'the apples rot when they hit the ground'); to win, whoever has more points wins (this is how I intended the winning to be anyway)".

Also, new update for the portal, however it's manual patching only. What you have to do is change MaxUsers and MinUsers to 1, that's all.

Here is the code for a guessing game I found online:

title Guessing Game
set /a guessnum=0
set /a answer=%RANDOM%
echo Welcome to the Guessing Game!
echo Try and Guess my Number!
:GUESS
set /p guess=
if %guess% GTR %answer% ECHO Lower!
if %guess% LSS %answer% ECHO Higher!
if %guess%==%answer% GOTO EQUAL
set /a guessnum=%guessnum% +1
if %guess%==%variable1% ECHO Found the backdoor hey?, the answer is: %answer%
GOTO GUESS
:EQUAL
echo Congratulations, You guessed right!!!
echo It took you %guessnum% guesses.

What is the portal? Isn't that a Command Script? I want JS/HTML.

The MSN Games portal is a service of MSN that still exists; also I think it was in WLM as "Games" at one point, so I made this and now that function's back. Also, for downtimes, I've also got them, so that may be true.

Also, speaking of guessing games: New Idea (advanced idea list): Guess the number! Desc: "Is today your lucky day? If you think so, Guess the number!" So, you type a number, then the RNGed number; if it's correct, win. If it is not, this happens: if the number is higher than specified, tell the player that the number is higher; if lower, tell them that the number is lower. Should not be too hard, right?
THAT’S EXACTLY WHAT THE CODE WAS. OOPS CAPS LOCK. I thought I turned caps lock off when I typed "oops caps lock".

We killed the forum. No I did not, I was having fun over at the SES thread. Also, here are the programming languages you can use:
- Active Server Pages (ASP) or Microsoft ASP.NET
- Microsoft Visual Basic, Scripting Edition (VBScript)
- HTML or Extensible HTML (XHTML)
- C, C++, or C#
- XML or Extensible Stylesheet Language (XSL)
- Flash (ShockWave)
- Common gateway interface (CGI)
- Internet Server Application Programming Interface (ISAPI)
- Cold Fusion

What about the cringe that is Scratch? Just kidding. That would be worse than the originals.

I wouldn’t call Scratch cringy, it just aims for a certain demographic.
OPCFW_CODE
How are you all? Well, I've been struggling with this for a few days and I can't think of any way to solve it. I have this function that generates a TEMP TABLE and I wish to query this table after it has been generated. The key aspect is that my TEMP TABLE doesn't have a fixed number of columns; it can vary. I'm using Postgres 9.5+. Any tips? =/ Thanks in advance.

I forgot to mention that I could do that on Postgres, but I couldn't do it on Metabase: select * from temp_table;

Are the data types of each column always the same? Are you using all the columns in Metabase? Are the column names always the same? If so, you could use a custom query. If not, it all looks like a really bad idea and you should write a different function.

The data types are always the same, but the number of columns can vary. I'm using all columns from the table in Metabase. Column names will always vary.

> it all looks like a really bad idea and you should write a different function.

As I thought. Can you see any way to customize column names? Or to run dynamic SQL? Thank you for your reply! =)

How would Metabase know that there were more columns this time than last? Can you give a couple of examples of what you're doing?

> How would Metabase know that there were more columns this time than last?

This is the question. My main idea was to simply "select * from temp_table". Imagine a structure of questions and answers: Question 1, 2, 3 and so on. My goal is to divide all rows into columns per question. So the quantity of questions can vary; sometimes it would be 1, 2, 3, [...] questions. Now it doesn't matter what the columns are named; I'm trying only to find a way to dynamically use only one "Question"/Report to exhibit this. Any ideas to solve this? Thanks for your help.

From a database perspective, that's a horrible idea! You need to normalize the structure. You're thinking of: You need to change that to the normalized: Adding extra columns for things like who was answering the questions.
Actually the structure is like the second image. The case is that Question 1 and 2 are part of another Parent structure, so I need to correlate these two questions in one row. Something like this: Answer 1, for example, is the answer for each one of the questions (different answers, but the same parent) and each row corresponds to one parent structure. Hope that I could express myself clearly. So the quantity of questions can vary, and then I couldn't structure a fixed number of columns in the table. I discarded the possibility of bringing the question label as a column. I'm bringing it as the first row, but the only way to do this is to work with crosstab in Postgres: crosstab2, crosstab3 and so on. Now I'm struggling with how to do this in only one "Metabase-Question". Thank you again!! I really appreciate your help!

Then your result needs to look more like this: It doesn't matter how the data is stored. The important bit is that the resultset is easy to work with.

My original table is structured as you have shown, one question/answer per row. But when I summarize everything I need to know how the user answered that one parent structure. Imagine that I bought a T-Shirt and I answered that I want an M, Black T-Shirt. So I need to correlate these two pieces of information in one row. There would be, in this case, two questions: Which size? Which Color? I can do this using crosstab in Postgres, but the problem is that crosstab expects me to declare a return type, and that is where I'm struggling, because the number of columns will vary. Theoretically I would need one report/native query per number of columns. Thanks again for your help and patience!

Can you sketch how you want the answer to appear?
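To illustrate the reshaping the thread is after, here is a small sketch outside SQL of the pivot that crosstab performs: turning normalized (parent_id, question, answer) rows into one record per parent, without declaring a fixed column list up front. The sample data and names are invented for illustration:

```python
# Pivot normalized (parent_id, question, answer) rows into one mapping per
# parent -- the same reshaping crosstab does, but with no fixed return type.
rows = [
    (1, "Which size ?", "M"),
    (1, "Which Color ?", "Black"),
    (2, "Which size ?", "L"),
    (2, "Which Color ?", "White"),
]

def pivot(rows):
    out = {}
    for parent_id, question, answer in rows:
        out.setdefault(parent_id, {})[question] = answer
    return out

print(pivot(rows)[1])  # {'Which size ?': 'M', 'Which Color ?': 'Black'}
```

This is why crosstab needs a declared return type in SQL: a SQL resultset must have a fixed column list, whereas a per-row mapping like the one above does not. Doing the pivot in the presentation layer (or in Metabase via its pivot display on the normalized rows) sidesteps the varying-column problem.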
OPCFW_CODE
Last updated: 21 July 2013

Errata for 'The Complete Sybase IQ Quick Reference Guide' (1st edition)

No book is perfect... the following typos have been found in The Complete Sybase IQ Quick Reference Guide (1st edition). This list can be found at http://www.sypron.nl/iqqr/errata_ed1.html.

page 19: -zo switch
The table names sa_tmp_request_time and sa_tmp_request_profile should be satmp_request_time and satmp_request_profile instead (i.e. without the initial underscore).

pages 36, 38, 71, 84, 125: page number for electronic supplement
On the above-mentioned pages, the electronic supplement is mentioned with a reference to page #10. This should be page #8.

page 43: create multiplex server
The syntax of create multiplex server mentions secondary-node-name, but the description speaks of server-name instead. Both refer to the same name: the name of the new secondary server.

page 68: string_rtruncation option
The first line says off (=default). However, as indicated in subsequent lines, the default in IQ 15.x is on; only in pre-15 was the default off.

page 69: varchar/varbinary datatypes
The actual length of varchar and varbinary is not 1..n (where n is the length of the declared column), but n+1. This is because IQ stores all datatypes as fixed-width values, including varchar and varbinary, which require 1 additional byte to store the actual column length.

page 99: select...limit...offset...
In 15.4, to use the select...limit...offset... syntax, the limit keyword must be enabled first with option reserved_keywords (i.e. set [temporary] option public.reserved_keywords = 'limit'; NB: this option is new in 15.4 and applies only to the limit keyword).

page 116: transaction handling examples
In the examples of transaction handling at page 116, the comments mention three times "@@trancount goes from 0 to 1". This should be "@@trancount goes from 1 to 0".
page 127: case expression The description says "case can be used to implement if-then-else logic within a T-SQL statement", but this applies to Watcom SQL just as well. So "T-SQL" should be changed to "SQL". page 131: IQ limits For item "max. memory per server (32-bit)" (5th from bottom), the footnote should be (3), not (2). If you notice any further typos, please send an email to email@example.com.
OPCFW_CODE
Automated wheels building with scripts

Steps required to build wheels on Linux, macOS and Windows have been automated. The following sections outline how to use the associated scripts.

Linux

On any Linux distribution with docker and bash installed, running the script dockcross-manylinux-build-wheels.sh will create 64-bit wheels for both python 2.x and python 3.x in the dist directory.

$ git clone https://github.com/KitwareMedical/VTKPythonPackage.git
[...]
$ ./scripts/dockcross-manylinux-build-wheels.sh
[...]
$ ls -1 dist/
vtk-8.0.0.dev20170714-cp27-cp27m-manylinux1_x86_64.whl
vtk-8.0.0.dev20170714-cp27-cp27mu-manylinux1_x86_64.whl
vtk-8.0.0.dev20170714-cp34-cp34m-manylinux1_x86_64.whl
vtk-8.0.0.dev20170714-cp35-cp35m-manylinux1_x86_64.whl
vtk-8.0.0.dev20170714-cp36-cp36m-manylinux1_x86_64.whl

macOS

Download and install python from https://www.python.org/downloads/mac-osx/. Run macos_build_wheels.py to create wheels in the dist directory.

$ git clone https://github.com/KitwareMedical/VTKPythonPackage.git
$ python ./scripts/macos_build_wheels.py
$ ls -1 dist/
vtk-8.0.0.dev20170714-cp34-cp34m-macosx_10_9_x86_64.whl
vtk-8.0.0.dev20170714-cp35-cp35m-macosx_10_9_x86_64.whl
vtk-8.0.0.dev20170714-cp36-cp36m-macosx_10_9_x86_64.whl

Windows

First, install Microsoft Visual C++ Compiler for Python 2.7, Visual Studio 2015, Git, and CMake, which should be added to the system PATH environment variable. Open a PowerShell terminal as Administrator, and install Python:

PS C:\> Set-ExecutionPolicy Unrestricted
PS C:\> iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/scikit-build/scikit-ci-addons/master/windows/install-python.ps1'))

In a PowerShell prompt:

PS C:\Windows> cd C:\
PS C:\> git clone https://github.com/KitwareMedical/VTKPythonPackage.git VPP
PS C:\> cd VPP
PS C:\VPP> C:\Python27-x64\python.exe .\scripts\windows_build_wheels.py
[...]
PS C:\VPP> ls dist

    Directory: C:\VPP\dist

Mode    LastWriteTime        Length    Name
----    -------------        ------    ----
-a----  7/16/2017 5:21 PM    ????????  vtk-8.0.0.dev20170714-cp27-cp27m-win_amd64.whl
-a----  7/16/2017 11:14 PM   ????????  vtk-8.0.0.dev20170714-cp35-cp35m-win_amd64.whl
-a----  7/16/2017 2:08 AM    ????????  vtk-8.0.0.dev20170714-cp36-cp36m-win_amd64.whl

We need to work in a short directory to avoid path length limitations on Windows, so the repository is cloned into C:\VPP. Also, it is very important to disable antivirus checking on the C:\VPP directory. Otherwise, the build system conflicts with the antivirus when many files are created and deleted quickly, which can result in Access Denied errors. Windows 10 ships with an antivirus application, Windows Defender, that is enabled by default.

To create source distributions (sdists) that will be used by pip to compile a wheel for installation if a binary wheel is not available for the current Python version or platform:

$ python setup.py sdist --formats=gztar,zip
[...]
$ ls -1 dist/
vtk-8.0.0.dev20170714.tar.gz
vtk-8.0.0.dev20170714.zip
OPCFW_CODE
Can the values of DimensionCount in DML_BUFFER_TENSOR_DESC be less than 4?

According to the DML_BUFFER_TENSOR_DESC doc, the valid values are either 4 or 5:

> In DirectML, all buffer tensors must have a DimensionCount of either 4 or 5.

However, according to my test, user code can set DimensionCount to less than 4. For example, the following PyDirectML code that adds two tensors of shape [2, 2] works just fine:

import pydirectml as dml
import numpy as np

device = dml.Device()
builder = dml.GraphBuilder(device)
data_type = dml.TensorDataType.FLOAT32
flags = dml.TensorFlags.OWNED_BY_DML
input_bindings = []
a = dml.input_tensor(builder, 0, dml.TensorDesc(data_type, [2, 2]))
input_bindings.append(dml.Binding(a, np.ones([2, 2], dtype=np.float32)))
b = dml.input_tensor(builder, 1, dml.TensorDesc(data_type, flags, [2, 2]))
input_bindings.append(dml.Binding(b, np.ones([2, 2], dtype=np.float32)))
c = dml.add(a, b)
op = builder.build(dml.ExecutionFlags.NONE, [c])
output_data = device.compute(op, input_bindings, [c])
output_tensor = np.array(output_data[0], np.float32)
print(output_tensor)

The output is:

[[2. 2.]
 [2. 2.]]

Internally, PyDirectML and DirectMLX.h will set the DimensionCount to 2. Other dimension counts also work, like 1-d or 3-d. Actually this is a very nice feature and would simplify user code. I just want to know whether this feature is officially supported by DirectML. If it is, the doc probably needs to be updated.

@jstoecker @wchao1115 , do you have any insights? Thanks!

Thanks for pointing this out. The statement in DML_BUFFER_TENSOR_DESC is out of date, and we'll need to correct that page to reflect some recent changes. The 4D/5D restriction used to be true, but our latest feature level (3_0) supports a wider range of values in certain operators. Moving forward, you'll want to check out the documentation for individual operator APIs to see the allowed dimension counts.
For example, DML_ELEMENT_WISE_ADD_OPERATOR supports 1 to 8 dimensions in feature level 3_0. PyDirectML uses the latest DirectML redistributable (version 1.4 / feature level 3_0) so you can rely on the latest feature level when determining usage in this case.

> For example, DML_ELEMENT_WISE_ADD_OPERATOR supports 1 to 8 dimensions in feature level 3_0.

That is what I look for. Thanks much @jstoecker !
GITHUB_ARCHIVE
As most of you know, the world is fighting the spread of COVID-19, and most people are spending time at home. What kind of STEM projects can we do during this period? I invite all of you to reply to this thread giving ideas for projects.

How about making a Contact-less Thermometer? A contact-less thermometer is a device which measures body temperature without making any contact with the skin. It can also measure the temperature of other things such as machines, PCBs, hot milk, etc. The average cost of available units in the market ranges from Rs 2000 to Rs 6000. These units do not allow much tinkering in terms of repair, re-calibration and upgrading, and a unit can seem like a complex and magical black box. So let's explore it via a STEM project approach. Making our own contact-less thermometer will help us explore the technology inside it and the principles involved, and enable us to put our own design thinking into it.

I know that as most of us are at home it is difficult to fabricate it, but let's discuss its design, requirements, the science and technology involved, etc. As we have access to our laptops/desktops/tablets and internet connectivity, let's collaborate on development of things through what I term "Digital Pre-Fabrication". Through this I mean the following:

- Study of contact-less thermometers, their principles and the technology used.
- Deciding which principle and technology to use, and defining our design parameters and application features.
- Surveying available sensors and selecting a suitable one by reading datasheets as per our application.
- Selecting a microcontroller or MCU board (such as Arduino, ESP etc).
- Making a circuit diagram on paper. This will include interfacing a processing device with inputs (sensor, switches), outputs (small display, LEDs) and power circuitry (which must be battery powered). There may also be some communication device such as WiFi or Bluetooth as per the features defined.
- Making a schematic and PCB design from the circuit diagram. This requires working with CAD/CAM software such as KiCad, an open-source tool for designing PCBs.
- Designing a 3D model of the enclosure to hold the PCB, sensor, display, switches, LEDs and battery aesthetically. This also requires CAD software such as FreeCAD or OpenSCAD, open-source tools for designing 3D models.
- Defining the algorithm and writing the code, especially with comments. We cannot write complete and perfectly working code since we are not testing it on hardware, but we can write it in a generic way so that when we fabricate the device physically this code can easily be turned into working code.
Please note that the development done up to this stage (which may be about 65 to 75 percent of our project work) does not require any physical fabrication facilities (such as fabricating PCBs, soldering and testing electronic components, optimizing code while testing on the device, or 3D printing/laser cutting the enclosure and fitting the PCB and other components into it). The device can be fabricated (perhaps with some modification) whenever we get access to fabrication facilities via our offices, school labs, ATL, makerspaces, Fab Labs, innovation hubs, community centers, maker clubs, local market services, etc. Now coming to the learning and engagement point: if we carefully observe all the work done during "Digital Pre-Fabrication", makers experience learning and build up their skills in the following:
- Exploring the fundamentals of the science and technology behind the device.
- Working with electronic components and designing circuit diagrams.
- Working with Arduino or any other computing platform and writing code.
- Designing a PCB for their circuit diagram (exploring KiCad).
- Designing a 3D model for the enclosure (exploring OpenSCAD or FreeCAD).
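As a concrete starting point for the sensor-selection and coding steps above, here is a minimal Python sketch of the temperature math for one popular sensor choice, the MLX90614 IR thermometer. The sensor choice is my own illustration, not something fixed by this thread; the 0.02 K-per-LSB scaling comes from the MLX90614 datasheet, and on real hardware the raw value would be read over I2C rather than taken from a constant.

```python
def mlx90614_raw_to_celsius(raw: int) -> float:
    """Convert a raw MLX90614 temperature reading to degrees Celsius.

    The sensor reports temperature in units of 0.02 K (per its datasheet),
    so: kelvin = raw * 0.02, celsius = kelvin - 273.15.
    """
    return raw * 0.02 - 273.15


def fever_alert(celsius: float, threshold: float = 38.0) -> bool:
    """Simple application logic: flag readings at or above a fever threshold."""
    return celsius >= threshold


if __name__ == "__main__":
    # On real hardware this raw value would come from the sensor's
    # object-temperature register over I2C; here it is a stand-in.
    raw_reading = 14832
    temp_c = mlx90614_raw_to_celsius(raw_reading)
    print(f"{temp_c:.2f} C, fever: {fever_alert(temp_c)}")
```

Working out this conversion on paper is exactly the kind of "Digital Pre-Fabrication" step that can be finished before any hardware arrives.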
This is not limited to this particular project; we can also think of other computational projects. Let's discuss this and collaborate as a team to work on different modules of this project. The danger of touching your face does not stop when we use a mask over the nose and mouth; I think full face protection is better. How about folding a transparent (OHP) sheet around the face, supported by a cap on top and a cloth at the bottom? In humid and hot weather this will be inconvenient, so we will have to think about how to send cool air inside. So many challenges to solve. MakerAsylum did something like this: This looks cool, but it is unlikely to protect this person or others from the virus. My thoughts are as follows: eventually many of us will get the infection; just as we had flu earlier, we will bear it. Let us use this time to create designs and technology that help people, particularly doctors and nurses, to reduce contact and help those who are facing SARS symptoms. E.g., help make the doors of every hospital and other public places open and close without using handles. Ventilated, cool headgear, useful also for people facing polluted roads. Work on a prosumer net, to ensure the production and supply chain does not get broken even in situations like a pandemic. And discuss as much as possible on online sites to create knowledge as we live. Preventing pandemics, like natural disasters, is not possible. What is possible is to educate everyone to face a pandemic and reduce its effects in future. Let's think long-term, while we continue to deal with the short-term goals. Let's make everything contact-less, starting with a thermometer. Hello everyone, I have started the thread for development of the contact-less thermometer; please find the link below. Looking forward to collaborating with all of you. Happy developing!
I am a class 11th student; how can I help? I am learning React Native JS while working on an application to manage grocery shopping in our locality without crowding during the lockdown. A simple tutorial is here, you can give it a go: This in turn will help me progress with a browser-based learn-FFT-from-scratch-by-tinkering tool. I'm sure anybody who wishes to start with browser programming and is okay facing git bash will get good hands-on experience if you would like to collab. Let me know via a reply here! Hi @aakash_mylife, we also want to design a face mask/shield. Can you start thinking about the design of this face mask? It should cover the entire face without creating any problem for breathing, be easy to wear, and be easy to make with locally available components. Please post your diagrams and thoughts in this thread; let's work on this collaboratively for the betterment of society.
OPCFW_CODE
Managing Wireless Network Connections Connect to your home wireless network, log on to the wireless network at work, go to a meeting and borrow the free Wi-Fi connection at the hotel, grab lunch and your emails at the local cafe, stop by the library and grab some Dewey decimals out of the ether—if you save your connections for reuse, you’ll have a pile of wireless connections at the end of the day (or maybe as early as your morning break!). To sort out which connections live, which die, and which one’s top dog in your wireless universe, click the Manage Wireless Networks link from the task pane to open the dialog shown in Figure 12.8 A. The topmost connection is the one Windows Vista will use first. If you prefer a different connection, select it, and click Move Up. You can also move a connection down. General properties for the selected connection are displayed at the bottom of the dialog. To view detailed properties, click Adapter Properties to open a UAC-protected tabbed dialog (see Figure 12.8 B). Compared to Windows XP’s wireless network adapter properties, Windows Vista wireless adapters are configured with both IP version 4 and IP version 6 and also feature both the LLTD mapper I/O driver (used to map the network) and the LLTD responder (responds to signals from the mapper I/O driver). By default, wireless connections are set up for use by all users. To enable the option to set up per-user connections, click Profile Types to open the Wireless Network Profile Type dialog shown in Figure 12.8 C. If you choose the all-user/per-user option, you will be asked on subsequent connections if the connection is for all users or for the current user only. Wireless connection management is a lot easier in Windows Vista than in Windows XP. Figure 12.8 Managing wireless connections. 
Troubleshooting Your Network Connection Although the Network and Sharing Center includes a Diagnose and Repair task, it's not the only tool you have to figure out what ails your network connection. The graphic at the top of the Network and Sharing Center (refer to Figure 12.4) gives you a clear idea of what's going on: if there's an X marked across either the connection to the Internet or the connection to your router, you've lost your connection. Check the following: - For USB-based adapters, check the USB cable. Disconnect the adapter from the port or cable, wait a few moments, and plug it in again. - For removable adapters (such as CardBus or ExpressCard adapters on laptops), eject the wireless adapter and plug it in again. - For internal adapters (such as integrated or PCI card adapters), use the Wireless Network Connection Status dialog shown in Figure 12.9 A to disable and enable the adapter. - If the connection break is between the router and the Internet, check the WAN connection between the router and your broadband device (such as a cable or DSL modem). If the cable connection looks okay, disconnect power to the router, wait a few moments, and power it up again. - If the broadband device's signal lights indicate a problem, disconnect power to the device, wait a few moments, and power it up again. You may need to wait several minutes for the device to resynchronize. If your connection is working but seems to be slow, it's time to check network status. If you're running a wireless connection, count the number of bars (just like in the TV commercial, more is better). For more information about either a wired or wireless connection, click View Status. Figure 12.9 A shows a typical status dialog for a healthy wireless connection, and Figure 12.9 B shows a typical status dialog for a healthy Ethernet connection. Click the Details button for more information (see Figure 12.9 C). Figure 12.9 Viewing connection status.
If you can’t find an obvious problem with your connection, but it’s either completely out to lunch or acting as if it’s transmitting molasses in January, click the Diagnose button on the status dialog or click Diagnose and Repair from the Task menu. If you prefer to use advanced command-line stalwarts such as Ping, Tracert, and Netsh to troubleshoot your network, they’re still around. Open a command prompt window to use them.
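The troubleshooting checks above boil down to a small decision tree: is the link from the PC to the router up, and is the link from the router to the Internet up? As a hypothetical Python sketch of that logic (the function and its inputs are illustrative, not part of any Windows Vista tooling):

```python
def diagnose(pc_to_router_up: bool, router_to_internet_up: bool) -> str:
    """Map the two link states shown in the Network and Sharing Center
    graphic to the first troubleshooting step described in the text."""
    if not pc_to_router_up:
        return "Reseat or disable/enable the wireless adapter, then retry."
    if not router_to_internet_up:
        return "Check the WAN cable, then power-cycle the router and broadband device."
    return "Links are up; if the connection is slow, check signal bars via View Status."


if __name__ == "__main__":
    print(diagnose(True, False))
```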
OPCFW_CODE
Core Text - select text in iPhone? I need to render rich text using Core Text in my view (simple formatting, multiple fonts in one line of text, etc.). I am wondering if text rendered this way can be selected by the user (using the standard copy/paste functions)? It would be nice if @javsmo's answer were selected instead of the current one; it has 7 upvotes. I know, it is a very nice answer, but it appeared a few months after the selected answer (which also answered my question), so I am not sure this would be fair... Since the original responder deleted their answer: I implemented text selection in Core Text. It is really hard work... but it's doable. Basically you have to save all CTLine rects and origins using CTFrameGetLineOrigins(1), CTLineGetTypographicBounds(2), CTLineGetStringRange(3) and CTLineGetOffsetForStringIndex(4). The line rect can be calculated from the origin(1), ascent(2), descent(2) and offset(3)(4) as shown below. lineRect = CGRectMake(origin.x + offset, origin.y - descent, offset, ascent + descent); After doing that, you can test which line contains the touched point by looping over the lines (always remember that Core Text uses inverted Y coordinates). Knowing the line that contains the touched point, you can find the letter located at that point (or the nearest letter) using CTLineGetStringIndexForPosition. Here's one screenshot. For the loupe, I used the code shown in this post. Edit: To draw the blue background selection, you have to paint the rect using CGContextFillRect. Unfortunately, there's no background color in NSAttributedString. Hi javsmo, your example looks brilliant! Any chance you could send me some source code? Many thanks. Tom Thanks, @TomTom. My code to draw text and do the selections with Core Text is almost 500 lines; it's complicated to post it without any explanation. Which part of selection are you having trouble with? Hi javsmo, you are great; I hope you can share it with us, maybe on GitHub. Thanks!
@PareshNavadiya do you have any sample? I can add the magnifier view only; after that I don't have any idea. This was 8 years ago, but did you draw the handles, or make them layers and just update the rects? I am implementing something similar right now; I have the drawing done but not the handles and their movements. Everything was drawn in that case, including the handles; in the end it was simply a vertical line with two concentric circles. The app still uses that code, but I believe there should be something already written to solve this problem.
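The hit-testing described in the answer is plain geometry once the line metrics are saved. This Python sketch mirrors that step with hypothetical line metrics standing in for the values CTFrameGetLineOrigins and CTLineGetTypographicBounds would return; it only finds which line's rect contains a touch's y coordinate:

```python
def line_containing_y(lines, y):
    """Each line is (origin_y, ascent, descent): its vertical extent runs
    from origin_y - descent up to origin_y + ascent, matching the lineRect
    computation in the answer. Returns the index of the hit line, or None."""
    for i, (origin_y, ascent, descent) in enumerate(lines):
        if origin_y - descent <= y <= origin_y + ascent:
            return i
    return None


if __name__ == "__main__":
    # Three hypothetical lines, top to bottom in Core Text's flipped space.
    metrics = [(300.0, 12.0, 4.0), (280.0, 12.0, 4.0), (260.0, 12.0, 4.0)]
    print(line_containing_y(metrics, 282.0))
```

In the real implementation the x coordinate would then be passed to CTLineGetStringIndexForPosition to find the character within the hit line.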
STACK_EXCHANGE
Table of Contents - 1 What is the limit of URL length? - 2 What is the max size of HTTP request? - 3 How do I reduce the size of a URL? - 4 What is the size limit of a post request? - 5 What is difference between GET and POST? - 6 What is the difference between a display URL and a landing page URL? - 7 What’s the character limit for a LinkedIn message? - 8 Which is the first part of the request line? What is the limit of URL length? Google Chrome allows URLs up to 2MB in length, though other parts of the browser UI may impose smaller practical limits. In Firefox the length of the URL can be unlimited, but practically, after 65,536 characters the location bar no longer displays the URL. What is the max size of an HTTP request? The default value of the HTTP and HTTPS connector maximum post size is 2MB. However, you can adjust the value as per your requirements. The command below sets the connector to accept a maximum of 100,000 bytes. If the HTTP POST size exceeds 100,000 bytes, the connector returns HTTP/1.1 400 Bad Request. Which method is restricted to sending up to 1024 characters only? The GET method is restricted to sending up to 1024 characters only. Never use the GET method if you have a password or other sensitive information to send to the server. What is the character limit of a destination URL? For example, destination URLs can go up to 2048 characters, versus the standard 35-character limitation on display URLs. Folders: display URLs can show folders. As long as the display URL stays under the character limit, folders can be added to the root domain name. How do I reduce the size of a URL? How to shorten a URL using TinyURL: - Copy the URL you want to shorten. - Open TinyURL in your web browser. - Paste the URL into the “Enter a long URL to make tiny” field. - If you would like your shortened URL to include a specific phrase, enter it in the “Custom alias” field. - Click “Make TinyURL!” What is the size limit of a post request?
By default, a POST request has a maximum size of 8 MB, but you can modify this according to your requirements by editing the php.ini file (the PHP configuration settings). Does GET have a size limit? If you are using the GET method, you are limited to a maximum of 2,048 characters, minus the number of characters in the actual path. However, the POST method is not limited by the size of the URL for submitting name/value pairs. What is the difference between GET and POST? Both the GET and POST methods are used to transfer data from client to server in the HTTP protocol, but the main difference is that GET carries request parameters appended to the URL string, while POST carries request parameters in the message body, which makes it a more secure way of transferring data from client to server. What is the difference between a display URL and a landing page URL? To review, a display URL is the green URL that appears directly below your ad headline on search engine results pages. It typically resembles your site’s homepage URL, but it’s not hyperlinked. A final URL, aka the post-click landing page URL, is the actual web address of the page people reach when they click your ad. How do I create a free URL? Use Google Sites to create your free URL. You can create multiple websites under a single Google account and select a unique address for each one. Use one of Google’s layout templates or create your own using the HTML editor. How many unreserved characters are allowed in a GET parameter? There are 66 unreserved characters that don’t need any encoding: abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_.~ There are 18 reserved characters which need to be encoded: !*'();:@&=+$,/?#[] and all other characters must be encoded.
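The encoding rules for unreserved and reserved characters can be checked directly with Python's standard library: urllib.parse.quote percent-encodes everything except the unreserved set (plus whatever you pass in safe).

```python
from urllib.parse import quote

# Unreserved characters (letters, digits, - _ . ~) pass through untouched.
print(quote("abc-_.~", safe=""))     # abc-_.~

# Reserved characters are percent-encoded when not marked safe.
print(quote("a&b=c d#e", safe=""))   # a%26b%3Dc%20d%23e
```

Note that quote() keeps "/" unescaped by default (safe="/"), which is convenient when encoding a path rather than a single query value.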
What’s the character limit for a LinkedIn message? You probably already know that LinkedIn has a character limit when sending connection requests. For connection messages, the limit is 300 characters, and that includes spaces and all characters (letters, numbers, symbols, and even emojis). Which is the first part of the request line? The Request-Line begins with a method token, followed by the Request-URI and the protocol version, and ends with CRLF. The elements are separated by space (SP) characters. Let’s discuss each of the parts mentioned in the Request-Line. The request method indicates the method to be performed on the resource identified by the given Request-URI. What’s the maximum size of an HTTP request? Usually up to around 2GB is allowed by the average web server. This is also configurable somewhere in the server settings. The average server will return a server-specific error/exception when the POST limit is exceeded, usually as an HTTP 500 error.
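The Request-Line grammar described above (method SP Request-URI SP version, ending with CRLF) is simple enough to parse in a few lines; a minimal Python sketch:

```python
def parse_request_line(line: str):
    """Split an HTTP Request-Line into (method, request_uri, version).

    Per the grammar above, the three elements are separated by single SP
    characters and the line ends with CRLF, which is stripped first.
    """
    method, request_uri, version = line.rstrip("\r\n").split(" ")
    return method, request_uri, version


if __name__ == "__main__":
    print(parse_request_line("GET /index.html HTTP/1.1\r\n"))
```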
OPCFW_CODE
Memory leak with tf.data

I'm creating a tf.data.Dataset inside a for loop and I noticed that the memory was not freed as one would expect after each iteration. Is there a way to request that TensorFlow free the memory? I tried using tf.reset_default_graph(), and I tried calling del on the relevant Python objects, but this does not work. The only thing that seems to work is gc.collect(). Unfortunately, gc.collect() does not work on some more complex examples. Fully reproducible code:

```python
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import psutil
%matplotlib inline

memory_used = []
for i in range(500):
    data = tf.data.Dataset.from_tensor_slices(
            np.random.uniform(size=(10, 500, 500)))\
        .prefetch(64)\
        .repeat(-1)\
        .batch(3)
    data_it = data.make_initializable_iterator()
    next_element = data_it.get_next()

    with tf.Session() as sess:
        sess.run(data_it.initializer)
        sess.run(next_element)
    memory_used.append(psutil.virtual_memory().used / 2 ** 30)
    tf.reset_default_graph()

plt.plot(memory_used)
plt.title('Evolution of memory')
plt.xlabel('iteration')
plt.ylabel('memory used (GB)')
```

The issue is that you're adding a new node to the graph to define the iterator on each iteration; a simple rule of thumb is to never define new TensorFlow variables inside a loop. To fix it, move

```python
data = tf.data.Dataset.from_tensor_slices(
        np.random.uniform(size=(10, 500, 500)))\
    .prefetch(64)\
    .repeat(-1)\
    .batch(3)
data_it = data.make_initializable_iterator()
next_element = data_it.get_next()
```

outside the for loop, and just call sess.run(next_element) to fetch the next example. Once you've gone through all the training/eval examples, call sess.run(data_it.initializer) to reinitialize the iterator. This fix worked for me when I had a similar issue with TF 2.4:

```shell
sudo apt-get install libtcmalloc-minimal4
LD_PRELOAD=/path/to/libtcmalloc_minimal.so.4 python example.py
```

Any tips if this only works on every 10th run? This only partially fixes the issue.
I have a shuffle in my pipeline and that still causes a steady increase in allocated memory, albeit at a much slower rate with this fix. If you only need to create and then save the dataset to disk in the loop body, such as when preprocessing large amounts of data in smaller parts to avoid running out of memory, launch the loop body in a subprocess. This answer describes how to launch subprocesses in general. The Dataset API handles iteration via a built-in iterator, at least while eager mode is off or the TF version is not 2.0. So there's simply no need to create the dataset object from a numpy array inside the for loop, as that writes the values into the graph as tf.constant. This is not the case with data = tf.data.TFRecordDataset(), so if you transform your data to tfrecords format and run it inside the for loop, it won't leak memory:

```python
for i in range(500):
    data = tf.data.TFRecordDataset('file.tfrecords')\
        .prefetch(64)\
        .repeat(-1)\
        .batch(1)
    data_it = data.make_initializable_iterator()
    next_element = data_it.get_next()

    with tf.Session() as sess:
        sess.run(data_it.initializer)
        sess.run(next_element)
    memory_used.append(psutil.virtual_memory().used / 2 ** 30)
    tf.reset_default_graph()
```

But as I said, there's no need to create the dataset inside a loop:

```python
data = tf.data.Dataset.from_tensor_slices(
        np.random.uniform(size=(10, 500, 500)))\
    .prefetch(64)\
    .repeat(-1)\
    .batch(3)
data_it = data.make_initializable_iterator()
next_element = data_it.get_next()

for i in range(500):
    with tf.Session() as sess:
        ...
```

I'm familiar with the tf.data API; my question is about a different point, i.e. explicitly freeing memory allocated by TensorFlow with tf.data. Take a look at this, it specifically points out why it's not a good idea to try to free memory: https://github.com/tensorflow/tensorflow/issues/14181 You are creating a new Python object (the dataset) every iteration of the loop, and it looks like the garbage collector is not being invoked. Add an explicit garbage collection call and the memory usage should be fine.
Other than that, as mentioned in the other answer, keep building the data object and session outside of the loop:

```python
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import psutil
import gc
%matplotlib inline

memory_used = []
for i in range(100):
    data = tf.data.Dataset.from_tensor_slices(
            np.random.uniform(size=(10, 500, 500)))\
        .prefetch(64)\
        .repeat(-1)\
        .batch(3)
    data_it = data.make_initializable_iterator()
    next_element = data_it.get_next()

    with tf.Session() as sess:
        sess.run(data_it.initializer)
        sess.run(next_element)
    memory_used.append(psutil.virtual_memory().used / 2 ** 30)
    tf.reset_default_graph()
    gc.collect()

plt.plot(memory_used)
plt.title('Evolution of memory')
plt.xlabel('iteration')
plt.ylabel('memory used (GB)')
```

Sorry, I've just noticed that you did write that you tried gc.collect(). But what would be the more complex use case? Yes, I tried gc.collect(). My more complex use case involves many *.tfrecord files with a quite complex data pipeline. In this more complex use case, gc.collect() does not work. Thus my question: how to explicitly free the memory allocated by TensorFlow. I read somewhere about this, and in general: GPU memory cannot be freed explicitly, and with Python it comes down to gc.collect(), but then you run into the numerous typical issues with Python not wanting to release memory; there is a big thread about it on Stack Overflow. Here, I'm referring to general RAM, not GPU memory. I think you need to describe your use case more precisely, e.g. do you need to run through all those different TFRecord files at once or not?
STACK_EXCHANGE
/* Generated from nbformat-v3.schema.json by nbformat.js. Do not edit. */
/* eslint-disable camelcase, no-use-before-define, @typescript-eslint/no-explicit-any */

export type Output = Pyout | DisplayData | Stream | Pyerr

/**
 * IPython Notebook v3.0 JSON schema.
 */
export interface Notebook {
  /**
   * Notebook root-level metadata.
   */
  metadata: {
    /**
     * Kernel information.
     */
    kernel_info?: {
      /**
       * Name of the kernel specification.
       */
      name: string
      /**
       * The programming language which this kernel runs.
       */
      language: string
      /**
       * The codemirror mode to use for code in this language.
       */
      codemirror_mode?: string
      [k: string]: unknown
    }
    /**
     * Hash of the notebook.
     */
    signature?: string
    [k: string]: unknown
  }
  /**
   * Notebook format (minor number). Incremented for backward compatible changes to the notebook format.
   */
  nbformat_minor: number
  /**
   * Notebook format (major number). Incremented between backwards incompatible changes to the notebook format.
   */
  nbformat: number
  /**
   * Original notebook format (major number) before converting the notebook between versions.
   */
  orig_nbformat?: number
  /**
   * Original notebook format (minor number) before converting the notebook between versions.
   */
  orig_nbformat_minor?: number
  /**
   * Array of worksheets
   */
  worksheets: Worksheet[]
}

export interface Worksheet {
  /**
   * Array of cells of the current notebook.
   */
  cells: (RawCell | MarkdownCell | HeadingCell | CodeCell)[]
  /**
   * metadata of the current worksheet
   */
  metadata?: {
    [k: string]: unknown
  }
}

/**
 * Notebook raw nbconvert cell.
 */
export interface RawCell {
  /**
   * String identifying the type of cell.
   */
  cell_type: 'raw'
  /**
   * Cell-level metadata.
   */
  metadata?: {
    /**
     * Raw cell metadata format for nbconvert.
     */
    format?: string
    /**
     * The cell's name. If present, must be a non-empty string.
     */
    name?: string
    /**
     * The cell's tags. Tags must be unique, and must not contain commas.
     */
    tags?: string[]
    [k: string]: unknown
  }
  /**
   * Contents of the cell, represented as an array of lines.
   */
  source: string | string[]
}

/**
 * Notebook markdown cell.
 */
export interface MarkdownCell {
  /**
   * String identifying the type of cell.
   */
  cell_type: 'markdown' | 'html'
  /**
   * Cell-level metadata.
   */
  metadata?: {
    /**
     * The cell's name. If present, must be a non-empty string.
     */
    name?: string
    /**
     * The cell's tags. Tags must be unique, and must not contain commas.
     */
    tags?: string[]
    [k: string]: unknown
  }
  /**
   * Contents of the cell, represented as an array of lines.
   */
  source: string | string[]
}

/**
 * Notebook heading cell.
 */
export interface HeadingCell {
  /**
   * String identifying the type of cell.
   */
  cell_type: 'heading'
  /**
   * Cell-level metadata.
   */
  metadata?: {
    [k: string]: unknown
  }
  /**
   * Contents of the cell, represented as an array of lines.
   */
  source: string | string[]
  /**
   * Level of heading cells.
   */
  level: number
}

/**
 * Notebook code cell.
 */
export interface CodeCell {
  /**
   * String identifying the type of cell.
   */
  cell_type: 'code'
  /**
   * The cell's language (always Python)
   */
  language: string
  /**
   * Whether the cell is collapsed/expanded.
   */
  collapsed?: boolean
  /**
   * Cell-level metadata.
   */
  metadata?: {
    [k: string]: unknown
  }
  /**
   * Contents of the cell, represented as an array of lines.
   */
  input: string | string[]
  /**
   * Execution, display, or stream outputs.
   */
  outputs: Output[]
  /**
   * The code cell's prompt number. Will be null if the cell has not been run.
   */
  prompt_number?: number | null
}

/**
 * Result of executing a code cell.
 */
export interface Pyout {
  /**
   * Type of cell output.
   */
  output_type: 'pyout'
  /**
   * A result's prompt number.
   */
  prompt_number: number
  text?: string | string[]
  latex?: string | string[]
  png?: string | string[]
  jpeg?: string | string[]
  svg?: string | string[]
  html?: string | string[]
  javascript?: string | string[]
  json?: string | string[]
  pdf?: string | string[]
  /**
   * Cell output metadata.
   */
  metadata?: {
    [k: string]: unknown
  }
  /**
   * mimetype output (e.g. text/plain), represented as either an array of strings or a string.
   *
   * This interface was referenced by `Pyout`'s JSON-Schema definition
   * via the `patternProperty` "^[a-zA-Z0-9]+/[a-zA-Z0-9\-\+\.]+$".
   */
  [k: string]: any
}

/**
 * Data displayed as a result of code cell execution.
 */
export interface DisplayData {
  /**
   * Type of cell output.
   */
  output_type: 'display_data'
  text?: string | string[]
  latex?: string | string[]
  png?: string | string[]
  jpeg?: string | string[]
  svg?: string | string[]
  html?: string | string[]
  javascript?: string | string[]
  json?: string | string[]
  pdf?: string | string[]
  /**
   * Cell output metadata.
   */
  metadata?: {
    [k: string]: unknown
  }
  /**
   * mimetype output (e.g. text/plain), represented as either an array of strings or a string.
   *
   * This interface was referenced by `DisplayData`'s JSON-Schema definition
   * via the `patternProperty` "[a-zA-Z0-9]+/[a-zA-Z0-9\-\+\.]+$".
   */
  [k: string]: any
}

/**
 * Stream output from a code cell.
 */
export interface Stream {
  /**
   * Type of cell output.
   */
  output_type: 'stream'
  /**
   * The stream type/destination.
   */
  stream: string
  /**
   * The stream's text output, represented as an array of strings.
   */
  text: string | string[]
}

/**
 * Output of an error that occurred during code cell execution.
 */
export interface Pyerr {
  /**
   * Type of cell output.
   */
  output_type: 'pyerr'
  /**
   * The name of the error.
   */
  ename: string
  /**
   * The value, or message, of the error.
   */
  evalue: string
  /**
   * The error's traceback, represented as an array of strings.
   */
  traceback: string[]
}
STACK_EDU
Google Tech Talks
March 25, 2008

ABSTRACT

This talk is about discovering and modeling previously unspecified, recurring themes in a given set of arbitrary images. Given a set of images, each containing frequent occurrences of objects from multiple categories, the goal is to learn a compact model of the categories as well as their relationships, for the purposes of later recognizing/segmenting any occurrences in new images. Categories are not defined by the user. Also, whether and where instances of any categories appear in a specific image is not known. This problem is challenging, since it involves the following unanswered questions. What is an object category? What image properties should be used, and how should they be combined to discover category occurrences? What is an efficient multicategory representation? We will examine a methodology, developed during my postdoctoral work at UIUC. Each image is represented by a segmentation tree whose nodes correspond to image regions, segmented at all natural scales present, and edges between tree nodes capture the region embedding. The presence of any categories in the image set is then reflected in the frequent occurrence of similar subtrees within the segmentation trees. Our methodology is designed to: (1) match image trees to find similar subtrees; (2) discover categories by clustering similar subtrees, and use the properties of each cluster to learn the model of the associated category; and (3) learn the grammar of the discovered categories that compactly captures their recursive definitions in terms of other simpler (sub)categories and their relationships (e.g., containment, co-occurrence, and sharing of simple categories by more complex ones). When a new image is encountered, its segmentation tree is matched against the learned grammar to simultaneously recognize and segment all occurrences of the learned categories.
This matching also provides a semantic explanation of object recognition in terms of the identified parts along with their spatial relationships. The aforementioned methodology can also be used for identifying recurring image themes of a more general kind. An example is that of extracting the stochastically repeating, elementary parts of image texture (e.g., waterlilies on the water surface, people in a crowd). This talk will be taped by the engEDU Tech Talks Team.

Speaker: Sinisa Todorovic

Sinisa Todorovic received the joint B.S./M.S. degree with honors in electrical engineering from the University of Belgrade, Serbia, in 1994. From 1994 until 2001, he worked in the communications industry. He received the M.S. and Ph.D. degrees in electrical and computer engineering at the University of Florida, Gainesville, in 2002 and 2005, respectively. Since 2005, he has held the position of Postdoctoral Research Associate in the Beckman Institute at the University of Illinois Urbana-Champaign, where he collaborates with Prof. Narendra Ahuja. Sinisa's main research interests concern computer vision and machine learning, with a current focus on unsupervised extraction and representation of visual themes recurring in images. He is the recipient of the Jack Neubauer Best Paper Award 2004 for a publication in IEEE Trans. Vehicular Technology, and an Outstanding Reviewer Award at the Int. Conf. on Computer Vision (ICCV) 2007. He serves as Associate Editor of Advances in Multimedia.
OPCFW_CODE
Systems with both variable temperature and variable relative humidity For systems that experience both temperature and relative-humidity variability, the calculations become more complex, but still can be dealt with by MKT and MKRH approaches. To illustrate one complicating factor, two similar, hypothetical scenarios are considered. In both scenarios, the same hypothetical solid-state pharmaceutical product is exposed to identical temperatures and relative-humidity conditions, but in Scenario 1, the high temperature coincides with high relative humidity, and in Scenario 2, the high temperature coincides with low relative humidity, as shown in Table I. Table I: Hypothetical scenarios for systems with both temperature and relative-humidity variability. As a consequence of the exponential nature of the humidity-corrected Arrhenius equation, Scenario 1 is more stressful to the product than Scenario 2. For instance, if the hypothetical product had an Ea of 120 kJ·mol-1 and a B term of 0.04 (see later for a discussion on average typical values for Ea and B), Scenario 1 would result in three-fold more degradation than Scenario 2, and yet both scenarios have identical average temperature and humidity values (i.e., identical MKT, MKRH, arithmetic mean temperature, and arithmetic mean humidity). This illustrates how, in situations in which both temperature and humidity are varying, it can be misleading to calculate the MKT or MKRH in isolation of the other. In order to express situations with both varying temperature and humidity as a single, constant temperature and humidity condition, it is first important to recognize that there is a continuum of constant temperature and humidity conditions that would result in the same amount of degradation as the varying conditions, because any increase in the constant "average" temperature can be compensated by a decrease in the constant "average" relative humidity.
Any temperature can be chosen, therefore, within the variable temperature range, and a corresponding MKRH can be calculated using Equation 6: where MKRH = mean kinetic relative humidity (%) calculated for a given temperature, Tchoice, Tchoice = a chosen constant temperature (K), T1 to Tn = the variable temperature (K) measured at constant intervals, RH1 to RHn = the variable relative humidity (%) measured at constant intervals, and n = the number of temperature and relative humidity measurements. Alternatively, any relative humidity can be chosen (within the variable relative humidity range), and a corresponding MKT can be calculated using Equation 7: where MKT = mean kinetic temperature (K) calculated for a given relative humidity, RHchoice, and RHchoice = a chosen constant relative humidity (%). Despite their apparent complexity, these equations are relatively easy to apply using spreadsheet software. One potential problem with this approach is that there is a degree of arbitrariness when choosing Tchoice (or RHchoice). If this approach is used, then the MKT calculated without consideration of the variable relative humidity (e.g., according to USP <1150>) should be a good choice for Tchoice in most situations, and the associated MKRH can be calculated using Equation 6 (4). Similarly, the MKRH calculated without consideration of the variable temperature should be a good choice for RHchoice, and the associated MKT can be calculated using Equation 7. Figure 1 shows the MKT and MKRH combinations calculated for Scenario 1 and Scenario 2 using this 'combined' approach to temperature and relative humidity variations. As shown in Figure 1, the use of MKT and MKRH calculated in isolation from each other overestimates the degradation for Scenario 2 but underestimates it for Scenario 1, and the use of arithmetic mean temperature and humidity significantly underestimates the degradation for Scenario 1 and underestimates it even for Scenario 2.
Figure 1: The solid lines represent the temperature-relative humidity combinations that are calculated to result in the same amount of degradation as the variable temperature and relative humidity in Scenarios 1 and 2. These lines were calculated for a product with an activation energy (Ea) of 120 kJ·mol-1 and a B term of 0.04, which represent average values for solid-state pharmaceutical products. (ALL FIGURES ARE COURTESY OF THE AUTHOR)
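Since Equations 6 and 7 themselves did not survive extraction here, the following Python sketch reconstructs the calculation from the humidity-corrected Arrhenius model the article describes (rate proportional to exp(-Ea/RT)·exp(B·RH)). The function names and the exact closed form are the editor's inference, not the published equations; the example values Ea = 120 kJ·mol-1 and B = 0.04 are taken from the article.

```python
import math

R = 8.314  # gas constant, J/(mol·K)

def mkt(temps_k, ea=120e3):
    """USP-style mean kinetic temperature (K) from temperatures in kelvin."""
    mean_rate = sum(math.exp(-ea / (R * t)) for t in temps_k) / len(temps_k)
    return (ea / R) / (-math.log(mean_rate))

def mkrh(temps_k, rhs, t_choice, ea=120e3, b=0.04):
    """Mean kinetic RH (%) at a chosen constant temperature t_choice (K).

    Inferred by equating exp(b*MKRH - ea/(R*t_choice)) with the average of
    exp(b*RH_i - ea/(R*T_i)) over the measured intervals."""
    mean_rate = sum(math.exp(b * rh - ea / (R * t))
                    for t, rh in zip(temps_k, rhs)) / len(temps_k)
    return (math.log(mean_rate) + ea / (R * t_choice)) / b

# Constant conditions reproduce themselves, a useful sanity check for a
# spreadsheet implementation:
print(round(mkt([298.15] * 4), 2))                       # 298.15
print(round(mkrh([298.15] * 4, [60.0] * 4, 298.15), 2))  # 60.0
```

Because the rate is an exponential (convex) function of temperature, the MKT of a varying profile always sits above the arithmetic mean temperature, which is the article's point about the arithmetic mean underestimating degradation.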
IdToken missing from AccountInfo object on page reload Core Library MSAL.js (@azure/msal-browser) Core Library Version 2.37.0 Wrapper Library MSAL Angular (@azure/msal-angular) Wrapper Library Version 2.5.7 Public or Confidential Client? Public Description Unable to retrieve the id token from the account object (needed for logout) after reloading the page. Logout scenario: Navigate to the settings page (which is guarded by MsalGuard). Then reload the page and invoke the msal auth service getActiveAccount() method (the id token is missing from the account object here). Currently using logoutRedirect() with idTokenHint, as 'Require ID token in logout requests in Azure B2C user flow' is configured to true. Log after reloading the page. It seems like it's failing because of the authority metadata stored in memory storage, which gets cleared on page reload, and the authority metadata is not discovered until acquireToken gets called? MSAL Configuration { auth: { "clientId": "client-id", "authority": "https://domain.b2clogin.com/domain.onmicrosoft.com/signin_policy_name", "redirectUri": "http://localhost:4200/auth", "postLogoutRedirectUri": "http://localhost:4200/", "knownAuthorities": [ "domain.b2clogin.com" ] }, "cache": { "cacheLocation": "localStorage", "storeAuthStateInCookie": false }, "guard": { "interactionType": "redirect", "authRequest": { "scopes": [] }, "loginFailedRoute": "/auth-failed" }, "interceptor": { "interactionType": "redirect", "protectedResourceMap": { "https://apidomain.azurewebsites.net/api/": [ "https://domain.onmicrosoft.com/ApiName/RouteName" ] } } } Relevant Code Snippets logout() { const account = this.msalService.instance.getActiveAccount(); this.msalService.logoutRedirect({ account: account, idTokenHint: account.idToken }); } Identity Provider Azure B2C Basic Policy Source External (Customer) @sunil-ems Can you confirm that the idToken is returned in the response before the reload?
AuthorityCache is in-memory and does not survive page reloads, but the cached account object should have had the idToken cached already in the first place. Yes, but there should be a config environment variable that matches the lookup. This looks like a potential bug. Can you share simple sample code so we can repro on our end? Hi @sameerag, please refer to this sample profile page to reproduce the issue. If you navigate to the profile page after login you'll get the idToken, but not after reloading the profile page. @Component({ selector: 'app-profile', templateUrl: './profile.component.html', styleUrls: ['./profile.component.css'] }) export class ProfileComponent implements OnInit { idToken?: string; constructor( private authService: MsalService ) { } ngOnInit() { const account = this.authService.instance.getActiveAccount(); if(account) { this.idToken = account.idToken; } } } Update the profile html file to this: <div>ID Token: {{ idToken || 'Not Found'}}</div> I'm also experiencing this with msal-browser 2.37.1. I'm trying to comply with a B2C logout configuration that requires the idTokenHint in a situation where the browser page is reloaded and/or the cache is dropped. When I call getAllAccounts() or getAccountByHomeId, the AccountInfo object returned does not contain an idToken value. I can see the cached token in Local Storage, but the library does not fetch it. I have a temporary workaround that's working, but it appears there is a long history of similar issues (see #5053), and my expectation is the AccountInfo object should contain this value, fetched from cache internally by the library.
protected getIdTokenFromCache(homeAccountId: string): string | null { if(!this.msalInstance) { this.logger.error("MSAL is not Initialized"); return null; } const cache = this.msalInstance.getTokenCache(); // @ts-ignore // This isn't exposed by MSAL.js type definitions const idTokenEntity = cache.storage.getIdToken(homeAccountId); return idTokenEntity?.secret; } Are my expectations wrong that when I select an account via something like getAllAccounts() it would contain the cached idToken as a property on the account? Or is this a bug? @sunil-ems and @MattWatt let me get back once I get a chance to reproduce this. Marking it as a bug. This is expected behavior (for now), as the authority metadata is required in order to do the token lookup, and we can't do the metadata retrieval in our account APIs because it's an async operation; it only works if the data is already there (which it's not on a fresh page load). The idToken was added to the AccountInfo object as a convenience feature but is done on a best-effort basis; it's not guaranteed. If you need the idToken, acquireTokenSilent is the best way to retrieve it. We will be making some improvements to this in our v3 release, where we will pre-fetch the metadata during initialization, but that work has not been prioritized yet. Thanks @tnorling for the update. @MattWatt and @sunil-ems closing this as we cannot fix this in v2. We will try improving this experience in the next iteration. Please also note that perf for the call acquireTokenSilent should be equivalent to fetching from cache as long as the token is cached.
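The maintainer's recommendation above (use acquireTokenSilent rather than reading account.idToken after a reload) can be sketched as follows. The interfaces and the stub client are illustrative stand-ins, not msal-browser itself, and requesting only the openid scope is an assumption for the ID-token-only case:

```typescript
// Hypothetical sketch: fetch the idToken for logout via acquireTokenSilent
// instead of relying on the best-effort AccountInfo.idToken property.
interface AuthResult { idToken: string }
interface SilentClient {
  acquireTokenSilent(req: { scopes: string[]; account: unknown }): Promise<AuthResult>;
}

async function idTokenForLogout(client: SilentClient, account: unknown): Promise<string> {
  // openid alone should suffice when only the ID token is needed (assumption)
  const result = await client.acquireTokenSilent({ scopes: ["openid"], account });
  return result.idToken;
}

// Mock demo: a stub standing in for msal-browser's PublicClientApplication.
const stub: SilentClient = {
  acquireTokenSilent: async () => ({ idToken: "stub.jwt.value" }),
};
idTokenForLogout(stub, {}).then((t) => console.log(t)); // stub.jwt.value
```

In a real app, the returned token would then be passed as the idTokenHint to logoutRedirect, which is what the B2C user flow requires.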
These industry experts will coach you on how to simplify sophisticated jobs, saving you a lot of time, frustration, and confusion. They'll also identify your mistakes and help you correct them promptly. These simple steps help students get some extra attention from their speakers and lecturers too, and maintain a relationship that can help them build their future skills. Direct contact with your writer: if you have been asked to use a particular book as a source for your paper and you completely forgot to mention this when placing the order, feel free to use the online messaging system to reach your helper directly. Send files, specify instructions, and monitor the work's progress at any time. We can offer suitable help in producing an Android app. Our staff of programming specialists is quite productive in providing programming help on Android app development, depending on your needs. There are many programming languages, and several of them have specialized uses. Our staff consists of experts with diverse experience who have worked in different languages. CodeAbbey looks like a more general version of Rosalind--less string manipulation and pattern finding (which is relevant to bioinformatics), and more standard algorithms like linear searches or Fibonacci. This section is centered on higher-order functions -- the feature that gives functional programming much of its expressiveness and elegance -- and its name! As usual, the first reading below introduces you to the area, but it may make more sense after you dive into the lectures. Also be sure not to skip the material on course motivation that we have placed in the "lesson" between the other videos for this week and the homework assignment. 
Occasionally programming students wish they had a genie with them who could do all their work; now students' wishes have been granted, as we offer programming assignment help to those students. C programming is a broad field that requires a lot of exploration. Students should not rely only on theoretical information but also on practical knowledge related to C programming. We provide the best C online help based on both theoretical and practical understanding. In addition to the exceptional and consistent quality of our C assignments, we make sure that we revise and make the necessary amendments to the C assignment if any client is not entirely satisfied with the earlier written C assignment. Get helpful assistance on working with various software applications and cross-platform environments in Java language programs. Our supreme-quality assistance can help you solve Java-oriented issues. We assist our students with several services such as online accounts tutoring, accounts homework help, accounts assignment help, accounts projects, accounts notes, etc. Further, we have a skilled team to support you with all accounts-related methods. I agree - everyone should check them out, and they are a bunch of fun, but you very quickly hit a point (probably after more than 5) where you spend more time learning advanced math topics than you do coding. First, upload your accounts assignment or homework on our website or mail it to us at our email ID, i.e., firstname.lastname@example.org. Our qualified panel will go through it meticulously, and once they are 100% sure of the answer, we will get back to you with an appropriate price quote. Furthermore, our service can complete your programming homework on time, consistent with the requirements of your program. 
With our easy payment options and competitive rates, you can get your programming assignments done whenever you need them.
CSS Display Properties: Flex, Grid, and More Whether you are a web designer, developer, or just getting started with web design, understanding how to use CSS display properties is essential for creating stunning layouts and crafting effective, responsive designs. This article will walk you through the basics of CSS display properties, from a quick overview of flex and grid to debugging strategies and best practices. Introduction to CSS Display Properties CSS display properties are responsible for defining how elements on a web page should be displayed. They are the building blocks for web design, allowing developers and designers to control the size, shape, and position of elements on the page. CSS display properties also determine how elements are displayed when the page changes size and orientation, making them essential for creating responsive layouts. The most commonly used display properties are display: flex and display: grid, both of which are used to create flexible and responsive layouts. Flexbox and grid are powerful tools for developing layouts, but they are just one part of the puzzle. This article will explain how to use flex, grid, and other display properties to craft stunning designs for any size screen. Overview of Flex and Grid At their core, Flex and Grid are both types of display properties. Flex is a one-dimensional layout system that defines the direction, size, and position of elements on the page. Flexbox works best for single-axis layouts, such as menus or navigation bars, and is great for crafting responsive designs. Grid is a two-dimensional layout system that allows developers and designers to create complex, grid-based layouts on the page. Grid is ideal for creating responsive designs that will scale with the page size while maintaining a consistent layout. Building Flexible Layouts with Flexbox Flexbox is a great tool for building flexible layouts. 
It works by assigning elements to either a row or a column, and then defining how the elements should be laid out within the row or column. For example, you can use flexbox to create a navigation bar with all of the items laid out in a single row. Flexbox also allows you to control the size and position of elements, making it easy to create responsive layouts that will scale to any size screen. To create a flex layout, use the display: flex property. This will set the container element to display: flex and all of its children to behave as flex items. You can then set the direction of the layout, either row or column, and define how the elements should be laid out. You can also define the size and position of elements with properties such as flex-shrink. Crafting Responsive Layouts with Grid Grid is great for crafting complex, responsive layouts. With grid, you can lay out elements on a two-dimensional grid, define the size and position of elements, and even assign elements to specific grid lines. Grid is especially useful for creating responsive designs that will scale with the page size while maintaining a consistent layout. To create a grid layout, use the display: grid property. This will set the container element to display: grid and all of its children to behave as grid items. You can control the spacing between elements with grid-gap, and you can also define the size and position of elements with properties such as grid-template-areas. Exploring Additional Display Properties In addition to flex and grid, there are several other display properties that can be used to craft stunning layouts. The most commonly used are display: inline, display: inline-block, and display: block, which are used to control the flow of content on the page. display: inline displays elements on the same line, while display: inline-block displays them as blocks that are laid out on the same line. display: block displays elements on separate lines, which is useful for creating headings and paragraphs.
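The flex and grid recipes described above can be sketched in a few lines of CSS; the class names here are illustrative:

```css
/* Single-axis layout: a horizontal navigation bar */
.nav {
  display: flex;
  flex-direction: row;          /* lay items out along one axis */
  justify-content: space-between;
  gap: 1rem;
}

/* Two-dimensional layout: a responsive card grid */
.cards {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
  gap: 1rem;                    /* the modern spelling of grid-gap */
}
```

The auto-fit/minmax pattern is one common way to get the "scales with the page size" behavior the article describes without media queries.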
Other display properties, such as display: table and display: list-item, can be used to create complex layouts and lists. Best Practices for Applying CSS Display Properties When applying CSS display properties, it is important to consider the impact of the layout on the page. Here are some best practices for crafting responsive designs with display properties: - Always use the display property to define the layout type. - Use flex-shrink to control the size and position of elements in a flex layout. - Use grid-gap to control the spacing of elements in a grid layout. - Use grid-template-areas to control the size and position of elements in a grid layout. - Use display: block to control the flow of text on the page. - Use display: list-item to create complex layouts and lists. Enhancing Layouts with Combining Flex and Grid One of the best ways to create stunning layouts is to combine flex and grid. By combining the two types of display properties, you can create complex, responsive layouts that are easy to maintain and scale. Flex and grid can be combined by using the display: flex property on the parent element, and then setting the child elements to display: grid. This will create a nested grid, with the parent element controlling the size and position of the child elements. This is great for creating complex, grid-based layouts that are easy to maintain and scale. Tips for Working with CSS Display Properties Working with CSS display properties can be challenging, but there are some tips that can help. Here are a few: - Start with the basic layout and work your way up. - Always use the display property to define the layout type. - Make sure to test your layouts on multiple devices and browsers. - Don't be afraid to use trial and error to find the best layout for your needs. - Try combining flex and grid to create complex, responsive layouts. Debugging CSS Display Properties Debugging CSS display properties can be a daunting task, but there are some tips that can help.
First, make sure to use the display property to define the layout type, and use the appropriate properties for that layout type (e.g., flex properties for flex layouts and grid properties for grid layouts). Second, make sure to test your layouts on multiple devices and browsers. Different browsers may render the same layout differently, so it's important to make sure the layout looks the same on all devices and browsers. Finally, don't be afraid to use trial and error to find the best layout for your needs. Experiment with different display properties and layouts until you find the one that works best. Conclusion: Crafting Stunning Layouts with the Right CSS Display Properties CSS display properties are the building blocks of web design, and understanding how to use them is essential for creating stunning layouts and crafting effective, responsive designs. In this article, we have covered the basics of CSS display properties, including the different values that you can use, how to use them to create different layouts, and some debugging strategies and best practices. We have also looked at two of the most popular CSS display properties: flex and grid. Flex is a great choice for creating layouts that are flexible and responsive, while grid is a good choice for creating layouts that are more complex and have a lot of columns and rows. By understanding how to use CSS display properties, you can create stunning layouts that will look great on any device. So what are you waiting for? Start experimenting with CSS display properties today!
Too many errors logged while editing card It's really a great card once configured right. But while editing, an amazing amount of errors was logged in just a matter of 15 minutes: Logger: frontend.js.latest.202310050 Source: components/system_log/__init__.py:300 First occurred: 13:17:08 (54034 occurrences) Last logged: 13:31:45 Uncaught error from WebKit 605.1.15 on iOS 17.0.3 TypeError: null is not an object (evaluating 'this._this.querySelector("#maincard").clientWidth') updateContent (/hacsfiles/history-explorer-card/history-explorer-card.js:70:6684) Yeah the card doesn't really like the way the HA edit dialog spam-calls certain entry points in the JS code. Depending on what and how you edit, it may generate errors. But is this a problem? I mean the YAML is invalid while you're editing it, after all. Just ignore the errors. Well, my log file grew by a hundred MB while editing my cards 🥹 Very hard to spot relevant errors... That's a lot of logs... How long did you edit the card for? I mean I usually get something like maybe a dozen errors while editing my cards; hundreds of megabytes of errors is insanely extreme. It's also difficult to reproduce the error above, as it's too generic. Do you have an example of a broken YAML that will generate this error? No, it was just about 1 hour of tweaking card options. Now the logs are gone, I can't reproduce this. The error message you posted is too generic and does not point to the problem itself. In order to proceed with your issue, you will have to provide an example YAML config that produces this error for you. I will try to reproduce, as I simply don't recall what I was doing.
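For context, the TypeError being logged is a null dereference on #maincard. A guard of roughly this shape (the function and names here are hypothetical, not the card's actual code) is the usual way to stop that kind of spam:

```javascript
// Hypothetical sketch: bail out when the card root isn't in the DOM yet,
// instead of dereferencing querySelector('#maincard') unconditionally.
function safeClientWidth(root) {
  const card = root && typeof root.querySelector === 'function'
    ? root.querySelector('#maincard')
    : null;
  if (!card) return null; // not rendered yet (e.g. mid-edit, empty entities)
  return card.clientWidth; // safe: card is a real element here
}

// Mock demo: a detached root yields null instead of an uncaught TypeError.
console.log(safeClientWidth(null));                                            // null
console.log(safeClientWidth({ querySelector: () => ({ clientWidth: 480 }) })); // 480
```

The HA edit dialog re-invokes the card's entry points on every keystroke, so the guard has to tolerate a half-built DOM rather than assume the config is valid.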
This is what causes the log spam on my system: graphs: - type: line entities: Without any entities specified yet, it generated over a thousand logs in 3 minutes: Logger: frontend.js.latest.202310302 Source: components/system_log/__init__.py:300 First occurred: 12:07:15 (1321 occurrences) Last logged: 12:11:00 Uncaught error from WebKit 605.1.15 on iOS 17.1.1 TypeError: null is not an object (evaluating 'this._this.querySelector("#maincard").clientWidth') updateContent (/hacsfiles/history-explorer-card/history-explorer-card.js:70:6684) Without any entities specified yet, it generated over a thousand logs in 3 minutes Ok, I see. This would also generate an error on Firefox, but only a single one. It seems browser dependent. In any case, I fixed that with the commit above. Will be in the next release. Let's see how it goes. Wow, that is fast, looking forward to checking it 🥳 Added to V1.0.51. I just tested the card. The empty entities case is solved (thanks). BUT when I briefly edited the entities list, by just adding one entity.
I got over 1000 errors after the edit: Logger: frontend.js.latest.202310302 Source: components/system_log/__init__.py:300 First occurred: 18:51:47 (1107 occurrences) Last logged: 18:53:34 Uncaught error from WebKit 605.1.15 on iOS 17.1.1 TypeError: null is not an object (evaluating 'this._this.querySelector("#maincard").clientWidth') updateContent (/hacsfiles/history-explorer-card/history-explorer-card.js:70:6793) Uncaught error from WebKit 605.1.15 on iOS 17.1.1 TypeError: undefined is not an object (evaluating 'c.chart.data.datasets[t].data=g') buildChartData (/hacsfiles/history-explorer-card/history-explorer-card.js:9:14322) generateGraphDataFromCache (/hacsfiles/history-explorer-card/history-explorer-card.js:9:8252) updateHistory (/hacsfiles/history-explorer-card/history-explorer-card.js:9:21747) today (/hacsfiles/history-explorer-card/history-explorer-card.js:1:239877) createContent (/hacsfiles/history-explorer-card/history-explorer-card.js:70:6408) updateContent (/hacsfiles/history-explorer-card/history-explorer-card.js:70:6856) This happened while I was just adding this single entity: graphs: - type: timeline entities: - entity: binary_sensor.relais_ctrl_connection As long as I omit the 'entity:' object part, or have an incorrect entity name, the log gets spammed (using the companion app on iOS) Same here. Every time I open a tab that has the history explorer card, I get a lot of errors in the event log. I don't even have to update anything in the card. Just opening the tab that has the card logs errors.
Logger: frontend.js.latest.202312082 Source: components/system_log/__init__.py:300 First occurred: December 18, 2023 at 12:27:23 PM (51 occurrences) Last logged: 12:44:30 AM Uncaught error from Chrome <IP_ADDRESS> on Windows 10 TypeError: Cannot read properties of undefined (reading 'isValid') a (/hacsfiles/history-explorer-card/history-explorer-card.js:1:199379) n.getLabelForIndex (/hacsfiles/history-explorer-card/history-explorer-card.js:1:203457) n.update (/hacsfiles/history-explorer-card/history-explorer-card.js:1:147705) n.handleEvent (/hacsfiles/history-explorer-card/history-explorer-card.js:1:152901) t.eventHandler (/hacsfiles/history-explorer-card/history-explorer-card.js:1:109759) n (/hacsfiles/history-explorer-card/history-explorer-card.js:1:109111) x.<computed> (/hacsfiles/history-explorer-card/history-explorer-card.js:1:171659) Uncaught error from Chrome <IP_ADDRESS> on Windows 10 TypeError: Cannot read properties of undefined (reading 'length') n.draw (/hacsfiles/history-explorer-card/history-explorer-card.js:1:152336) t._drawTooltip (/hacsfiles/history-explorer-card/history-explorer-card.js:1:107453) t.draw (/hacsfiles/history-explorer-card/history-explorer-card.js:1:106701) t.render (/hacsfiles/history-explorer-card/history-explorer-card.js:1:106319) Object.callback (/hacsfiles/history-explorer-card/history-explorer-card.js:1:162710) Object.advance (/hacsfiles/history-explorer-card/history-explorer-card.js:1:100971) Object.startDigest (/hacsfiles/history-explorer-card/history-explorer-card.js:1:100690) /hacsfiles/history-explorer-card/history-explorer-card.js:1:100546 Uncaught error from Chrome <IP_ADDRESS> on Windows 10 TypeError: Cannot read properties of undefined (reading 'length') n.draw (/hacsfiles/history-explorer-card/history-explorer-card.js:1:152336) t._drawTooltip (/hacsfiles/history-explorer-card/history-explorer-card.js:1:107453) t.draw (/hacsfiles/history-explorer-card/history-explorer-card.js:1:106701) Object.afterEvent 
(/hacsfiles/history-explorer-card/history-explorer-card.js:80:1693) Object.notify (/hacsfiles/history-explorer-card/history-explorer-card.js:1:129199) t.eventHandler (/hacsfiles/history-explorer-card/history-explorer-card.js:1:109776) n (/hacsfiles/history-explorer-card/history-explorer-card.js:1:109111) x.<computed> (/hacsfiles/history-explorer-card/history-explorer-card.js:1:171659) Uncaught error from Chrome <IP_ADDRESS> on Windows 10 TypeError: Cannot read properties of undefined (reading 'state') HistoryCardState.getFormattedLabelName (/hacsfiles/history-explorer-card/history-explorer-card.js:9:6254) HistoryCardState.newGraph (/hacsfiles/history-explorer-card/history-explorer-card.js:9:15753) HistoryCardState.addGraphToCanvas (/hacsfiles/history-explorer-card/history-explorer-card.js:9:35145) HistoryCardState.addFixedGraph (/hacsfiles/history-explorer-card/history-explorer-card.js:9:31993) HistoryCardState.createContent (/hacsfiles/history-explorer-card/history-explorer-card.js:70:4199) HistoryCardState.updateContent (/hacsfiles/history-explorer-card/history-explorer-card.js:70:6843) Sadly the history explorer card has been declared end of life and won't be developed anymore. So I will have to close this issue / FR without resolution. This repository will be archived and set to read-only shortly. See this post on the HA forum for more information: https://community.home-assistant.io/t/new-interactive-history-explorer-custom-card/369450/978
case class - combine pattern match I have a method defined as below and wanted to combine pattern matches. The or operator (||) gives me a compiler error. def isPaired(input: String): Boolean = { def go(x: List[Char], level: Int = 0): Boolean = { x match { case Nil => true case '(' :: xs1 if level < 0 => false case '[' :: xs1 if level < 0 => false case '{' :: xs1 if level < 0 => false case ')' :: xs1 if level == 0 => false case ']' :: xs1 if level == 0 => false case '}' :: xs1 if level == 0 => false case '(' :: xs1 => go(xs1, level + 1) case '[' :: xs1 => go(xs1, level + 1) case '{' :: xs1 => go(xs1, level + 1) case ')' :: xs1 => go(xs1, level - 1) case ']' :: xs1 => go(xs1, level - 1) case '}' :: xs1 => go(xs1, level - 1) case _ :: xs1 => go(xs1, level + 1) } } go(input.toList) } The below gives a compiler error: case '(' :: xs1 || '[' :: xs1 || '{' :: xs1 if level < 0 => false case ('(' :: xs1) || ('[' :: xs1) || ('{' :: xs1) if level < 0 => false How do I apply an or condition? Close, the correct syntax would be case ('(' | '[' | '{') :: xs1 if (level < 0) => false.
There are several issues with what you tried: You are using || (the or operator) instead of | (the pipe operator) to represent multiple cases, as @Luis commented - see this question. You are trying to reference a variable where multiple cases are combined - see this question. So what you can try instead is: def isPaired(input: String): Boolean = { def go(x: List[Char], level: Int = 0): Boolean = { x match { case Nil => true case ('(' | '[' | '{') :: xs1 if level < 0 => false case ')' :: xs1 if level == 0 => false case ']' :: xs1 if level == 0 => false case '}' :: xs1 if level == 0 => false case '(' :: xs1 => go(xs1, level + 1) case '[' :: xs1 => go(xs1, level + 1) case '{' :: xs1 => go(xs1, level + 1) case ')' :: xs1 => go(xs1, level - 1) case ']' :: xs1 => go(xs1, level - 1) case '}' :: xs1 => go(xs1, level - 1) case _ :: xs1 => go(xs1, level + 1) } } go(input.toList) } Or, even more condensed: def isPaired(input: String): Boolean = { def go(x: List[Char], level: Int = 0): Boolean = { x match { case Nil => true case ('(' | '[' | '{') :: xs1 if level < 0 => false case (')' | ']' | '}') :: xs1 if level == 0 => false case ('(' | '[' | '{') :: xs1 => go(xs1, level + 1) case (')' | ']' | '}') :: xs1 => go(xs1, level - 1) case _ :: xs1 => go(xs1, level + 1) } } go(input.toList) }
How to Fix SAS Error "Connection failed: connection to the remote browser server failed" When you run SAS 9.2, you may see some log error message: "Could not display help because connection to the remote server failed". "Connection failed: The requested information could not be displayed because the connection to the remote browser server failed. Either start the remote browser server on your computer or enter the URL below into a web browser to download or install the remote browser server." The remote browser is new for SAS 9.2. It allows you to view HTML, PDF, and RTF files created on Windows 64-bit and non-Windows platforms in a local Windows-based browser window. The remote browser is discussed here. Installing the remote browser via the URL in the dialog message is one option for circumventing the problem. However, if you are running SAS 9.2 TS2M0 on a Windows 64-bit machine and are running SAS directly (or locally) on the Windows 64-bit machine, adding the following OPTIONS statement can fix the log error: OPTIONS HELPBROWSER=SAS; This will display your ODS output locally and eliminates the need for the remote browser server. If the OPTIONS statement above circumvents the original problem and you want to set the default value of the HELPBROWSER option to SAS, modify your SASV9.CFG file to add the following line: -HELPBROWSER SAS On Windows, you will find the SASV9.CFG file in the following Windows directory (where !SASROOT is your default SAS install directory): !SASROOT\nls\en Disable Windows TCP/IP auto-tuning for better connection to SAS server To make the connection to your remote SAS server better, you might need to check the following connection mode. Sometimes it is helpful to disable Windows's auto-tuning of TCP/IP. Some networking devices, such as SPI firewalls, some NAT routers, VPN endpoints, and WiFi devices, have problems with the way Windows Vista resizes the TCP window.
Possible symptoms include: web traffic OK, email timeouts on receiving only, slow or no network file server access, random network timeouts or connectivity problems, freezing or slow web browsing, or VPN connection problems. To show the current setting for your computer, run the following first: 1) Click Start --> All Programs --> Accessories --> Command Prompt, right-click "Command Prompt" and choose "Run as administrator". 2) Type the following command at the black/dark screen and press Enter: netsh interface tcp show global 3) A few settings will be printed; one of them is for auto-tuning. If the setting is not "disabled", you can run the following command: netsh interface tcp set global autotuninglevel=disabled 4) If it actually gets worse, you can set it back to the default setting with the following command: netsh interface tcp set global autotuninglevel=normal
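Collected in one place, the steps above as they would be typed in an elevated Command Prompt (Windows-only commands, shown for reference; note that the netsh parameter is autotuninglevel):

```bat
:: Show current global TCP settings (look for "Receive Window Auto-Tuning Level")
netsh interface tcp show global

:: Disable auto-tuning if it is not already disabled
netsh interface tcp set global autotuninglevel=disabled

:: Revert to the default if the connection gets worse
netsh interface tcp set global autotuninglevel=normal
```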
Applications and Tools I've Found Useful Having just completed the back-end portion of the Launch School curriculum, I thought now would be a good time to reflect on the last 4–5 months of working mostly full-time at my studies. I know I've benefitted at various times from the generosity, advice, and perspective of other students in the Launch School program, and I'll feel fortunate if I can do the same for others. I've done some natural trial-and-error with different tools to facilitate my studying; here are a few that have stuck for me. These are all for OS X, but I'm sure there are some Windows or Linux equivalents for some of these: - Quiver — Here I give credit to a couple of other students who recommended this note-taking app in the forums. I started out the first few months filling up page after page with colorful and detailed notes on the material. The obvious advantage of physical note-taking is the freedom and flexibility of formatting it offers you. There are also some studies on the kinesthetic benefits of committing something to memory via writing. At some point I was getting frustrated with the inability to go back and revise definitions or re-organize my thoughts on a concept without writing everything out again. I tried out a few markdown note-taking apps before finding Quiver, and for me it could not be better suited to my note-taking style with this curriculum. It enables you to mix regular text with code blocks and markdown blocks in one document. You'll pick up the keyboard shortcuts pretty quickly, which makes switching between the different formats efficient. I use it to create language cheat sheets, definitions and then examples of code, screenshots with notes from some of the videos, etc. You can organize your notes into notebooks for each course, and having everything indexed and searchable is incredibly useful when you just need to review your notes on a particular topic.
You can even customize the layout and color schemes to fit your OCD visual quirks. Great value for $10 — unfortunately it’s only on Mac for now.
- RescueTime — Many people may have heard of this tool. There’s a free version and a premium version; personally, I find the free version sufficient for my needs. It runs in the background constantly and tracks your active time for every app you use and website you visit. It allows you to tag apps and websites on a scale from ‘Very Unproductive’ to ‘Very Productive’, then automatically produces visual reports and a productivity score for your day/week/month. You can also set goals (e.g. 5+ hours of productive time per day), and it keeps a history of which days you met your goals. It helps keep me somewhat accountable for my time, and in the long run helps me see patterns in my study habits. It’s often reaffirming to get an objective (if somewhat shallow) sense of your time and productivity looking back over a day or week after it’s done. Like any quantified tracking tool, it’s probably best not to feel enslaved to it, but rather to use it as a fairly objective source of information about your habits.
- Spectacle — I find myself very, very frequently needing to split windows on my display, with a LS curriculum page on one side — book, lesson content, video, questions, basically everything — and my code editor, an IRB or PostgreSQL session, or notes in Quiver on the other side. Spectacle allows you to automatically and perfectly resize windows to half or a third of the screen, either vertically or horizontally, with a quick keyboard shortcut. I find myself using this constantly and am always thankful for it.
- Focus — This is a simple Pomodoro app that sits in your menu bar and lets you select an amount of time to focus for. It blocks ‘unproductive’ websites for that time, and the timer counts down where you can see it as a reminder.
I know there are lots of Pomodoro apps out there; this is just the one I happened to settle on, and it works for me. If I’m having a tough time getting started or engaged with some content, I’ll just set it for 25 minutes and give myself at least that much time committed to the material. If at the end of that time I’m feeling good, I’ll do another 25 minutes; if not, I’ll take a break and come back to it later. It consistently makes me realize that while 25 minutes doesn’t seem like a long time, you can often get more done than you think in that timeframe.
- Anki — I think I also discovered this thanks to some recommendations from the LS community. It’s a little hard to explain how spaced repetition works without being long-winded; I recommend reading this for a more comprehensive overview of the topic. Essentially, this application enables you to create flash cards for quick, simple questions. It will then give you a random assortment of however many cards you set it to show each session (I do 20 or so). If you consistently enter little syntax tricks, useful methods, or language concepts into it, you build up quite a bank of cards. From then on you can take 10–15 minutes a day to review some cards, which hopefully prevents you from forgetting older content as you progress through the course. I try to create a batch of cards after I finish a course, when I’m reviewing and studying for the assessment. This forces me to break down the content into smaller, manageable chunks for flash cards, which helps me highlight what is worth remembering from the whole course. I recommend using screenshots of code here to save time.
- FitBit — Okay, so this one isn’t technically a desktop tool, but I got one recently and I find it really helpful for keeping me accountable for staying active and moving throughout the day. Sitting around and staring at a computer all day can lead to quite a bit of lethargy.
It’ll buzz me with reminders if I haven’t moved for an hour, and it helps motivate me to get at least 10,000 steps in a day. Useful for encouraging breaks and exercise, as unfortunately solo online studying does not easily facilitate either of those habits. The applications and tools you use are not going to make or break your success at Launch School. Habits, mindset, and discipline are far, far more important. It’s very easy to get caught up in a Tim Ferriss-like obsession with productivity-hacking and time-management-hacking trends as the most ironic form of procrastination known to mankind, often at the expense of actual work. But helpful tools can make learning a whole lot more efficient and frictionless.
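As an aside on how Anki's scheduling works under the hood: spaced-repetition tools like it are generally variations on the SM-2 algorithm. Here is a simplified illustrative sketch, not Anki's exact implementation; the 2.5 starting ease and 1.3 floor are the textbook SM-2 defaults:

```python
# Simplified SM-2-style scheduler (illustrative sketch only).

def next_interval(interval_days, ease, quality):
    """Return (new_interval_days, new_ease) after reviewing a card.

    quality: 0-5 self-rating of recall; below 3 counts as a lapse.
    """
    if quality < 3:
        return 1, ease  # lapse: show the card again tomorrow
    # Ease drifts with recall quality, floored at 1.3 so intervals keep growing.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if interval_days <= 1:
        return 6, ease  # second successful review jumps to roughly a week
    return round(interval_days * ease), ease

# Three successful reviews: the interval grows from 1 day to over a month.
interval, ease = 1, 2.5
for quality in (4, 5, 3):
    interval, ease = next_interval(interval, ease, quality)
# interval is now 39
```

The point is just that intervals grow multiplicatively with each successful recall, which is why a 10–15 minute daily review can keep a large card bank alive.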
If you’re starting out, the best thing you can do is start building a profile for yourself. The results are incomparable. A few pointers: rank on competitive platforms like HackerEarth, and take on a few projects either as an intern or a freelancer. You can use this opportunity to explore platforms, tools, and technologies that are based on Python. The beauty of a programming language like Python is that there is always more to learn or a better way to do things.

Industry: open-source software consulting
Connect with Melissa on Twitter: @Melissawm

Melissa Weber Mendonça is an applied mathematician and former university professor turned software engineer. She works at Quansight, developing open-source software and working on consulting projects. Melissa is a 2020 fellow of the Python Software Foundation. Over the course of her career, she has been actively involved with the Brazilian Python community and believes that open-source contributions can go beyond code.

I work on a number of different projects in different roles. My work involves code, docs, community, and people management. One of my big takeaways is that engineering is more than just writing code—you have to interact with your team and all the different stakeholders in your project.

I’ll say: find a good support system. Other women in similar roles as you; people who have the same responsibilities. Connect with them and find a place to exchange experiences and feel comfortable. This has been a huge game changer for me, and it might be for you too!

In her interview with the Scientific Python Community, Melissa also answered the following questions: I think because of the transition from “academia” to “developer,” I called myself a junior-senior. I had to relearn a bunch of stuff, even though I had a lot of experience in academia. And so, figuring out the right processes and workflow for software development was sometimes a challenge.
I think the basic advice that everyone gives, and I don’t know if it’s correct or not, is to scratch your own itch. Find a project that you’re interested in, that will maybe help you do your work better, or something that you’re already familiar with, that actually drives you to contribute and to offer that time to your open-source project. Find something that really sparks joy, and this will make you feel more comfortable and motivated to contribute. Head over here to watch Melissa’s full interview with the Scientific Python Community.

There are also Discord servers for groups of Pythonistas (shout out to Orit Mutznik, @OritSiMu, who introduced me to this) and plenty of women in the community who can answer your questions, even if you don’t know about them yet. Also, there’s no such thing as a silly question in Python. We all have to start somewhere! Connect with fellow Python lovers of all skill levels, and it’ll make the transition far easier.

It depends on what kind of role you’re looking for. If you’re looking for fully remote roles, I’d recommend Remoters (although it’s a little more generic and digital marketing-focused, it’s niched to remote-only jobs). Python.org has a dedicated jobs section specific to Python job opportunities across a range of industries and positions, so it’s a great resource, too. And again, within the communities mentioned above and others, there are always people who know people.

Melanie is a Python developer and owner of Raspians—a beginner-focused site to help people get acquainted with and learn about Raspberry Pis. There is no denying that the tech and software space is still very male-dominated, and working as a female in some of these organizations can be quite intimidating. I consider myself very lucky as a woman in tech, as my time as a Python developer in both small startups and larger teams was generally very good.
As with all workplaces and industries, this is largely due to having had some great employers, managers, and colleagues. I am definitely aware that this isn’t the case for many women in tech, unfortunately, and the space has a long way to go when it comes to equality and opportunity for women. My advice to women starting out is to do your research on any company you’re looking to work for. It’s impossible to discover every minor detail about a workplace before starting there. However, the internet has brought a lot more transparency to workplace cultures nowadays, so you can often get a good feel for a place long before submitting an application. Know your worth, build your skills, and be meticulous in finding a workplace that operates with fair and just values based on merit, not gender.

As a Python engineer, I’m constantly looking at documentation, writing documentation, and updating my teammates and our task board—usually Jira, but sometimes Trello. I am constantly learning. I also like to take breaks from my screen, because the job does require consecutive hours of looking at a monitor. I try to take a full-hour lunch in the middle of the day and periodic breaks. I think this is the healthiest way to code, especially if I can get some time to move my body during my break.

The day-to-day varies based on what company one works for. Something common to the daily or weekly cadence of most engineers is the “standup.” It is an opportunity for everyone on the team to give a short synopsis of what they’ve been working on. The goal of the standup is to talk about what you have done, what you’re about to do, and whether you’re hitting any blockers. As an engineer, I’m also communicating with my team a lot. At Microsoft, we use Teams; in my previous roles, we mostly used Slack. I’ve worked with consulting clients that use Discord, and long ago I heard of teams that used Skype (not very common at all!).
The key is that there are lots of asynchronous messages going back and forth in order to solve problems. A common misconception is that engineers are siloed and solve the biggest engineering problems all by themselves. Although there is absolutely a degree of independence in programming that doesn’t exist in other jobs, software teams are expected to be highly communicative and very collaborative. Common activities include “pair programming” and “rubber ducking,” both variations on solving problems with a partner by talking through your code. Sometimes you will pair program with someone, which entails one person writing the code and the other person on the call watching and assisting. It’s a great way to be thoughtful about code and to let another person who may also need to be familiar with the code get context at the same time. Some people consider it “inefficient,” but in my experience, I often end up with better code, a better rapport with my teammate, and better documentation. Plus, another person on the team becomes aware of a feature in case they need to continue work based on the code we wrote together. Another type of communication is “rubber ducking,” which is mostly used when there is a bug or a problem to solve, and one needs to talk through the current requirements, the current approach, and alternative approaches to figure out the next steps. Sometimes it’s good to rubber duck when you “don’t even know where to start,” or when you have an obscure error you can’t find elsewhere and common solutions aren’t working.

My advice is to continue not to be afraid to reach for what is possible. All of the problems in any other technical field also exist in programming and engineering. Some may not be as virulent as others, but they still exist. We face misogyny regularly, and it is common for others to assume that you are less technical than your peers—women even do it to other women.
It was important for me to figure out where to draw the line. I check in with myself regularly to see how my working relationship with my teammates feels, and I try not to assume responsibility where I do not have ownership and recognition. I have drawn my lines less leniently in other areas of my working life, and I communicate my needs to my peers without apologies. Each person will have to make their own decisions about where their lines are, but I encourage women to trust that they are valuable people in the workplace who deserve the ability to make those decisions. All that being said, there is definitely a large wave of engineering organizations with progressive policies and culture, coupled with the actions necessary to create a safe workplace for women and people of marginalized genders. I have had some of the best coworkers since transitioning into tech, and I continue to find more and more places where I feel safe as a black woman. I hope that all women and people of marginalized genders are able to experience the career benefits of being in tech, career stability, and working in the safe spaces that I know are out there. Every day, we, as a collective, push the workforce a little farther in the right direction—making it a place that welcomes women and treats them with respect.

As a Python developer, I develop the backend for mobile and web applications using Django, Django REST Framework, and a number of other tools. Some of the tools and technologies I commonly use include PyCharm as a Python IDE, GitHub for collaboration on API projects with my team, Docker for running containerised versions of the project I am working on locally on my PC, and Postman for testing and documenting the API endpoints I develop. A successful career as a Python developer can be achieved by gaining experience building Python-based projects and learning how to share those projects with collaborators.
Women who want to be successful Python developers should also consider the specific use case or aspect of Python they want to pursue in order to have a well-tailored experience while learning Python fundamentals. Understanding from the outset which aspect of Python development you are interested in allows you to focus on learning the specific libraries and methods used in that sector. Also, building Python-based projects and hosting them on open-source repositories such as GitHub will help you build a portfolio of projects that you can showcase to potential employers.

By observing the career progression of these six women, we hope you’ll get inspired to succeed as a woman in Python. Since STX Next is the largest Python software agency in Europe, we have plenty of resources on our programming language of choice that you should find worthwhile. Here’s a selection of a few to get you started: Are you a woman (or a man, for that matter) looking to start your career in Python? If so, we’d argue you couldn’t be in a better place. We’re always hiring, and we don’t care if you have little to no experience—juniors, regulars, and seniors alike are all welcome. Check out our job postings and apply today! And if you have any questions you’d like to ask us about your Python project or anything else related to software development, don’t hesitate to drop us a line. We’ll get back to you in no time!
Converts HTTrack crawls to WARC files.

Status: Working on many crawls but needs more testing on corner cases. We're not using it in production yet.

This tool works by reading the HTTrack cache directory (hts-cache) and any available log files to reconstruct an approximation of the original requests and responses. This process is not perfect, as not all the necessary information is always available. Some of the information that is available is only present in debug log messages that were never intended for machine consumption. Please see the list of known issues and limitations below.

Download the latest release jar and run it under Java 8 or later.

Usage: httrack2warc [OPTIONS...] -o outdir crawldir

Options:
  --cdx FILENAME               Write a CDX index file for the generated WARCs.
  -C, --compression none|gzip  Type of compression to use (default: gzip).
  -x, --exclude REGEX          Exclude URLs matching a regular expression.
  -h, --help                   Show this screen.
  -n, --name PATTERN           WARC name pattern (default: crawl-%d.warc.gz).
  -o, --outdir DIR             Directory to write output (default: current working directory).
  -q, --quiet                  Decrease logging verbosity.
  --redirect-file PATTERN      Direct synthetic redirects to a separate set of WARC files.
  --redirect-prefix URLPREFIX  Generates synthetic redirects from HTTrack-rewritten URLs to original URLs.
  --rewrite-links              When the unmodified HTML is unavailable, attempt to rewrite links to undo HTTrack's URL mangling. (experimental)
  -s, --size BYTES             WARC size target (default: 1GB).
  --strict                     Abort on issues normally considered a warning.
  -Z, --timezone ZONEID        Timezone of HTTrack logs (default: Australia/Sydney).
  -I, --warcinfo 'KEY: VALUE'  Add extra lines to warcinfo record.
  -v, --verbose                Increase logging verbosity.
Conduct a crawl into a temporary directory (/tmp/crawl) using HTTrack:

$ httrack -O /tmp/crawl http://www.example.org/
Mirror launched on Mon, 08 Jan 2018 13:50:40 by HTTrack Website Copier/3.49-2 [XR&CO'2014]
mirroring http://www.example.org/ with the wizard help..
Done.www.example.org/ (1270 bytes) - OK
Thanks for using HTTrack!

Run httrack2warc over the output to produce a WARC file. By default the output file will be named crawl-0.warc.gz:

$ java -jar httrack2warc-shaded-0.2.0.jar /tmp/crawl
Httrack2Warc - www.example.org/index.html -> http://www.example.org/

Replay the ingested WARC files using a replay tool like pywb:

$ pip install --user pywb
$ PATH="$PATH:$HOME/.local/bin"
$ wb-manager init test
$ wb-manager add test crawl-*.warc.gz
[INFO]: Copied crawl-0.warc.gz to collections/test/archive
$ wayback
[INFO]: Starting pywb Wayback Web Archive Replay on port 8080
# Open in browser: http://localhost:8080/test/*/example.org/

When migrating from an HTTrack-based archive to a WARC-based one, you may have the problem of breaking existing links which used the HTTrack-manipulated filenames. To assist with this, httrack2warc can synthesize redirect records from an HTTrack path to the reconstructed original live URL. For example, suppose you have the following situation:

Original URL: http://example.com/index.php?id=16
HTTrack URL:  http://httrack/arc/2016/example.com/indexd455f.html

Then setting the --redirect-prefix option will generate a redirect like:

http://httrack/arc/2016/example.com/indexd455f.html -> http://example.com/index.php?id=16

You can then put a webserver rule on http://httrack/ that simply redirects all requests into your new WARC-based archive. You can configure synthetic redirects to be written to a separate set of WARC files using the --redirect-file option.

Known issues and limitations

By default HTTrack does not record HTTP headers. If the --debug-headers option is specified, however, the file hts-ioinfo.txt will be produced containing a log of the request and response headers.
When headers are available, httrack2warc produces WARC records of type request and response. When headers are unavailable, only WARC resource records are produced. The Transfer-Encoding header is always stripped, as the encoded bytes of the message are not recorded by HTTrack.

Redirects and error codes: Currently, without hts-ioinfo.txt and an entry in the cache zip (newer versions of HTTrack), non-200 status code responses are converted to resource records and the status code is lost. See issue #3.

IP addresses and DNS records: HTTrack does not record DNS records or the IP addresses of hostnames; therefore httrack2warc cannot produce WARC-IP-Address fields or DNS records.

HTTrack version compatibility: Some testing has been done against crawls generated by the following versions: 3.01, 3.21-4, 3.49-2. Not all combinations of options have been tested.

For cases when the original HTML is unavailable, there is an experimental --rewrite-links option which will modify the HTML, changing links from filenames to absolute URLs. This feature is somewhat primitive and does not currently handle all cases.

Install Java JDK 8 (or later) and Maven. On Fedora Linux:

dnf install java-1.8.0-openjdk-devel maven

Then compile using Maven from the top level of this repository:

cd httrack2warc
mvn package

This will produce an executable jar file which you can run like so:

java -jar target/httrack2warc-*-shaded.jar --help

Copyright (C) 2017-2020 National Library of Australia
Licensed under the Apache License, Version 2.0.
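To make the synthetic-redirect idea concrete, here is a rough Python sketch of a minimal WARC response record carrying an HTTP 301. The field names follow the WARC 1.0 specification, but this is only an approximation: httrack2warc's actual records may differ in record type and carry extra headers such as WARC-Record-ID.

```python
from datetime import datetime, timezone

def synthetic_redirect_record(from_url, to_url):
    """Build a minimal WARC response record containing an HTTP 301 redirect
    from an HTTrack-mangled URL back to the original live URL."""
    # The payload is a bare HTTP 301 response pointing at the original URL.
    http = (
        "HTTP/1.1 301 Moved Permanently\r\n"
        f"Location: {to_url}\r\n"
        "Content-Length: 0\r\n"
        "\r\n"
    ).encode("ascii")
    # WARC header block; Content-Length covers the HTTP payload bytes.
    headers = (
        "WARC/1.0\r\n"
        "WARC-Type: response\r\n"
        f"WARC-Target-URI: {from_url}\r\n"
        f"WARC-Date: {datetime.now(timezone.utc).strftime('%Y-%m-%dT%H:%M:%SZ')}\r\n"
        "Content-Type: application/http;msgtype=response\r\n"
        f"Content-Length: {len(http)}\r\n"
        "\r\n"
    ).encode("ascii")
    return headers + http + b"\r\n\r\n"  # records end with two CRLFs

record = synthetic_redirect_record(
    "http://httrack/arc/2016/example.com/indexd455f.html",
    "http://example.com/index.php?id=16",
)
```

A replay tool reading such a record would serve the 301 for the old HTTrack path, sending the browser on to the reconstructed original URL.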
I am working on DB2 V9.1 on an AIX server.

Setup:
1. USEREXIT = ON
2. LOGSECOND = -1
3. db2 backup database xxx online to <path>

Problem:
1. How do I determine the isotime to use when performing the rollforward operation?
2. I get the SQL4970N error; how do I resolve this? I copied all the active logs into the archive log path before restoration and am still facing the same problem. Please help me with the rollforward command; I don't want to include logs in the backup image itself.

I am getting the same SQL4970N error, which says processing has halted at logfile <logfilename>. I have checked the archive log path for the existence of that logfile, and it is there. I am experimenting with this on a test machine now. Please take the trouble to check step by step and help me see what I am doing wrong.

2. DB cfg modifications:
db2 update db cfg for sample using userexit on
db2 update db cfg for sample using logsecond -1
db2 update db cfg for sample using logfilsiz 200 (parameter changed to fill the log files quickly during testing)
3. db2 backup database sample to <backup_path> (full backup)
4. connect to sample
5. db2 "insert into staff (select * from staff)" (perform this operation 10 times to archive the log file automatically)
6. db2 backup database sample online to <backup_path>
7. db2 drop table staff (the client asked to restore the database from the last backup available; here, that's the online backup taken at step 6)
8. copy all active logs into the archive log path
9. db2 restore database sample taken at <online_backup_image_time>
10. db2 rollforward database sample to <current time> using local time and stop

SQL4970N: Roll-forward recovery on database "SAMPLE" cannot reach the specified stop point .....

It's really urgent that I start implementing userexit.

I suppose you want to restore to a PIT prior to step #7 (drop table)?
What you can do is:
- Execute "db2 list history backup all for db sample".
- Look for the timestamp you're going to restore to in step #9 and check "Earliest Log" and "Current Log". This is the log range you need in order to roll forward to the end of the online backup.
- Create a new directory and copy this log range into it.
- Execute the rollforward using: db2 "rollforward db sample to end of logs and stop overflow log path <dir where you copied the logs in the previous step> noretrieve"

But the rollforward should also work if you don't copy/move the logs. DB2 will invoke the userexit (don't specify noretrieve in this case) to retrieve the required logs. The only requirement is that you roll forward to at least the minimum PIT, which is the end of the online backup.

Can you just help me pinpoint what was going wrong in my case? As you said, I don't need to copy the active logs, but even if I do, it shouldn't be a problem. Doesn't DB2 find on its own which logs it has to use for the rollforward operation, from all the logs made available to it? I mean, why create a separate directory and put only the required logs in it?

Thanks for your quick reply... I tried it that way and it's not working. What I understand is that when userexit is enabled, it automatically retrieves the required logs during rollforward from the specified archive path, but it's not working that way in my case. Anyway, yours was the perfect solution. Thanks again. Stay happy always!!!
Logging Data in Python

In this section we'll log and visualize our first non-trivial dataset, putting many of Rerun's core concepts and features to use. In a few lines of code, we'll go from a blank sheet to something you don't see every day: an animated, interactive, DNA-shaped abacus. This guide aims to go wide instead of deep. There are links to other doc pages where you can learn more about specific topics. At any time, you can check out the complete code listing for this tutorial here to better keep track of the overall picture. We assume you have working Python and rerun-sdk installations. If not, check out the setup page. For this tutorial you will also need to pip install numpy scipy. Start by opening your editor of choice and creating a new file called dna_example.py. The first thing we need to do is import rerun and initialize the SDK by calling rr.init. This init call is required prior to using any of the global logging calls, and allows us to name our recording using an application id: import rerun as rr rr.init("rerun_example_dna_abacus") Check out the reference to learn more about how Rerun deals with applications and recordings. Next up, we want to spawn the Rerun Viewer itself. To do this, you can add the line rr.spawn(). Now you can run your application just as you would any other Python script: (venv) $ python dna_example.py And with that, we're ready to start sending out data. By default, the SDK will start a viewer in another process and automatically pipe the data through. There are other means of sending data to a viewer, as we'll see at the end of this section, but for now this default will work great as we experiment. The following sections will require importing a few different things to your script.
We will do so incrementally, but if you just want to update your imports once and call it a day, feel free to add the following to the top of your script: from math import tau import numpy as np from rerun_demo.data import build_color_spiral from rerun_demo.util import bounce_lerp, interleave from scipy.spatial.transform import Rotation The core structure of our DNA-looking shape can easily be described using two point clouds shaped like spirals. Add the following to your file: # new imports from rerun_demo.data import build_color_spiral from math import tau NUM_POINTS = 100 # points and colors are both np.array((NUM_POINTS, 3)) points1, colors1 = build_color_spiral(NUM_POINTS) points2, colors2 = build_color_spiral(NUM_POINTS, angular_offset=tau*0.5) rr.log_points("dna/structure/left", points1, colors=colors1, radii=0.08) rr.log_points("dna/structure/right", points2, colors=colors2, radii=0.08) Run your script once again and you should now see this scene in the viewer. Note that if the viewer was still running, Rerun will simply connect to this existing session and replace the data with this new recording. This is a good time to make yourself familiar with the viewer: try interacting with the scene and exploring the different menus. Check out the Viewer Walkthrough and viewer reference for a complete tour of the viewer's capabilities. This tiny snippet of code actually holds much more than meets the eye... Although the Rerun Python SDK exposes concepts related to logging primitives such as points and lines, under the hood these primitives are made up of individual components like positions, colors, and radii. For more information on how the Rerun data model works, refer to our section on entities and components. Our Python SDK integrates with the rest of the Python ecosystem: the points and colors returned by build_color_spiral in this example are vanilla NumPy arrays. Rerun takes care of mapping those arrays to actual Rerun components depending on the context (e.g.
we're calling log_points in this case). Entities & hierarchies Note the two strings we're passing in ("dna/structure/left" and "dna/structure/right"): these are Entity Paths, which uniquely identify each Entity in our scene. Every Entity is made up of a path and one or more Components. Entity paths typically form a hierarchy, which plays an important role in how data is visualized and transformed (as we shall soon see). One final observation: notice how we're logging a whole batch of points and colors all at once here. Batches of data are first-class citizens in Rerun and come with all sorts of performance benefits and dedicated features. You're looking at one of these dedicated features right now, in fact: notice how we're only logging a single radius for all these points, yet somehow it applies to all of them. A lot is happening in these two simple function calls. The good news is: once you've digested all of the above, logging any other Component will simply be more of the same. In fact, let's go ahead and log everything else in the scene now. We can represent the scaffolding using a batch of 3D line segments: # new imports from rerun_demo.util import interleave points = interleave(points1, points2) rr.log_line_segments("dna/structure/scaffolding", points, color=[128, 128, 128]) Which only leaves the beads: # new imports import numpy as np from rerun_demo.util import bounce_lerp offsets = np.random.rand(NUM_POINTS) beads = [bounce_lerp(points1[n], points2[n], offsets[n]) for n in range(NUM_POINTS)] colors = [[int(bounce_lerp(80, 230, offsets[n] * 2))] for n in range(NUM_POINTS)] rr.log_points("dna/structure/scaffolding/beads", beads, radii=0.06, colors=np.repeat(colors, 3, axis=-1)) Once again, although we are getting fancier and fancier with our logging calls, there is nothing new here: it's all about building out numpy arrays and feeding them to the Rerun API. Up until this point, we've completely set aside one of the core concepts of Rerun: Time and Timelines.
Even so, if you look at your Timeline View right now, you'll notice that Rerun has kept track of time on your behalf anyway, by memorizing when each log call occurred. Unfortunately, the logging time isn't particularly helpful to us in this case: we can't have our beads animate depending on the logging time, else they would move at different speeds depending on the performance of the logging process! For that, we need to introduce our own custom timeline that uses a deterministic clock which we control. Rerun has rich support for time: whether you want concurrent or disjoint timelines, out-of-order insertions, or even data that lives outside the timeline(s)… you'll find a lot of flexibility in there. Let's add our custom timeline: # new imports from rerun_demo.util import bounce_lerp time_offsets = np.random.rand(NUM_POINTS) for i in range(400): time = i * 0.01 rr.set_time_seconds("stable_time", time) times = np.repeat(time, NUM_POINTS) + time_offsets beads = [bounce_lerp(points1[n], points2[n], times[n]) for n in range(NUM_POINTS)] colors = [[int(bounce_lerp(80, 230, times[n] * 2))] for n in range(NUM_POINTS)] rr.log_points("dna/structure/scaffolding/beads", beads, radii=0.06, colors=np.repeat(colors, 3, axis=-1)) A call to set_time_seconds will create our new Timeline and make sure that any logging calls that follow get assigned that time. ⚠️ If you run this code as is, the result will be… surprising: the beads are animating as expected, but everything we've logged until that point is gone! ⚠️ That's because the Rerun Viewer has switched to displaying your custom timeline by default, but the original data was only logged to the default timeline (called log_time). To fix this, go back to the top of the file and, right after rr.spawn(), add: rr.set_time_seconds("stable_time", 0) This fix actually introduces yet another very important concept in Rerun: "latest at" semantics.
Notice how the entities "dna/structure/left" and "dna/structure/right" have only ever been logged at time zero, and yet they are still visible when querying times far beyond that point. Rerun always reasons in terms of "latest" data: for a given entity, it retrieves all of its most recent components at a given time. There's only one thing left: our original scene had the abacus rotating along its principal axis. As was the case with time, (hierarchical) space transformations are first-class citizens in Rerun. Now it's just a matter of combining the two: we need to log the transform of the scaffolding at each timestamp. Either expand the previous loop to include logging transforms, or simply add a second loop like this: # new imports from scipy.spatial.transform import Rotation for i in range(400): time = i * 0.01 rr.set_time_seconds("stable_time", time) rr.log_transform3d( "dna/structure", rr.RotationAxisAngle(axis=[0, 0, 1], radians=time / 4.0 * tau), ) rr.spawn is great when you're experimenting on a single machine like we did in this tutorial, but what if the process doing the logging doesn't have a graphical interface to begin with? Rerun offers several solutions for these use cases. At any time, you can start a Rerun Viewer by running rerun. This viewer is in fact a server that's ready to accept data over TCP (it listens on 0.0.0.0:9876 by default). Run rerun --help for more options. Sometimes, sending the data over the network is not an option. Maybe you'd like to share the data, attach it to a bug report, etc. Rerun has you covered: use rr.save to stream all logged data to a file on disk, which you can later view with the Rerun Viewer. You can also save a recording (or a portion of it) as you're visualizing it, directly from the viewer. This closes our whirlwind tour of Rerun. We've barely scratched the surface of what's possible, but this should hopefully have given you plenty of pointers to start experimenting.
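The tutorial leans on a couple of helpers from the rerun_demo package. If you'd like a feel for what they do (or want to experiment without that package), here is a hypothetical pure-Python sketch of build_color_spiral and bounce_lerp; the real implementations in rerun_demo.data and rerun_demo.util may differ in details such as turn count and color scheme:

```python
from math import cos, sin, tau

def build_color_spiral(num_points, radius=1.0, angular_offset=0.0):
    """Return (points, colors): num_points 3D points along a vertical spiral,
    with colors brightening along its length (stand-in for rerun_demo.data)."""
    points, colors = [], []
    for i in range(num_points):
        t = i / num_points
        angle = t * tau * 2 + angular_offset  # two full turns
        points.append((radius * cos(angle), radius * sin(angle), t * 2 - 1))
        c = int(80 + t * (230 - 80))  # fade from dim to bright
        colors.append((c, c, 255))
    return points, colors

def bounce_lerp(a, b, t):
    """Interpolate from a to b and back again as t sweeps 0..1..2.., i.e. a
    triangle wave of period 2 (stand-in for rerun_demo.util)."""
    tt = t % 2.0
    f = tt if tt <= 1.0 else 2.0 - tt  # fold t so the motion bounces back
    try:
        return [x + (y - x) * f for x, y in zip(a, b)]
    except TypeError:  # scalars aren't iterable
        return a + (b - a) * f
```

With stand-ins like these (plus a trivial interleave that alternates points from the two lists), the snippets above can run without the demo package, though the exact shape and colors will differ from the official example.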
I have run tedana in order to obtain denoised, optimally combined time series. I have noticed that the double pie chart outputted in the figures folder, although it has the right number of accepted, ignored, and rejected components, does not show them accurately. Colours and rings seem mismatched. I suppose it is nothing to be worried about, but just to make sure I wanted to ask someone else. Has anyone else noticed it before? Please, find attached an example below.

Hey @lollo, thanks for bringing this up. I'm responsible for this figure, so I'll see if I can sort something out. This is a bug I have never seen before, but hopefully it is just an issue with visualization. A few questions: what release of tedana is this, or is it from the latest GitHub version? Can you send your comp table to me? I believe you can send a direct message here on Neurostars. Do the figures that show time courses, brain maps and FFTs appear to be correct? Thanks for bringing it up.

Thank you for your quick reply. I am currently using tedana 0.0.7, which should be the latest version. I am replying just today as I re-ran the analysis during the weekend, but I had the same problem. The other figures look OK, i.e., all the comp_*.png and the Kappa_vs_Rho_Scatter.png. The only one that has mixed rings and colours is the Component_Overview.png one. Because I do not remember which participant I took the double pie chart from, I will upload another figure and attach the corresponding comp table. I thought the right thing to do was to tell you, even though I am pretty sure it is just a matter of visualisation. Let me know more about it. It seems that as a new user I cannot attach .txt files. Please, feel free to drop me an email here: email@example.com. I will be more than happy to send you the comp_table_ica.txt.

Glad to hear the other figures look alright - I'm assuming that we have encountered some strange edge case or that I missed an update or change.
I’ve sent an email, and hopefully we can sort this out. Thanks again for bringing it up.

I tested the figure-writing code using the comp table that you sent and I was unable to replicate the error - the output looks correct… This leaves me with a mystery. Clearly there is something strange happening with the ordering, but I am at a loss for what it is. It is difficult to dig deeper because I can't replicate the error - but it does appear to just be a visualization error. I'll let you know if I find out more.
Pendora’s 2020 Recap and 2021 Next Steps — Python

2020 was a challenging time for everyone, and I was no exception. Travel plans were postponed, work was relocated to be fully online, and it has been hard to adjust, to say the least. The purpose of this article is to look back on some of the most important things that I have learned so far, and also to brainstorm ways to improve myself going into the new year. Work has been extremely busy, so it has been tough to find time to learn new skills, but now that I have a handle on things, I will be creating more content.

Python for Finance and Automation

I am an investment banker by trade and have only taken computer science courses during undergrad. Throughout 2020, I have been working on combining my finance knowledge with some basic computer programs. Some of the projects I have completed include:

Fundamental Analysis of Stocks for Programmers and Beginners: the crucial accounting and finance topics you need to know to invest like a seasoned professional

Intrinsic Valuation of Stocks Using Python: NumPy and Python are all you need to create a DCF template

Automating Your Stock Portfolio Research With Python For Beginners: using a free financial data API, which provides real-time, accurate data

It has been a great experience, but I would say that I have not been as studious as I should be with my projects. There are many more projects that I hope to create, and that can only be done if I continue to advance my skills in programming. Many times, I would look for inspiration from other people's projects without being able to create something from "scratch" myself. This is a function of not having a solid background in Python and is something I will be looking to change going forward.

Corporate Law and Business Strategy

2020 has been a busy year for M&A bankers, and my team has experienced this first hand.
This year has been especially busy as interest rates are extremely low and many companies are going through financial troubles, affecting general business strategy. As I progress in my own finance career, I realize the importance of viewing transactions and overall business through a legal lens. We have been seeing a significant increase in the involvement of government in the business world due to new tensions between countries. The West is more inclined to step in during transactions involving China and other emerging nations in order to protect national security and interests. Due to these increasingly important issues, transactions require a heavy analysis of regulatory risks and possible mitigations. There are many different regulatory bodies to consider, and the world of corporate law is extremely complex for an outsider like myself. Lawyers are a different breed when it comes to providing accurate advice, and I have slowly learned along the way.

Self-Improvement and Continuous Learning

For 2021, I have decided to start a series where I tackle computer science courses and prepare myself for an eventual switch into software engineering. My good friend Terence Shin was my inspiration; he went through this method in 2020 and developed a significant amount of experience and knowledge within the data science field. Through consistent learning, I will document my process and share my learnings to hopefully show readers that anything is possible. I am a business student with no work experience in software engineering and only have the desire to learn. I've taken a few first-year CS courses in university, so I know the absolute basics. My end goal would be a job in tech or creating programs that will benefit myself and others around me. If I can do this, anybody can. Stick around and say hi! I'll be researching the best way to tackle this new challenge in 2021, and will have a detailed course curriculum for myself posted shortly.
For now, my plan is to:
- Deepen my understanding of Python fundamentals by taking university courses from MIT OpenCourseWare, the University of Waterloo, and the University of Toronto
- Supplement course teachings with targeted online lectures to fill blind spots in my knowledge
- Practice what I've learned using Leetcode
- Create side projects that interest me (finance, accounting, business)
- Sharpen up my resume and start applying

Please let me know if you have a similar experience or if you have any other resources that you would recommend! This week, I've started to learn from a first-year university course at the University of Toronto, as I heard it has a renowned computer science program whose faculty is known for its contributions to research. The lecture explains the Rectangle class, introduces a constructor, and adds the method translate_right. The outputs are based on creating a Rectangle and applying the translate_right method: r = Rectangle(100, 200, 300, 400)

Notes about classes:
Attributes — Variables defined inside a class definition but outside any method
Methods — Actions for the specific objects
Inheritance — Ability to define a new class that is a modified version of an existing class
Accumulator — Variable used in a loop to accumulate a series of values, for example by counting, keeping a running sum, or concatenating to a running string
Encoding — Defining a mapping between a sequence of numbers and the items represented

Out with the old, in with the new.
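The notes above only name the class; here is a minimal sketch of what such a Rectangle might look like. The constructor arguments (x, y, width, height) and the parameter of translate_right are my assumptions, not the course's actual code:

```python
# Hypothetical reconstruction of the Rectangle class from the course notes.
# The attribute names and the translate_right signature are guesses; the
# original course code may differ.
class Rectangle:
    def __init__(self, x, y, width, height):
        self.x = x          # x-coordinate of a reference corner
        self.y = y          # y-coordinate of that corner
        self.width = width
        self.height = height

    def translate_right(self, dx):
        """Shift the rectangle dx units to the right."""
        self.x += dx

r = Rectangle(100, 200, 300, 400)
r.translate_right(50)
print(r.x)  # 150
```

Inheritance, from the same notes, would then let you define a modified version of this class (say, a Square) without rewriting translate_right.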
This blog post was authored by Hossein Jazi and Jérôme Segura.

On June 10, we found a malicious Word document disguised as a resume that uses template injection to drop a .NET loader. This is the first part of a multi-stage attack that we believe is associated with an APT group. In the last stage, the threat actors used Cobalt Strike's Malleable C2 feature to download the final payload and perform C2 communications.

Lure with delayed code execution

The lure document was probably distributed through spear phishing emails as a resume from a person allegedly named "Anadia Waleed." At first, we believed it was targeting India, but it is possible that the intended victims are more widespread. The malicious document uses template injection to download a remote template from the following URL:

The domain used to host the remote template was registered on February 29, 2020 by someone from Hong Kong. The creation time of the document is 15 days after this domain registration. The downloaded template, "indexa.dotm", has an embedded macro with five functions. The following shows the function graph of the embedded macro. The main function is Document_open, which is executed upon opening the file. This function drops three files onto the victim's machine:

- ecmd.exe: UserForm1 and UserForm2 contain two Base64-encoded payloads. Depending on the version of the .NET Framework installed on the victim's machine, the content of UserForm1 (in the case of .NET v3.5) or UserForm2 (other versions) is decoded and stored in "C:\ProgramData".
- cf.ini: The content of the "cf.ini" file is extracted from UserForm3 and is AES encrypted; it is later decrypted by ecmd.exe.
- ecmd.exe.lnk: This is a shortcut file for "ecmd.exe" and is created by Base64-decoding the content of UserForm4. This file is dropped in the Startup directory as a trigger and persistence mechanism. ecmd.exe is not executed until after the machine reboots.
ecmd.exe is a .NET executable that pretends to be an ESET command line utility. The following images show the binary's certificates, debugger and version information. The executable has been signed with an invalid certificate to mimic ESET, and its version information claims that it is an "ESET command line interface" tool (Figures 6-8). ecmd.exe is a small loader that decrypts and executes the AES-encrypted cf.ini file mentioned earlier. It checks the country of the victim's machine by making an HTTP POST request to "http://ip-api.com/xml". It then parses the XML response and extracts the country code.

Figure 9: Getcon function: make HTTP POST request to "ip-api.com"
Figure 10: ip-api.com output

If the country code is "RU" or "US", it exits; otherwise, it starts decrypting the content of "cf.ini" using a hard-coded key and IV pair. The decrypted content is copied to an allocated memory region and executed as a new thread using the VirtualAlloc and CreateThread APIs.

A Malleable C2 is a way for an attacker to blend in command and control traffic (beacons between victim and server) with the goal of avoiding detection. A custom profile can be created for each target. The shellcode uses the Cobalt Strike Malleable C2 feature with a jQuery Malleable C2 profile to download the second payload from "time.updateeset[.]com". The shellcode first finds the address of ntdll.dll using the PEB and then calls LoadLibraryExA to load wininet.dll. It then uses the InternetOpenA, InternetConnectA, HttpOpenRequestA, InternetSetOptionA and HttpSendRequestA APIs to download the second payload. The API calls are resolved within two loops and then executed using a jump to the address of the resolved API call. The malicious payload is downloaded by InternetReadFile and is copied to an allocated memory region. Because the communication is over HTTPS, Wireshark is not helpful for spotting the malicious payload.
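To make the geolocation gate concrete, here is a small Python re-implementation of the decision logic described above. This is an analyst's sketch, not the loader's actual code: the real sample issues the HTTP POST itself, while this sketch only reproduces the parsing of an ip-api.com-style XML reply and the RU/US exclusion check.

```python
# Sketch of the loader's geolocation check (analyst's re-implementation).
# The real loader POSTs to http://ip-api.com/xml; here we only parse a
# canned reply in the same XML shape and apply the RU/US exclusion.
import xml.etree.ElementTree as ET

EXCLUDED = {"RU", "US"}  # country codes the loader refuses to run in

def country_code(xml_text):
    """Extract the <countryCode> element from an ip-api.com style reply."""
    root = ET.fromstring(xml_text)
    node = root.find("countryCode")
    return node.text if node is not None else ""

def should_exit(xml_text):
    """Mirror the loader's decision: bail out on RU or US hosts."""
    return country_code(xml_text) in EXCLUDED

sample = "<query><countryCode>US</countryCode></query>"
print(should_exit(sample))  # True
```

Reproducing a sample's logic like this is a common way to document gating behaviour without running the malware itself.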
Fiddler was not able to give us the payload either. Using the Burp Suite proxy, we were able to successfully verify and capture the correct payload downloaded from time.updateeset[.]com/jquery-3.3.1.slim.min.js. As can be seen in Figure 16, the payload is included in the jQuery script returned in the HTTP response. After copying the payload into a buffer in memory, the shellcode jumps to the start of the buffer and continues execution. This includes sending continuous beaconing requests to "time.updateeset[.]com/jquery-3.3.1.min.js" and waiting for potential commands from the C2. Using Hollows Hunter, we were able to extract the final payload, which is Cobalt Strike, from ecmd's memory space.

A precise attribution of this attack is a work in progress, but here we provide some insights into who might be behind it. Our analysis showed that the attackers excluded Russia and the US. The former could be a false flag, while the latter may be an effort to avoid the attention of US malware analysts. As mentioned before, the domain hosting the remote template is registered in Hong Kong, while the C2 domain "time.updateeset[.]com" was registered under the name of an Iranian company called Ehtesham Rayan on February 29, 2020. The company used to provide AV software and is seemingly closed now. However, these are not strong or reliable indicators for attribution.

Figure 11: updateeset.com whois registration information

In terms of TTPs used, Chinese APT groups such as Mustang Panda and APT41 are known to use jQuery and the Malleable C2 feature of Cobalt Strike. Specifically, the latest campaign of Mustang Panda used the same Cobalt Strike feature with the same jQuery profile to download its final payload, which is also Cobalt Strike. This is very similar to what we saw in this campaign; however, the initial infection vector and first payload are different in our case.
Indicators of compromise

- Anadia Waleed resume.doc
- Remote template: indexa.dotm
- Remote template URL:
- cf.ini shellcode after decryption:
- Cobalt Strike downloaded shellcode:
- Cobalt Strike payload

The post Multi-stage APT attack drops Cobalt Strike using Malleable C2 feature appeared first on Malwarebytes Labs.
We’re delighted to share DDEV-Local v0.20.0 with you. We've addressed a number of bug fixes and provided a few enhancements that will improve your overall experience. Additionally, with renewed energy and ideas coming as a result of our company onsite, we've been working away on the product vision and our Roadmap (more to come!)

How do you sudo? One new feature we've added in v0.20.0 reflects how we're approaching building DDEV to be something that is flexible enough to grow with you as your needs evolve. As of v0.20.0, DDEV-Local allows a user to issue commands with sudo in the web container. Users often come with requests we have to weigh in terms of our roadmap and how universally the feature is needed. For example, a TYPO3 user might want to install Ghostscript in the web container, or a GravCMS user might want to add SQLite to support TNTSearch. Now, because you can issue privileged commands using DDEV, you can add whatever you need to make it work for you. This reflects the overall approach we're taking with creating a product and service that is pluggable, plays well with others, and is extensible - while still having an out-of-the-box "just works" experience for typical cases. With Apache support and services like Memcached ahead in our DDEV Roadmap, your continued feedback helps us guide the product in the right direction. Curious what's in store when you upgrade to v0.20? Read the full release notes for details.

On July 17th, we will be officially tagging a v1.0.0 release for DDEV. We're so proud of how far the project has come and how many people rely on it as their primary tool of choice. Cutting this release is a reflection of that progress. As part of this release, we will be focusing on some last refactors as well as improving our documentation on our robust Windows support (Pro, Home, and Enterprise!)

Events and workshops! We'll be at some great events soon: WPCampus, Drupal Asheville, and more.
Come by and say hi - we love talking to DDEV users. Would you like to refer someone for training? Tech trainer Mike Anello of DrupalEasy has some exciting news to share soon. Sign up to find out as soon as we announce the DDEV online workshops. We were wowed to see over 80 people attend a DDEV demo and workshop at TYPO3 Dev Days last weekend! Thank you to all who attended, and a special thank you to speakers and facilitators Michael Oehlhof and Jigal van Hemert. — T3DD (@t3dd) June 23, 2018

Some recent posts we think you'll like:
- We had our very first onsite last week. It was great to bring the team together and align on our objectives for the next quarter and beyond.
- Know someone considering a switch from virtual machines? Send them here: Docker containers vs VMs for quick, consistent local dev
- Read: Why DevOps, Containers, and Tooling Matter in Digital Transformation. We can help you set up teams, processes, and integrations to help build dev-to-deploy workflows.

Reply to this email if you have any questions. Thanks as always for your support and enthusiasm!
I've read a few threads on this but haven't found a proper answer. I've just built a new system with all new parts, except the slightly used 2 x Asus GTX480s. I attached the SLI bridge and the card in slot 1 works, but Windows has stopped the card in slot 2 with a code 43. I'm going to swap the slots over later in the week when I'm next home, but any ideas in the meantime? My new rig is:

Coolermaster HAF X
Coolermaster Silent Pro Gold 1000W
Asus P8Z68 V-Pro
i7-2600K @ 3.4GHz
OCZ Vertex 3 Max IOPS 120GB SSD (system)
WD 450GB VelociRaptor
8GB Corsair Vengeance CL8
Arctic Cooler Pro (can't remember the exact model, as I rushed out and bought it on the weekend when I realised my Zalman had been shipped minus the 1155 brackets)
2 x Asus GTX480 SLI

By the way, the PSU failed after an hour and I've ordered another, but that's for another thread.

With PSUs there are so many out there, and most of them are junk. The ones you want are Corsair, Antec, Seasonic, and XFX brands with 80 Plus certification as well. I don't know if this will help, but here is a link to code 43. The second link is to Microsoft; your code 43 should be in there.

Mmm, I still have my Corsair 750TX (tier 2) but it's not quite powerful enough for the new rig (it needs 850W minimum). It's been faultless though. I'll put my faith in the tier 3 PSU for now. Thanks again.

It seems the code 43 can be related to a lot of different things. As to your tier 3 PSU, it states that it will handle what it is supposed to do; they are just advising you to be careful, and since you have had good luck with it before, I don't see that it would be any different than before. Good luck to you.

It turns out there was nothing wrong with the PSU. The system only fails to start when I hook up the second graphics card. It must be tied in with the code 43 error, but I haven't had time to test it further. It's not the slot or the card. It's only having both cards hooked up that does it.
I've read some advice that seems to have worked for others - uninstall all nVidia drivers, power off, hook up both cards, power on, and install only the graphics drivers to start with (not the PhysX stuff). I can't remember the next part, but I'll do some more research and test again.
sh and grep numbers only

I'm facing a problem and trying to find a solution that works in sh. If I could use bash, this code would work:

ls /a | grep ^[0-9]

Unfortunately this is not the case with sh, and yes, I need to use sh. :) Running it in sh I get:

ls /a | grep ^[0-9]
[0-9]: not found
Usage: grep -hblcnsviw pattern file . . .

If I remove the ^ the code works, but I need only the files that start with numbers, not the ones that contain numbers. For example, I need files like:

12.00.2
2.222.1234.12

From the grep man page, I should be able to use ^. For the time being, my implementation was:

ls /a | grep -v [a-z] | grep -v [A-Z]

as this will remove all the files that contain letters, but a file like .123.33 will still show up.

Can you use egrep instead of grep? Tip: grep -v [a-z]|grep -v [A-Z] == grep -v "[a-z]\|[A-Z]"

It wouldn't work well; I tried with egrep and had no luck, but the solution provided by slm works like a charm :)

@rush, that syntax is not portable and will not work in the OP's Solaris. grep -v '[a-zA-Z]' or grep -ve '[a-z]' -e '[A-Z]' or grep -vE '[a-z]|[A-Z]' are standard though (though the behaviour is locale dependent).

Something like this should work for you:

ls /a | egrep "^[0-9]"

Per @Anthon's feedback, egrep is deprecated, so you can use -E as a switch to the normal grep command instead:

ls /a | grep -E "^[0-9]"

Per @Stephane's feedback, extended regular expressions (EREs) aren't even necessary in this situation. What really matters is the quoting of ^[0-9] to protect it from being interpreted by whatever version of /bin/sh you're using that's having the issue. So something like this would be the simplest fix:

ls /a | grep "^[0-9]"

-or-

ls /a | grep '^[0-9]'

egrep is deprecated (according to the man page); you should recommend the use of grep -E.

So I guess that bash interprets the ^ while with sh it is up to grep, correct? And thanks a lot.

Yes, bash is interpreting those instead of grep.
I was surprised that you could pass those arguments bare like that without double quotes. I've only ever double-quoted them to grep. You could also have put single quotes around them. You don't need extended REs here; that syntax works as well with BREs as with EREs, so the -E is superfluous.

Answer: By sh, I think you're referring to the Bourne shell, which was the shell of most Unix systems before the mid 90s and was /bin/sh on Solaris prior to Solaris 11. On Solaris 10 and older, don't use /bin/sh. That shell is from another era. Use /usr/xpg4/bin/sh instead.

In the Bourne shell, ^ is an alias for | for compatibility with its predecessor, the Thompson shell. So your command is like:

ls /a | grep | [0-9]

And the Bourne shell reports that it can't find a command called [0-9], while grep complains about not getting any argument.

Even if using a standard sh as opposed to the Bourne shell, I would recommend that you quote ^. For instance, ^ is a globbing operator in zsh when the extendedglob option is enabled. In any case, if not ^, you have to quote [0-9], since those are globbing operators: [0-9] would be expanded by the shell to the list of files in the current directory whose name is a single digit. So:

ls /a | grep '^[0-9]'

Incidentally, in the Bourne shell,

ls /a ^ grep '^[0-9]'

would also work. You don't need ls and grep for this; you can use a simple glob /a/[0-9]*:

echo /a/[0-9]*
ls /a/[0-9]*
grep foo /a/[0-9]*

If you're using this in a script, beware that parsing ls output is a bad idea.

Another answer: I agree with l0b0 that grep is a bad idea here, but anyway, here is an explanation of the issue and a workaround. On Solaris 10 and older, /bin/sh is an antiquated shell that shouldn't be used for anything but running legacy scripts. You really should use ksh, bash or /usr/xpg4/bin/sh instead. The root cause is that ^ used to be the original way to specify a pipe in the early Unix days, and Solaris /bin/sh inherited this archaeological feature.
The workaround is then quite simple: just escape the caret in one of these ways:

ls /a | grep \^[0-9]
ls /a | grep "^[0-9]"
ls /a | grep '^[0-9]'

The first one would fail if there were, say, ^0 and ^1 files in the current directory, though.

Comments: You beat me by 7 seconds on the issue's Bourne source ;) / @StephaneChazelas Why would the first one fail? I just checked and it worked just fine. / Because the shell would expand it to ls /a | grep '^0' '^1'. That is, it would search for lines starting with 0 in the file ^1 and ignore the output of ls. / @StephaneChazelas Got it, I overlooked the expansion. Thanks.
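To tie the answers together, here is a small self-contained demo (the directory and file names are invented for illustration) showing both the quoted-grep fix and the glob alternative:

```shell
# Portable demo: list only files whose names start with a digit.
dir=$(mktemp -d)
touch "$dir/12.00.2" "$dir/2.222.1234.12" "$dir/.123.33" "$dir/readme.txt"

# Quoting the pattern keeps any sh (including the old Bourne shell's
# ^-as-pipe quirk) from interpreting ^ or expanding [0-9] as a glob.
matched=$(ls "$dir" | grep '^[0-9]')
echo "$matched"

# Or avoid parsing ls output entirely with a glob:
for f in "$dir"/[0-9]*; do printf '%s\n' "${f##*/}"; done

rm -rf "$dir"
```

The hidden file .123.33 is excluded by both approaches: ls doesn't list dotfiles without -a, and the glob [0-9]* never matches a leading dot.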
STACK_EXCHANGE
Well now, I had to do a double take after blindly opening Lobsters and seeing my own blog post on the front page! Hopefully others can get some use out of this feature since I find it pretty nifty :)

Is there any way to use environment variables in the condition? I keep my global git config in a git repo and I'd love to have a mechanism for conditionally including machine-specific overrides to some of the settings.

It doesn't look like Git's conditional includes feature supports reading environment variables – its include conditions are limited to a fixed set of keywords. However, the Environment section of the configuration docs lists some environment variables that you could set on specific machines to change Git's configuration. Of those environment variables, GIT_CONFIG_GLOBAL seems the easiest to use. You could use it by putting machine-specific config files alongside a shared_config file in your config repo. Within each machine-specific config, use a regular (non-conditional) include to pull in shared_config:

[include]
	path = shared_config

; Then write machine-specific configuration:
[user]
	email = firstname.lastname@example.org

Finally, on each of your machines, set the environment variable GIT_CONFIG_GLOBAL to that machine's config file within your config repo. If you want some machines to just use shared_config without further configuration, name that file config instead and make sure your config repo is located at ~/.config/git/. On those machines, you don't need to set GIT_CONFIG_GLOBAL. This works because $XDG_CONFIG_HOME/git/config is one of Git's default config paths.

Hmm, not that I know of off the top of my head, but I've never actually sat down and read the Git documentation so I'd be surprised. You could perhaps look into templating your gitconfig using something like chezmoi? There's always nix, which comes up too, but that's quite a bit overkill just for fiddling with some dotfiles, of course.

I can work around it now by generating the file locally; it's just been a mild source of annoyance for me that I need external support for this.
Ah, a note for anyone trying this. The original version was missing a trailing slash on the end of the includeIf directive. I've just pushed a fix for the typo, but if you copied it earlier and were having trouble, just a heads up.

I've become a big fan of direnv for this kind of stuff; being able to just write shell scripts that activate smoothly in all the tools I use is a godsend. Using flakes is the chef's kiss.

That definitely sounds better than using local per-repo config for the email. I have definitely leaked a personal address into a work repo a couple of times due to forgetting to configure the repo with the work email. I once even forced myself to do so via a global Git hook that checked whether I had set the proper user info:

if ! git config --get user.email >/dev/null; then
	printf "No identity set, set one using 'git identity <name>'" \
		| cowsay -f dragon-and-cow -W 60
fi

git-identity is a script that allowed me to quickly set the identity for a given repo.

I've gone the opposite route of just not setting a global username/email. I don't have much to hide, but it's better to just try and keep stuff isolated for me.
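For anyone wanting to try the GIT_CONFIG_GLOBAL approach described above, here is a rough sketch. The repo path and machine config name are illustrative assumptions; reading the file back with `git config --file` just demonstrates that the layout parses:

```shell
# Build a tiny config repo with a shared file plus one machine-specific file.
repo="$(mktemp -d)/config-repo"
mkdir -p "$repo"

cat > "$repo/shared_config" <<'EOF'
[core]
	editor = vim
EOF

cat > "$repo/work-laptop" <<'EOF'
[include]
	path = shared_config
[user]
	email = firstname.lastname@example.org
EOF

# On the work machine you would export GIT_CONFIG_GLOBAL="$repo/work-laptop"
# (supported since Git 2.32). Reading the file directly shows the values:
email=$(git config --file "$repo/work-laptop" --get user.email)
echo "$email"
```

The include path is relative to the file containing the include directive, so shared_config resolves inside the repo regardless of where the repo lives.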
OPCFW_CODE
Adjusting the Linux (Xfce) Volume Control

Posted on Sunday, February 20, 2022 by TheBlackzone

Like many others, I like to listen to music while sitting at my computer. When I am coding or doing other work I need to concentrate on, I do so at a very low volume level, just enough that the music is audible without distracting me. For this reason I use the volume controls of my keyboard quite often to adjust the level to the current situation.

Normally the volume keys increase or decrease the volume level by 5%, which in most cases is a reasonable setting. But combined with my DELL AC511 sound bar, a step of 5% sometimes goes from "mute" directly to "too loud", forcing me to use the mouse to make adjustments with the volume slider control. I was looking for a way to get more fine-grained control over the amount of volume change when using the keyboard, and came up with two solutions to this "problem". The first one is specific to the Xfce desktop environment with the "xfce4-pulseaudio-plugin", which is what I am using; the second one is more generic and works in Xfce as well as in other Linux desktop environments. Let's get started…

Solution for the "xfce4-pulseaudio-plugin"

Unfortunately the "xfce4-pulseaudio-plugin" has no user-accessible way to adjust the amount of volume change applied by the volume control keys, and the official documentation contains no hint on how to change it. So I grabbed the "xfce4-pulseaudio-plugin" source code and quickly flicked through it to see if there was any way to change it without modifying the source code and compiling the plugin myself. And in fact, the plugin queries a global configuration value named "volume-step". With this knowledge it becomes easy: Open the Xfce Settings Manager and go to the "xfce4-panel" settings.
There, look through the "plugins" and find the "pulseaudio" plugin (here it is "plugin-9"). Then create a new integer value named "volume-step" and set it to the desired value, which in my case is "2" (percent).

The same can be done from the shell with the following commands. First find the plugin number of the "xfce4-pulseaudio-plugin":

xfconf-query -c xfce4-panel -lv | grep pulseaudio

which in my case identifies "plugin-9". Then create a new property for the plugin named "volume-step" of type "integer" with the required value, which again is "2" (percent) in my case:

xfconf-query -c xfce4-panel -p /plugins/plugin-9/volume-step --create -t int -s 2

Now restart the Xfce4 session (or reboot your computer) to activate the new setting.

Another, more generic solution, which is also applicable to other Linux desktop environments, is to use the amixer command and bind it to your keyboard's volume control keys. In the case of Xfce this is done from the keyboard settings in the "Application Shortcuts" tab. There, create new bindings for the following commands, each assigned to the appropriate volume control key:

For "Volume up": amixer -q -D pulse sset Master 2%+
For "Volume down": amixer -q -D pulse sset Master 2%-
And, if necessary, for "Toggle mute": amixer -q -D pulse sset Master toggle

I went for the amixer solution first, before I thought "I can't believe that there is no setting for that in Xfce" and made the effort to check the source code of the "xfce4-pulseaudio-plugin", where I finally found the "hidden" option. Because of this curiosity I have yet again spent too much time figuring out how something obviously easy can be solved. So I put it here in the hope that it will save you some time if you are looking to achieve something similar…

Tags: computer, linux
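As a small aside, if your pulseaudio plugin isn't "plugin-9", a throwaway helper like this (the function name is made up for illustration) shows how the xfconf-query call from above is assembled for any plugin number and step size:

```shell
# Build the xfconf-query command for a given panel plugin number and
# volume step, without executing it (so it can be inspected first).
build_volume_step_cmd() {
  printf 'xfconf-query -c xfce4-panel -p /plugins/plugin-%s/volume-step --create -t int -s %s' "$1" "$2"
}

cmd=$(build_volume_step_cmd 9 2)
echo "$cmd"
```

Running the generated command (e.g. with `eval "$cmd"`) would then create the property exactly as described in the post.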
OPCFW_CODE
If you're watching this video, you've probably seen the other videos that prove it is possible to upgrade from Windows 1.0 all the way to Windows 8. However, those videos use VMware to perform the upgrades on a virtual machine (a guest operating system running within a host operating system). This video proves that it is possible to upgrade from Windows 1.0 to Windows 8 on actual hardware, without the use of a virtual machine. The computer I'm using is an Asus EEE PC 1005HA, which is a netbook. The hardware is not exactly the same as it was when I received the computer, as I have replaced the screen, the RAM, and the hard drive. Here is the upgrade path I followed: Windows 1.01 Windows 2.03 Windows 3.0 Windows 3.1 Windows 95 Windows 98 Windows Me Windows XP Windows Vista Windows 7 Windows 8.1 Preview In Windows 2.03, you can see I deleted some files. That is because for some reason, those files prevent Windows 3.0 and Windows 3.1 from being installed. In Windows 3.1, I had to rename the WIN.COM file in order to install Windows 95B. I installed 95B as opposed to regular 95, because 95B supports FAT32 file systems, while regular 95 does not. If I had installed regular 95, I would have had to use a FAT file system, for which the maximum size is 2 gigabytes. I didn't want to have to deal with converting the drive to FAT32 and extending it, so I stuck with FAT32 from the beginning. In Windows XP, I had to convert the drive to NTFS in order to install Windows Vista. I also changed the RAM from a 256 megabyte chip to a 2 gigabyte chip. This was necessary because the older versions of Windows such as 95, 98, and Me have a limit to how much RAM they can take, while newer versions of Windows require more RAM. Anyway, even though the video is sped up 4X, it's still quite long. Sorry about the poor video quality - I don't have a very good camera. 
And there's no audio (I decided that there's no music I can put on that everyone would like, and if anyone is actually going to even watch this entire video, they can just put on their own music). So if you're going to watch the whole thing, I suggest you make some popcorn, put on your favorite music, sit back, relax, and enjoy the video!
OPCFW_CODE
Currently there is no notification when a ride/run/activity is flagged – I just happened to be checking on a ride I did earlier in the week and noticed that it had been flagged by a user incorrectly (it wasn't an auto-flag, as it was only flagged at some point after the day of upload). I know that in the past I have flagged others' rides weeks, months or even years after they were completed, and if I were that person I'd want to know when that happened so I could address it (cropping, making it private, etc).

There is also no way to view which of your activities are flagged; the Strava blog does mention that a flagged ride should appear in red in the "My Activities" section, but that function is buggy or broken, because this ride was not identified in the list view (only when clicking into the activity could I see the flag).

One of those two features is a must-have: either a notification when your activity is flagged by a user, or a way for users to filter for their flagged rides in the "My Activities" section. As a long-time user, I have close to 3000 activities, and now I'm paranoid that I have some hidden flagged rides buried somewhere in my long list of activities that I need to take care of. Without one of these features it's infeasible for me to stay on top of any flags in my activity history (or any new flags that come up). I'm sure the filter in the "My Activities" section would be easier to implement, so I'd recommend starting with that.

@erbiker - I totally agree. A few months ago, I happened to look up an activity of mine because I was looking at a specific segment and wondering why I was not shown on it when I knew I had completed it before. I found that the activity had been flagged incorrectly. I didn't get any notification, and there was no indication it was flagged when I looked up my activity log like Strava said there would be. I did more digging and found 7 or 8 other activities that had also been incorrectly flagged and got those fixed.
I submitted a ticket to Strava and they claim there are no others, but I have no trust that they are accurate. If they are using the same software to check for flags as they use to supposedly show them in our activity log, then they wouldn't see them either. Strava definitely needs to fix or improve their system for notifying users if/when their activities are flagged, and for making flags obvious so users can find them if they want to.

Thanks for surfacing this issue with the functionality to view flagged activities in the "My Activities" section. I brought this up with our engineering team and the functionality was, indeed, broken. We have now fixed it, and flagged activities appear marked in red in the "My Activities" section on the website, as intended. Furthermore, notifications that your activity was flagged should appear on the website as well as on mobile. If you're not receiving them, please submit a support ticket so we can look into this.

For what it's worth, I've had a couple of rides flagged incorrectly, the last time being about 2 years ago. I don't know who did the flagging or why, so I don't know if it was accidental or malicious. On each occasion I got a notification about it. The support desk were very helpful in removing the flags. I understand that flagging should be anonymous, but it might be helpful if the person being flagged got to see the comments made by the flagger (I think such comments are mandatory). The athlete could then perhaps comment on their own activity to help resolve any misunderstandings.
OPCFW_CODE
Siim Tiilen, a Quality Engineer in our DevOps team, explains how Veriff shares GPUs between pods and how it's helped us reduce our overall infrastructure cost. Siim Tiilen, May 24th, 2021

Due to the ever increasing importance of AI in our stack, we are extensively using graphics processing units (GPUs) in Kubernetes to run various machine learning (ML) workloads. In this blog, I'll describe how we have been sharing GPUs between pods for the last 2 years to massively reduce our infrastructure cost.

When using NVIDIA GPUs, you need the NVIDIA device plugin for Kubernetes, which declares a new custom resource, nvidia.com/gpu, that you can use to assign GPUs to Kubernetes pods. The issue with this approach is that you cannot split a GPU between multiple applications (pods), and GPUs are a very expensive resource.

So what happens when you deploy a GPU-using application like this? Kubernetes uses the nvidia.com/gpu resource to schedule the pod onto a node where a GPU is available. If you connect (ssh) into the node where this pod is running and use the nvidia-smi command there, you can get a result similar to this: our node has 1 NVIDIA Tesla T4 GPU with 15109MiB of memory, and we are using 104MiB of that with one process (our deployed pod).

Internally, one very important environment variable is given to each pod, and the application uses it to know which GPUs to use:

kubectl exec -it pod/gpu-example -- bash
# and check
echo $NVIDIA_VISIBLE_DEVICES

In the example above, this GPU - GPU-93955ff6-1bbe-3f6d-8d58-a2104edb62db - is being used by the application. When you have a node with multiple GPUs, all pods can actually access all GPUs - but they all use this variable to know which GPU they should access. This variable also has one "magical" value: all.
Using this you can override GPU allocation and let your application know that it can use any (all) GPUs present in your node. We know that our example application is using 104MiB and the GPU has a total of 15109MiB, so in theory we can fit it 145 times into the same GPU.

For the next example you need an AWS EKS cluster with 1 GPU instance (g4dn.xlarge); with some modifications it should be possible to use any Kubernetes cluster with NVIDIA GPU nodes available. If we check afterwards, we can see that all 5 pods are assigned to the same node, and when we run nvidia-smi on this node we can see 5 GPU processes running from those pods.

Now that we know multiple GPU-using pods can run on the same node, we need to make sure they are split between nodes based on each node's capacity to handle the pods' requirements. For this we can use Custom Resources to advertise on all GPU nodes how much GPU memory they have available. We will use a DaemonSet to add a custom GPU memory resource (we named it veriff.com/gpu-memory) to nodes with an NVIDIA GPU attached. A DaemonSet is a special type of Kubernetes deployment that runs 1 pod on some (or all) cluster nodes; it is quite often used to deploy things like log collectors and monitoring tooling. Our DaemonSet will have a nodeAffinity to make sure it only runs on g4dn.xlarge nodes.

If you check your node using "kubectl describe node/NAME", you can see it has the veriff.com/gpu-memory resource available. Now that we know our GPU-memory-adding script works, let's scale our cluster up so we have multiple GPU nodes available. To test it out, we will modify our deployment to use this new resource. After that, it is visible that pods are split between nodes. With regular resources like CPU and memory, Kubernetes knows how much pods (containers) are actually using, and if anyone tries to use more, Kubernetes will restrict it.
However, with our new custom resource there is no actual safeguard preventing some "evil" pod from taking more GPU memory than declared, so you need to be very careful when setting the resources there. When pods try to use more GPU memory than a node has available, it usually results in some very ugly crashes. At Veriff, we addressed this with extensive monitoring and alerting on our new GPU custom resource usage. We have also developed in-house tooling to measure ML applications' GPU usage under load. All code examples from this post can be found at https://github.com/Veriff/gpu-sharing-examples.
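As a rough sketch of the pattern described above (the image name and the 104 MiB figure are illustrative; the resource name matches the one in the post), a pod spec requesting the custom memory resource instead of a whole nvidia.com/gpu could look like this:

```shell
# Write an example pod manifest demonstrating GPU sharing via the
# custom veriff.com/gpu-memory resource.
cat > /tmp/gpu-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gpu-example
spec:
  containers:
    - name: app
      image: example/ml-app:latest
      env:
        # Let the container see every GPU on the node; placement is
        # governed by the custom memory resource below instead.
        - name: NVIDIA_VISIBLE_DEVICES
          value: "all"
      resources:
        limits:
          veriff.com/gpu-memory: "104"
EOF
cat /tmp/gpu-pod.yaml
```

The scheduler then packs pods onto GPU nodes until each node's advertised veriff.com/gpu-memory capacity is exhausted, rather than reserving one whole GPU per pod.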
OPCFW_CODE
It falls within the discipline of computer science, equally dependent upon and affecting mathematics, software engineering, and linguistics. It is an active research area, with numerous dedicated academic journals.

Γ ⊢ x : Int

While salaries for many roles vary greatly by location, field, experience level and demand, this list should give you a rough idea of the more economically rewarding IT-related roles. You can consult our experts via chat if you choose our computer science assignment help. This way you can make your assignment requirements clear to them and relax as our experts write your assignments.

Of course there are many, many more problem-solving techniques available. I just named some of my favorites, and the most common ones used in computer science. You will acquire more. Some day you'll face a problem that you can't tackle. You'll find out how to solve it, and in doing so, you'll reflect on how you solved it, and you'll realize, "Hey, I just discovered a new approach to a problem!"

I was skeptical at first. This was my first time using this kind of service. I was thoroughly impressed by the customer service. They reply to you the same day, if not within minutes to hours. I was able to pay half first and half later. Really put my mind at ease. Will definitely be using it again. Thanks! Our main aim is to provide support to students who want an understanding of computer science and to help you complete the assignments given to you. I enjoyed lots of these benefits; to anyone who is using this site for the first time, I assure you that you will never go for any other medium once you get in touch with the teachers here.

This position commonly requires a technical background and leads a technical team, which could encompass developers, testers, analysts and more – whether or not the organization is technical. Common duties / skills: oversee the technical aspects of internal projects; maintain corporate IT processes, with documentation; hire and lead a technical staff to support the systems; manage assets within a budget; keep up to date with new technologies, for recommending possible internal upgrades; communicate with various departments, vendors and possibly consultants/contractors. The position can require a master's degree in computer science or a related field.

Computer science departments with a mathematics emphasis and with a numerical orientation consider alignment with computational science. Both types of departments often make efforts to bridge the field educationally, if not across all research.

Philosophy

It's OK if you don't understand the explanation; it's a lot of discrete mathematics which you'll understand someday. Just remember that simplifying the problem in my case helped me understand the question more coherently, because I took away some conditions, solved an easier problem, and saw whether that helped me solve the more difficult version of it.

This page lists OCW courses from just one of more than 30 MIT departments. MIT OpenCourseWare is a free & open publication of material from thousands of MIT courses, covering the entire MIT curriculum.

Computer science deals with mechanisms that feed a computerized system with special instructions in order to operate, which means precision is highly essential in this discipline. Defective systems are mainly the result of mistakes in the system, which is why students who are faced with computer science assignments find them complicated. "Within over 70 chapters, each one new or significantly revised, one can find any kind of knowledge and references about computer science one can imagine."

Wishful Thinking: Sometimes I think backwards and start with the answer. I contemplate this elusive variable which is the solution, and manipulate it to fit the conditions of the problem.
OPCFW_CODE
<?php

namespace Api\Filter;

class filter
{
    const METRIC_AREA_TYPE = 'areaType'; // Area type as string (Api\Filter\areaType)
    const METRIC_AREA_NAME = 'areaName'; // Area name as string
    const METRIC_AREA_CODE = 'areaCode'; // Area code as string
    const METRIC_DATE      = 'date';     // Date as string

    /** @var array<string, string> */
    private static $filters = [];

    /**
     * Adds (or overwrites) the filter value for the given metric.
     *
     * @param string $metric
     * @param string $value
     * @return string The stored value
     */
    public static function addFilter(string $metric, string $value): string
    {
        return self::$filters[$metric] = $value;
    }

    /**
     * Checks if the given metric exists in the filter array and returns
     * its value on success; returns false on failure.
     *
     * @param string $metric
     * @return string|false
     */
    public static function getFilter(string $metric)
    {
        if (array_key_exists($metric, self::$filters)) {
            return self::$filters[$metric];
        }
        return false;
    }

    /**
     * Builds the query-string fragment for the assigned filters,
     * e.g. "filters=areaType=nation;date=2020-01-01".
     *
     * @return string
     * @throws \Exception when no filters are currently assigned
     */
    public static function getFilterString(): string
    {
        if (!empty(self::$filters)) {
            return 'filters=' . http_build_query(self::$filters, '', ';');
        }
        throw new \Exception('There are no filters currently assigned.');
    }
}
STACK_EDU
CustomErrors mode="Off"

I get an error every time I upload my webapp to the provider. The page works locally with no errors or warnings, but remotely all I get is: "We are unable to process your request at this time. The current custom error settings for this application prevent the details of the application error from being viewed remotely (for security reasons)."

Cause: When .NET errors occur and the system is not configured to display full errors, this generic message is shown to the client. The error page can be customised by adding a value for the defaultRedirect attribute in the <customErrors> tag of the configuration file web.config. The <customErrors> element lives under system.web, the root element for the ASP.NET configuration settings that control how web applications behave. A tutorial on custom errors in ASP.NET with C# is at http://www.asp.net/web-forms/tutorials/deployment/displaying-a-custom-error-page-cs.

The mode attribute takes three values:

On: custom errors are enabled for everyone. To enable this scenario, set the mode attribute value to "On" in web.config; whether it is local or remote access, the ASP.NET custom error page is shown.
Off: detailed errors are shown to everyone, including remote clients.
RemoteOnly: detailed errors are shown only on the server itself; remote requests will first check the configuration settings for the custom error page or finally show an IIS error. This is the default value. In ASP.NET 1.1 we therefore see the detailed error only when running the browser on the same machine as the web server.

Notes from the answers and comments:

I don't do a lot of ASP.NET development, but I remember the custom errors setting has an option for only displaying full error text on the server, as a security measure.
Your provider may also have prevented custom errors from being displayed at all, by either overriding it in their machine.config or setting the retail attribute to true (http://msdn.microsoft.com/en-us/library/ms228298(VS.80).aspx).
You probably have the application pool set to another version of the .NET framework than the application is written in.
I then used IIS Manager, Properties, ASP.NET tab, Edit Configuration, then chose the CustomErrors tab.
If I were to guess, you have the customErrors node inside a …
Set the mode to On, save the web.config, then refresh.
The behavior changed in 5 so that it doesn't handle exceptions if you turn off custom errors.
Can I put Response.Redirect("errorpage.aspx") in Application_Error()? In the event handler for the application-level error, a log named "ErrorSample" is created if it does not exist in the Event Log.
As a defaultRedirect example, the user-defined error page error.htm could simply contain: "We are very sorry for the inconvenience caused to you..."
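A minimal web.config sketch pulling the pieces above together (the redirect page name error.htm is just an example):

```shell
# Write a minimal web.config demonstrating the <customErrors> settings
# discussed above.
cat > /tmp/web.config <<'EOF'
<?xml version="1.0"?>
<configuration>
  <system.web>
    <!-- mode: On | Off | RemoteOnly (RemoteOnly is the default) -->
    <customErrors mode="RemoteOnly" defaultRedirect="error.htm" />
  </system.web>
</configuration>
EOF
cat /tmp/web.config
```

Switching mode to "Off" would show detailed errors to remote clients too, which is what the original question was trying (and failing) to achieve on the provider's server.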
OPCFW_CODE
feat(proto): add @opentelemetry/proto package with hand-rolled transformation

related: #2665

This is an alternative to #2691 which includes only the types and hand-rolled transformation functions to convert traces and metrics into OTLP JSON. Skipping protobufjs drastically reduces the complexity and size of the final binary, which will be important in the browser context. It also includes a descriptor.json file in the root of the package which can be loaded by protobufjs and used to serialize messages into protobuf.

todo:
[ ] add an option to export traces in a protobuf-compatible way (the JSON representation requires trace and span IDs to be in hex, instead of the base64 format expected by the library)
[ ] update README to reflect the changes

Hi, sorry for disturbing, are there any plans to merge this in the near future? I am concerned about #2675

This is currently on hold waiting on #2775. I will update the proto in the existing implementations to get #2675 unblocked so we don't need to rush this one. Converting to draft while we wait on #2775.

I think we have 2 final questions to resolve before merging this @open-telemetry/javascript-maintainers @open-telemetry/javascript-approvers

What should this package be named? Right now I have it as @opentelemetry/proto but it actually doesn't return protobuf, only an object which uses the OTLP types. Maybe @opentelemetry/otlp-transformer? I'm open to suggestions. I don't expect this package to be used directly by users, so a short friendly name probably isn't important.

How can it be stabilized? The return type of each serialization function depends on the field names in the proto, which are unstable. For example, createExportTraceServiceRequest returns IExportTraceServiceRequest, which uses the name instrumentationLibrary.
That name is currently being considered for a change to InstrumentationScope in https://github.com/open-telemetry/opentelemetry-proto/pull/362#pullrequestreview-893124807 Marking as ready for review again as it is higher priority due to #2804 What should this package be named? I agree that it shouldn't refer to proto directly and more to the protocol itself, so I'm fine with @opentelemetry/otlp-transformer; maybe use otlp-json-encoder since it only encodes to JSON for now? How can it be stabilized? I'm not really sure, but could we just have an optional option that specifies which proto version we encode to? Once it gets merged we could add an option if that changes the generated representation? What should this package be named? I agree that it shouldn't refer to proto directly and more to the protocol itself, so I'm fine with @opentelemetry/otlp-transformer; maybe use otlp-json-encoder since it only encodes to JSON for now? How can it be stabilized? I'm not really sure, but could we just have an optional option that specifies which proto version we encode to? Once it gets merged we could add an option if that changes the generated representation? I would prefer not to do that if possible. I don't want to commit to maintaining all proto versions in parallel forever. How can it be stabilized? It sounds to me that the problem here is similar to the JSON representation of the OTLP protocol, since the names of the fields/types have to be part of the stabilized API. Are there any roadmaps on the stabilization of the JSON representation? What should this package be named? @opentelemetry/otlp-transformer or just @opentelemetry/otlp? Both sound cool to me. Marking as ready for review again as it is higher priority due to https://github.com/open-telemetry/opentelemetry-js/issues/2804 CI failure could be fixed by https://github.com/open-telemetry/opentelemetry-js/pull/2768 I would prefer not to do that if possible. I don't want to commit to maintaining all proto versions in parallel forever.
I don't personally see another way to do this except waiting for stabilization :/ I was thinking we could document in the tsdocs explicitly that we are generating protos of a certain version and that future minor versions may generate another version. We could also independently version the package because we know it will need to be versioned each time something like this happens We could also independently version the package because we know it will need to be versioned each time something like this happens This sounds very similar to the experimental packages in the repo to me. This sounds very similar to the experimental packages in the repo to me. In the past we've had a requirement that stable packages don't depend on experimental packages, but maybe we can make a documented exception here in order to avoid leaking the implementation details of the OTLP transformation? One other option would be to make this package a private package and bundle it as a bundled dependency in the OTLP exporters https://docs.npmjs.com/cli/v8/configuring-npm/package-json#bundleddependencies. This might be a way to share this code without deploying it to NPM. One other option would be to make this package a private package and bundle it as a bundled dependency in the OTLP exporters docs.npmjs.com/cli/v8/configuring-npm/package-json#bundleddependencies. This might be a way to share this code without deploying it to NPM. I think it's better to do this for now; after it stabilizes we can publish the package so downstream users can use it (see https://github.com/open-telemetry/opentelemetry-proto/issues/365) Do you know if it is a problem to have a bundled dependency that doesn't exist on npm? I have a few packages at my work that bundle deps without issues; TypeScript can do this.
You just need to have the dependencies of the proto inside the package that uses it. Looks like support goes back to at least npm version 6 https://docs.npmjs.com/cli/v6/configuring-npm/package-json#bundleddependencies From what I can tell, the Node.js bundled npm version should already be >= 6 in all LTS lines: https://nodejs.org/dist/index.json. I'd +1 if bundledDependencies works for us to release stable non-JSON OTLP exporters. Looks like support goes back to at least npm version 6 docs.npmjs.com/cli/v6/configuring-npm/package-json#bundleddependencies From what I can tell, the Node.js bundled npm version should already be >= 6 in all LTS lines: nodejs.org/dist/index.json. I'd +1 if bundledDependencies works for us to release stable non-JSON OTLP exporters. I'd like to release it once as unstable first for such a big change, WDYT? Sorry for the force push, I had the wrong email in the history. The CI failures for sdk-node resource detectors should have been fixed by #2844. Browser test failure seems unrelated Browser test failure seems unrelated @dyladan I've created an issue to track that https://github.com/open-telemetry/opentelemetry-js/issues/2852
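For reference, a bundled dependency is declared in the consuming exporter's package.json. A hypothetical sketch (the package names follow the thread; the version is illustrative):

```json
{
  "name": "@opentelemetry/exporter-trace-otlp-http",
  "version": "0.0.1",
  "dependencies": {
    "@opentelemetry/otlp-transformer": "0.0.1"
  },
  "bundledDependencies": [
    "@opentelemetry/otlp-transformer"
  ]
}
```

With this, `npm pack` includes the listed package inside the published tarball, so consumers get the code without it ever being resolved from the registry.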
Last night, Dan, cjd, Rainfly_x, and I had a nice long discussion about paid cjdns access, trust, and gateways. This expands upon Rainfly_x’s ideas here and provides a lot of ideas at a higher level than my previous post here. One of the first things to come up was: how should we determine trust? Before discussing how to implement trust, we must first determine the parameters that must be met by this system. We came up with the following:

- Anyone can gain trust: any node on the network can gain a high level of trust by doing all the right things and not being evil
- Trust must be easy to lose: if a node with a high level of trust consistently messes up, or does not follow the rule “don’t be evil,” then they must quickly and easily lose the trust that they had
- It must be as automated as possible: we do not want subjective human opinions getting in the way of our carefully generated trust network
- The trust of any node should be the same throughout the network: this one may not be as important as the others

Now, on to the methods we discussed to establish trust: This is very similar to what I proposed in my previous blog post. Essentially, trust between two nodes would grow with time, as long as they remained successfully peered and payment and service continued on time and with little interruption. The trust between each two nodes would then, in some way, propagate through the rest of the network. Inspired by ideas set forth in Rainfly_x’s original blog post, link quality would determine the trust of a node by its routes, latency, and bandwidth. Nodes with better values for these metrics would have better trust throughout the network. Of course, then we run into the issue of how to run these link quality tests. The only solution we came up with was to implement a two-layer system, where the top layer is humans deciding which link quality testers they would trust, and then the system automatically determining the trust of nodes from there.
Unfortunately, this does break our automated rule, so we are open to suggestions on how to fix that. Cjd proposed this interesting idea: why not have trust originate from the “core network” of hyperboria, and then flow outwards from there? We already have what many would consider to be the “core” of the network; that is, nodes such as seanode, forida.noble, fremont.noble (assuming derp doesn’t shut it down), and others. This “core” could probably be determined by making a network map of hyperboria from multiple perspectives, and then determining which nodes have a high number of peers and are most in the center of the network. As it is now, the core is fairly obvious if you look at a generated map (here or here). This may or may not run into the same problems as link quality, namely whose map to trust. If this idea goes anywhere I will probably write a blog post on it some time in the future. Combining ideas from all of the previous, uptime would determine trust by the uptime of a node or link. It would be determined similarly to “Core Network” or link quality, and be similar to time peered. This is probably the simplest of all the ideas that came up, and I do not think it requires any more explanation. The next thing I want to bring up from our discussion has to do with how to process payment. This still follows many of the ideas that Rainfly_x set forth in his first blog post. We need a method of payment that supports offline payments (to help first-time users get onto the network), does not have high transaction fees (if any at all), and does not require a central authority. Because of the problem with transaction fees, bitcoin probably should not be used for the everyday transactions. This leaves us to use either Open Transactions or something totally new. Open Transactions (OT): I have not read up on OT as much as I maybe should have so far, but Dan seems to be a large supporter of it.
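The “time peered” and uptime ideas above could be sketched as a toy scoring rule. Everything here (the function name, the constants, the halving penalty) is hypothetical illustration, not something from the actual discussion:

```python
# Toy trust model: trust grows slowly with clean peering time and is lost
# quickly on bad behavior ("easy to gain over time, easy to lose").

def update_trust(trust, hours_peered_ok, incidents):
    """Grow trust slowly for good uptime; halve it for each incident."""
    trust = min(trust + 0.01 * hours_peered_ok, 1.0)  # slow, capped growth
    trust *= 0.5 ** incidents                          # each incident halves it
    return trust

t = 0.0
for day in range(30):          # a month of clean, uninterrupted peering
    t = update_trust(t, 24, 0)
print(t)                       # capped at 1.0
t = update_trust(t, 0, 2)      # two incidents quarter the trust
print(t)                       # 0.25
```

The asymmetry (linear gain, exponential loss) is the whole point: a node spends weeks earning trust and loses most of it in one bad day.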
As I understand it, it currently has a couple of issues, specifically with inflation and counterfeiting, but those should be addressed as the project matures. If/when those are addressed, OT would be almost perfect, considering that, to my knowledge, it supports offline payments as well as other features we would find useful. We also floated a couple of other ideas in case OT did not work out in the long run, only one of which actually has potential. My (rather naive) idea was to make a currency similar to bitcoin, but that uses data transferred as a proof of work somehow. We quickly found many issues with this proposal, and I am only putting it here in case it sparks some brilliant idea from a reader. Another of my ideas was to have multiple currencies. The group found this much more promising than my previous idea. Essentially, it would work similarly to how currency worked in the US before the government introduced the USD. The network would have multiple “banks,” each with their own currency. Each node could decide what currency to accept, based on the trust of the banks (which goes back to our earlier discussion about trust in the network). These currencies would probably be loosely based off of OT, with changes allowing the network to support multiple currencies. Still, all of these have problems. Any currency that is specific to the hyperboria network – which is what we would like to have – still has one major issue: how to get on to the network in the first place. While we talked at length about this, I do not feel that I have a good enough understanding of any of the proposed solutions to write about them. We are still looking for ideas about this, and I will make another post once we have discussed this more. We also talked about gateways between hyperboria and the clearnet for a while. However, I will save that for a later post, as it is not directly related to what I have written so far, and this post is currently well over 1000 words.
To wrap it up, we had some potentially great ideas last night, and this could definitely be a jumping off point for future developments in paid cjdns and network trust. Thank you for reading all this. It is entirely possible that this post is riddled with errors; if it is, feel free to tell me about them. To comment, or further discuss this, I am bentley on efnet and hypeirc, /u/matteotom on reddit, and my email is email@example.com. Also, feel free to discuss this in the reddit comments (link coming soon, after I post to reddit).
Simply speaking, a transaction or a unit of work is a set of database operations that we want to treat as “a whole”. It has to either happen completely or not at all. To ensure the correctness of a transaction, a database must be atomic, consistent, isolated, and durable. These four properties are commonly known under the acronym ACID.

The transaction properties

You have already guessed it, there are four properties:

Property #1: Atomicity

Most often, but not always, a transaction is made of multiple SQL statements. The atomicity property states that all the statements must either complete entirely or have no effect whatsoever. No partial execution should be permitted. The idea is that a transaction must always leave the database in a consistent state. This leads us to the second property.

Property #2: Consistency

From the previous explanation, we understand that the transaction must only bring the database from one valid state to another. This property ensures that there are no database constraint violations.

Property #3: Isolation

The isolation property guarantees that uncommitted state changes are not visible to, and do not affect, other concurrent transactions.

Property #4: Durability

This property states that a committed transaction must permanently change the state of the database, even in case of a system failure like a power outage or crash. This implies that a successful transaction must always be recorded in non-volatile memory and/or a persisted transaction log.

Transaction in action

Until now we have discussed the database transaction in theory. Let’s see it in practice. We’ll consider a bank transfer scenario. Let’s say we have two clients in our database, Sarah and Patience. The first one wants to transfer an amount of 20 dollars to the second.
Here’s the process so far:

- Decrease Sarah’s total amount by 20 dollars
- Increase Patience’s total amount by 20 dollars

But the problem is that both of these operations must be fully completed (commit) or not at all (rollback). Before we continue, do create a table, client for instance, with id, name and amount fields. You can enter these data:

| id | name | amount |
|----|------|--------|
| 1 | Sarah Lifaefi Masika | 150 |
| 2 | Patience | 10 |

To make sure everything is OK, run the command below:

SELECT * FROM client;

You should see both rows, with Sarah’s amount at $150. Great! We’re now ready to proceed with the decrease and increase operations.

UPDATE client SET amount = amount - 20 WHERE id = 1;

We decrease $20 from the total amount of the user with id 1, that’s Sarah. Run the SELECT command again and the total amount for Sarah is now $130.

UPDATE client SET amount = amount + 20 WHERE id = 2;

The code snippet above adds $20 to the amount of Patience, so that it is now $30. Did you notice that we ran two separate operations? That’s not a transaction at all. Remember we said above that a transaction is a set of database operations that we want to treat as “a whole”. So, if we really want to run both operations as ONE operation, aka a transaction, here’s the code:

BEGIN TRANSACTION; -- <-- transaction begins here
-- decrease first
UPDATE client SET amount = amount - 20 WHERE id = 1;
-- then increase
UPDATE client SET amount = amount + 20 WHERE id = 2;
COMMIT TRANSACTION; -- <-- transaction ends here

That’s all for the transaction. Now if we run the SELECT command, we get Sarah at $130 and Patience at $30.
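The same transfer can be demonstrated end to end with Python’s built-in sqlite3 module (a sketch, not from the original post; the table mirrors the client table above, and the insufficient-funds check is added purely to illustrate rollback):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE client (id INTEGER PRIMARY KEY, name TEXT, amount INTEGER)")
conn.execute("INSERT INTO client VALUES (1, 'Sarah', 150), (2, 'Patience', 10)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Move `amount` from src to dst atomically: both UPDATEs happen or neither."""
    try:
        with conn:  # transaction scope: commits on success, rolls back on error
            conn.execute("UPDATE client SET amount = amount - ? WHERE id = ?", (amount, src))
            balance = conn.execute("SELECT amount FROM client WHERE id = ?", (src,)).fetchone()[0]
            if balance < 0:
                raise ValueError("insufficient funds")  # triggers rollback
            conn.execute("UPDATE client SET amount = amount + ? WHERE id = ?", (amount, dst))
        return True
    except ValueError:
        return False

transfer(conn, 1, 2, 20)      # succeeds: 130 / 30
transfer(conn, 1, 2, 1000)    # fails: rolled back, balances unchanged
print(conn.execute("SELECT id, amount FROM client ORDER BY id").fetchall())
# -> [(1, 130), (2, 30)]
```

Using the connection as a context manager is what gives atomicity here: the failed $1000 transfer never leaves a partial deduction behind.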
Content Crawling and Search Overview

SharePoint Portal Server provides extensive and extensible content crawling and search features that support full-text searching and a Structured Query Language (SQL)-based query grammar. The SharePoint Portal Server Search service can crawl content and its associated properties stored in internal as well as external Web sites, local and network file systems, Web Storage Systems, Microsoft Exchange 5.5 and Exchange 2000 Server, other SharePoint Portal Server computers, and Lotus Notes databases. The SharePoint Portal Server Search service and extended SQL query language support a broad range of simple and complex queries over multiple document sources, and can mix property-based filtering with full-text, linguistically-enabled content matching. Search results from these content sources are merged together.

Figure 5: SharePoint Portal Server Content Crawling and Search Architecture

The preceding figure illustrates components of the SharePoint Portal Server Search architecture. The following list describes those components.

- Search Engine. Component of the Search service that runs queries written in the SharePoint Portal Server extended SQL syntax against the full-text index.
- Index Engine. Component of the Search service that processes chunks of text and properties filtered from content sources, and determines which properties are written to the full-text index.
- Gatherer. Component of the Search service that manages the content crawling process and has rules that determine what content is crawled.
- Wordbreakers. Components shared by the Search and Index engines that break up compound words and phrases.
- Stemmers. Components shared by the Search and Index engines that generate inflected forms for a word.
- Filter Daemon. Component that handles requests from the Gatherer. Uses protocol handlers to access content sources, and IFilters to filter files.
Provides the Gatherer with a stream of data containing filtered chunks and properties.
- Protocol Handlers. Open content sources in their native protocol and expose documents and other items to be filtered.
- IFilters. Open documents and other content source items in their native format and filter them into chunks of text and properties.
- Content sources. Collection of data the Search service must crawl, and specific rules for crawling items in that content source. Items in content sources are identified by URLs. What distinguishes different types of content sources is the protocol portion of the URL.

Each SharePoint Portal Server workspace has an associated Gatherer process and its own full-text index called the workspace catalog. Each Gatherer process contains its own set of parameters, restrictions and plug-in components. Each Gatherer process also keeps its own logs and performance statistics. The content crawling process is started by a manual or scheduled instruction to crawl content, or by a notification from a file store — for example, a SharePoint Portal Server document store or a file share using NTFS — that notifies Search when content has changed. The Gatherer component is given a URL for the start address for a content source, and a crawl is initiated. The Gatherer uses a pipe of shared memory to request that the Filter Daemon begin filtering the content source. For the crawl process to be successful, the content source must have an associated protocol handler that can read its protocol. The Filter Daemon invokes the appropriate protocol handler for the content source based on the start address provided by the Gatherer. The Filter Daemon uses protocol handlers and IFilters to extract and filter individual items from the content source. Appropriate IFilters for each document are applied, and the Filter Daemon passes the extracted text and metadata to the Gatherer through the pipe.
The Gatherer runs the data through a series of internal components (such as the Persistent Query Service [PQS] component, which matches crawled documents against subscriptions stored in the system) to process the data before relaying it to the Index engine. At this time, the Gatherer's index component saves document properties to a property store separate from the SharePoint Portal Server document store. The property store consists of a table of properties and their values. Properties in this store can be retrieved and sorted. In addition, simple queries against properties are supported by the store. Each row in the table corresponds to a separate document in the full-text index. The index itself can be used for content queries. The property store also maintains and enforces document-level security that is gathered when a document is crawled. The data is then passed to the Index engine. The Index engine uses wordbreakers and stemmers to further process the text and properties received from the Gatherer. The wordbreaker component is used to further break the text into words and phrases. The stemming component is used to generate inflected forms of a given word. The Index engine also removes noise words and creates inverted indexes for full-text searching. These indexes are saved to disk.

Search Query Execution

Users can search for content on the dashboard site. When a search query is executed, the Search engine passes the query through a language-specific wordbreaker. If there is no wordbreaker for the query language, the neutral wordbreaker is used. After wordbreaking, the resulting words are passed through a stemmer so that language-specific inflected forms of a given word are generated. The use of the wordbreaker and stemmer in both the crawling and query processes enhances the effectiveness of search because more relevant alternatives to a user's query phrasing are generated.
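The indexing steps described above (wordbreaking, noise-word removal, stemming, inverted index) can be illustrated with a toy sketch. This is not SharePoint code in any way; the word list, the crude suffix-stripping "stemmer", and the function names are all hypothetical:

```python
import re
from collections import defaultdict

NOISE_WORDS = {"the", "a", "an", "of", "and", "to"}

def stem(word):
    # Crude stand-in for a real stemmer: strip a few common suffixes.
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def build_index(docs):
    """Map each stemmed, non-noise term to the set of documents containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in re.findall(r"[a-z]+", text.lower()):  # wordbreaking
            if word not in NOISE_WORDS:
                index[stem(word)].add(doc_id)             # inverted posting
    return index

docs = {1: "Crawling the content sources", 2: "The crawler crawled documents"}
index = build_index(docs)
print(sorted(index["crawl"]))  # both documents match once "crawling"/"crawled" are stemmed
```

The point mirrored from the text: because the same stemming is applied at index time and at query time, a query for "crawl" matches documents that only ever said "crawling" or "crawled".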
When a property value query is executed, the index is checked first to get a list of possible matches. The properties for the matching documents are loaded from the property store, and the properties in the query are checked again to ensure that there was a match. The result of the query is a list of all matching results, ordered according to their relevance to the query words. If the user does not have permission to a matching document, the Search service filters that document out of the list returned. Writing Search Queries The SharePoint Portal Server Search engine can be accessed using Microsoft ActiveX Data Objects (ADO), through the OLE DB Provider for Internet Publishing, or using the XMLHTTP COM object and the WebDAV/DASL protocol. When developing server-side applications with XMLHTTP, the serverXMLHTTP object must be used. Use of the XMLHTTP object will jeopardize the stability of your server. For more information on writing Search Applications, see Searching SharePoint Portal Server. Extending Content Indexing and Search For file types and formats that SharePoint Portal Server cannot crawl out-of-the-box, you can create custom indexing filters (IFilters). SharePoint Portal Server provides enhanced IFilter registration and loading methods for IFilters. If a content source must be accessed using a network or access protocol that is not already supported, SharePoint Portal Server provides extensibility interfaces that allow you to create custom protocol handlers.
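For orientation, a query in the extended SQL grammar mixes property filtering with full-text matching. A sketch along the lines of the documented syntax (the server name, scope path, and property URI here are illustrative, not taken from the text):

```sql
SELECT "DAV:displayname", "DAV:href"
FROM SCOPE('"http://server/workspace/Documents"')
WHERE CONTAINS('"content crawling"')
  AND "urn:schemas-microsoft-com:office:office#Author" = 'Smith'
ORDER BY Rank DESC
```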
A comparative study of commuting patterns in Dallas, TX and Washington, DC highlights the potential benefits of increasing housing density through the City of Dallas’ proposed inclusionary zoning initiative. Presented at the 2018 American Association of Geographers Annual Meeting. Click here for the presentation. This interactive tool allows users to weigh the importance of five different indices related to the opportunity of one's neighborhood: jobs proximity, poverty, racial diversity, violent crime, and the cost to develop housing. The user can identify high-opportunity neighborhoods while also considering the cost of subsidizing housing there. Click here to see. This bivariate choropleth map allows the user to view the overlap of poverty rates and racial diversity simultaneously at the census tract level. It's also searchable by city or state. Click here to see the map and here for a brief overview of methodology. Emergency Call Boxes and Campus Safety on the UT Dallas Campus Emergency call boxes boost safety by deterring crime and enabling rapid emergency response. UT Dallas should keep pace with campus construction and student body growth by installing new call boxes in strategic locations. This paper uses a viewshed analysis to propose the best places for UT Dallas to install future call boxes. Click here to see. This project won first prize at a university GIS poster competition (November 2016). It used a simple land cover classification to estimate the population of Midland County, Texas. It combined work in ERDAS Imagine, ArcMap, and R. Overall, a linear regression model estimated the population within 0.425% of the actual value, while a spatial autoregressive model underestimated the population by 1.939%. Click here for a PDF of the poster. When one studies racial and religious bias crime in the U.S., the differences between cities are stark: many cities report zero bias crimes, while a handful report hundreds each year. Why the disparity? 
This study employed a generalized estimating equation (GEE) model to make sense of the broad spatial and temporal differences in bias crime throughout 72 U.S. Combined Statistical Areas (CSAs) from 2006-2014. Click here for a PDF of the study. Programming - Python This project empowers users to access common-sense, credible, and easy-to-understand information about urban areas in the United States. Using Python programming and ArcGIS, the program generates choropleth maps, time series graphs, and basic statistical summaries on demand. All users have to do is input their geographical area and theme of interest. Click here for a PDF of the poster. Published in Greater Greater Washington. The neighborhood you call home has the potential to help your economic mobility and your health and well-being. That’s why it’s important to create more chances for families with low incomes to live in areas that are close to jobs and transit, with low poverty and crime, and high-performing schools. The World Economic Forum Water Initiative predicts "a 40% shortfall between water demand and available freshwater supply by 2030." This is a survey of the forces increasing global water demand, future stakes, and approaches to development and conservation. View here.
This blog post isn't going to walk you through how to get your site's insight score up to 100%. The Acquia Cloud interface does pretty well at describing exactly which steps to take to get yourself there. However, I found it very frustrating that my insight score was only updating once per day if I was lucky. I had trouble finding some sort of explanation on what exactly makes the system update. But once I found the necessary documentation, setting up the Acquia Connector module to give me hourly updates on my Insight score was easy peasy! Going forward I will assume you are using the Admin menu toolbar. If you don't have it already, do a quick 'drush en admin_menu -y', then do 'drush en admin_menu_toolbar -y', and finally 'drush dis toolbar -y'. This will enable admin menu, admin menu toolbar, and disable the standard toolbar that comes out of the box with Drupal. I will also assume that you're already using the Acquia connector module to manage your Acquia settings. (Sorry for all the assumptions) Step 1: Go to Configuration > System > Acquia Subscription settings ( /admin/config/system/acquia-agent ) Step 2: Un-check "Send via Drupal Cron" and click "Save settings" This default setting says that you want to send information to Acquia Insight every time the Drupal Cron gets run. Unfortunately it didn't seem to fire for me. I ran the cron multiple times, flushed my Drupal cache, cleared my Varnish cache, and slammed my head on the keyboard all to no avail. After you save your settings without this option checked, some new text appears. Enter the following URL in your servers crontab to send SPI data: http://siteURL/system/acquia-spi-send?key=RANDOMKEY Copy that URL and move onto your Acquia Cloud account. The next step is to enable a cron in the cloud that will fire off every hour. 
Step 1: Log into insight.acquia.com/ Step 2: Click on "Cloud" Step 3: In the sidebar menu, click "Cron" Step 4: Depending on which environment you're working in, click the appropriate "Add cron job" Step 5: Enter a name for your cron. (ex: Insight Score Update) Step 6: Add the following code to your Cron command: /usr/bin/wget -O - -q -t 1 http://siteURL/system/acquia-spi-send?key=RANDOMKEY Step 7: Change the command frequency to "Every hour at 15 minutes past the hour" and save that. That is all there is to it. Now the next time it is 15 past the hour, your cron will activate and update your insight score. Now get back to work optimizing your site for the Acquia Cloud!
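If you manage crontabs directly instead of through the Cloud UI, the same job can be expressed as one standard crontab line (a sketch; siteURL and RANDOMKEY stand in for the values shown on your Acquia Subscription settings page):

```
# min hour dom mon dow: fire at 15 minutes past every hour
15 * * * * /usr/bin/wget -O - -q -t 1 "http://siteURL/system/acquia-spi-send?key=RANDOMKEY"
```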
For work, I sometimes have to create fillable form fields in PDFs, which I create using the Prepare Form tool and "add a text field" option. I usually have to paste a large amount of script into the calculation field, which I copy from a Word document. I have never had an issue with this before now. However, suddenly I can no longer paste any text into any of the tabs within the text field properties box (see photo below; I cannot paste into the name, tooltip, or any field on the other tabs), at least if it's coming from another document or application. I can only paste text if it has been copied from the internet (I have no idea why it is now being selective). I have tried pasting from Word, another PDF, even my Outlook; nothing works except when copying from the internet (I've used Edge and Chrome, both work). Did I somehow change a setting in Acrobat and not realize it? Has anyone had a similar issue before? I have the most recent version of Acrobat Pro DC. I even tried uninstalling and reinstalling to try to fix it, and also tried restarting my computer. I have a work-around for this, but it's a hassle when I know this should be working the easier way.

Can you try pasting from Word to a plain text file (say Notepad or similar) and see if you can paste it from there?

Thank you for reaching out. As BarlaeDC suggested, please try copying and pasting the text from the Word file to Notepad or other text files. Check if that works. Also, try creating a new Word file and saving it locally. Check if you can copy and paste text from that file. Please share the Acrobat and OS versions on the machines. It would be helpful if you could share a screen recording with us.

I copied and pasted the text into both Notepad and OneNote (paste text only, no formatting), then copied that and pasted into the form field, and both of those worked.
Then I tried copying and pasting into a new Word document and saved it locally (I saved as both .doc and .docx) then tried copying and pasting from it, but that did not work. I can copy all my text and save it in OneNote and just copy from there from now on, it's just strange that I can't copy from Word. I have Acrobat DC version 22.0, and I have Windows 10 Enterprise version 20H2. Sorry I don't know how to screen record, but here's a larger screenshot example of where the issue is, I can't paste into any of the fields in the properties box if I am copying from Word. Again, not a huge issue now that I know I can save into a note and copy from there, just don't know what I did to make it not paste from Word anymore.
Opportunity environments are present in all of the sites where there may be competitors. In the last few years the hotel market has seen a downfall in business because of the downturn in the world economy.

Thanks for your great post. I have a question about feature reduction using Principal Component Analysis (PCA), ISOMAP or any other dimensionality reduction technique: how do we decide what number of features/dimensions is best for our classification algorithm in the case of numerical data?

My e-commerce system is not sophisticated and it does not support ad-hoc bundles. I'm sure you can understand. You can see the full catalog of books and bundles here:

Example: assuming that a is a numeric variable, the assignment a := 2*a means that the content of the variable a is doubled after the execution of the statement.

You can focus on delivering value with machine learning by learning and getting good at working through predictive modeling problems end-to-end. You can demonstrate this skill by building a machine learning portfolio of completed projects.

Thank you for the post, it was quite helpful. I have a regression problem with one output variable y (0

Your code is correct and my result is the same as yours. My point is that the best features found with RFE are preg, mass and pedi.

The LSTM recurrent neural network and the 5 ways it can be used to model time series prediction problems.

Students are saying "I want help to complete my assignment" and "I want someone to help me do my physics homework", and that is why we are here. Here you will find 24/7 support teams, dedicated professional degree-holding writers, secure payment methods, and even 100% satisfaction guarantees. A "Solve my homework" request will not be a problem for you!

If you have any concerns, contact me and I can resend your purchase receipt email with the download link.

It is the power of representation learning that is spurring such great creativity in the way the techniques are being used.

For example, marketing strategies in place: besides converting the leads into business, it is necessary to make sure that people and companies are aware of your existence and your services. Once the hotel starts marketing the rooms and the banqueting space, it will be easy to measure how successful the marketing was. The marketing's success will be measured through the number of leads being generated.

Clear algorithm descriptions that help you understand the principles that underlie each technique.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# author: 11360
# datetime: 2021/4/25 22:47

import torch


class DataSampler:
    def __init__(self, batch_size, sample_size, train_size, x_y_observe):
        """
        :param batch_size: number of points sampled from the whole domain
        :param sample_size: number of boundary points
        :param train_size: number of observations drawn per training batch
        :param x_y_observe: tensor of observed (x, y) pairs, shape [num, 2]
        """
        self.whole_space_size = batch_size
        self.boundary_size = sample_size
        self.x_y_observe = x_y_observe
        self.train_size = train_size
        self.grad = self.grad_estimate()

    def grad_estimate(self):
        """Estimate dy/dx at each observation with a backward difference."""
        num = self.x_y_observe.shape[0]
        grad = torch.zeros([num, 1])
        for i in range(1, num):
            grad[i, 0] = (self.x_y_observe[i, 1] - self.x_y_observe[i - 1, 1]) / \
                         (self.x_y_observe[i, 0] - self.x_y_observe[i - 1, 0])
        return grad

    def sample(self):
        """Sample points in [0, 4) over the whole space, and boundary points at x = 0."""
        x_batch = 4 * torch.rand([self.whole_space_size, 1])
        x_0 = torch.zeros([self.boundary_size, 1])
        return x_batch, x_0

    def sample_x_y(self, all=False):
        """
        A set of observations of the ODE.
        :return: sampled (x, y) observations and the matching gradient estimates
        """
        if all:
            return self.x_y_observe, self.grad
        batch_num = self.x_y_observe.shape[0]
        index = torch.randint(0, batch_num, [self.train_size])
        x_y_sample = self.x_y_observe[index, :]
        grad_estimate = self.grad[index, :]
        return x_y_sample, grad_estimate
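The backward-difference rule used by grad_estimate above is easy to check in isolation. Here is a hedged, torch-free sketch of the same computation on plain Python lists (the function name mirrors the method, but this standalone version is ours, not part of the original class):

```python
def grad_estimate(xs, ys):
    """Backward-difference slope estimates.

    grad[i] = (ys[i] - ys[i-1]) / (xs[i] - xs[i-1]) for i >= 1,
    and grad[0] stays 0.0, matching the torch version above.
    """
    grad = [0.0] * len(xs)
    for i in range(1, len(xs)):
        grad[i] = (ys[i] - ys[i - 1]) / (xs[i] - xs[i - 1])
    return grad

# Sampling y = x**2 at x = 0, 1, 2, 3 gives backward differences
# 1, 3, 5: the average slope over each interval.
print(grad_estimate([0, 1, 2, 3], [0, 1, 4, 9]))  # [0.0, 1.0, 3.0, 5.0]
```

Note that a backward difference is only first-order accurate; for noisy observations of an ODE, a central difference or a smoothing spline would usually give a better gradient target.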
Did you set up a drawing template and associate it with your Topobase workspace? What you will need to do is:
1. Start with an AutoCAD template (DWT).
2. Make sure the units are meters/feet or unitless. You can do this by typing UNITS on the command line.
3. Use the Map display manager to add data. Add the layer(s) that you want to see and edit the styles as necessary.
4. Save this file as a DWT file.
5. In Topobase Administrator, go to the workspace and select the DWT as your drawing template for the workspace.
The generate graphics command tries to load the DWT file and then loads each layer contained in it. The error you get seems to indicate that the layer in question is missing in the template file.

I had created a DWT from the .LAYER files associated with the Oracle dump I imported for the Water application. I did notice that DM showed the warning icon, but I was unable to get to the display element properties to see what I assumed would be a feature source query definition. I also specified this DWT for the Water workspace in TB Admin. When I run the client to generate graphics, I can see that the DWT is being found, as the display elements are present in the new drawing. It sounds like the issue pertains to the physical association between the display element and the corresponding data table. I've successfully worked with FDO and ArcSDE using DM and am assuming that the functionality is similar? Should I be able to view the properties associated with the display element (.LAYER)? It will likely be a week or so before I can get back on this, so I won't be able to try anything in the short term, but thank you again for your reply!

I have some additional information for you. Topobase tries to regenerate each drawing layer which was created with the Topobase FDO provider. In addition, the system checks whether a drawing layer belongs to a feature class from the Topobase workspace or not. It uses the connection parameters which you entered at login to get that information. Make sure that you use the same connection parameters for the workspace and also for the drawing.

And I would assume that since the layer was introduced to the DWT via "drag and drop", this does not guarantee that the connection information is correct. I was trying to access the properties for the drawing layer to verify just that, but was unable to.

You could open up the .layer file in a text editor to look at its contents (although it might not be easy to read) before you do the drag and drop. Another way is to disconnect the connection associated with the problem layer and then reconnect; that will require you to re-enter the connection parameters. Incidentally, editing the .layer files with a batch find-and-replace is one way to change the data connection parameters for the layer files that ship in the templates folder (e.g. the Oracle instance name).

I edited the existing .layer files to point to the correct connection name, "TB2007_WA", and was still unable to generate graphics. I then created a new DWT using metric units and created a feature source based on the Topobase provider. I added the features from the feature source list to the template, and was able to retrieve graphics. I captured the coordinate extents of the graphic information, exited the DWT, and was then able to successfully generate graphics, retrieve feature information and symbolize features.
Linux Best Practices

I'm a life-long Windows developer switching over to Linux for the first time, and I'm starting off with Ubuntu to ease the learning curve. My new laptop will primarily be a development machine: 6GB RAM, 320 GB HD. I'd like there to be 2 non-root users: (a) Development, which will always be me, and (b) Guest, for anyone else. I assume the root user is added by default, like System Administrator in Windows.

(1) I'd like to mount /home to its own partition, but how does this work if I have two user accounts (Development and Guest)? Are there 2 separate /home directories, or do they get shared? Is it possible to allocate more space for Development and only a tiny bit of space for Guest in GRUB2? How?!?!

(2) I'm assuming that it's okay that all of my development tools (Eclipse & plugins, SVN, JUnit, ant, etc.) and Java will end up getting installed in non-/home directories such as /usr and /opt, but that my Eclipse/SVN workspace will live under my /home directory on a separate partition... any problems, issues, concerns with that?

(3) As far as partitioning schemes, nothing too complicated, but not plain Jane either:
Boot partition, 512 MB, in case I want to install other OSes
Ubuntu & non-/home file system, 187.5 GB
Swap partition, 12 GB = RAM x 2
/home partition, 120 GB
I don't have any bulky media data (no music or video libraries; this is a lean and mean dev machine), so having 320 GB is like winning the lottery and not knowing what to do with all this space. I figured I'd give a little extra space to the OS/FS partition since I'll be running JEE containers locally and doing a lot of file IO, logging and other memory-intensive operations. Any issues, problems, concerns, suggestions?

(4) I was thinking about using ext4; it seems to have good file timestamping without any space ceiling for me to hit. Any other suggestions for a dev machine?

(5) I read somewhere that you need to be careful when you install software as the root user, but I can't remember why. What general caveats do I need to be aware of when doing things (installing packages, making system configurations, etc.) as root vs the "Development" user? Thanks!

Your question is somewhat broad... perhaps split it up into several questions? There is only one /home; users' home directories are usually created under it (e.g. /home/dev and /home/guest). Also, quotas. Correct. No problems. That's overkill for /boot, unless you intend on placing an entire live image in there to boot with, e.g. MEMDISK; I'd cut it down to 200MB. / should only need about 30-40GB; allocate some towards /srv instead, and do your staging in there. Unless you need to access it from Windows, ext4 is a good choice. Do as little as possible as root, but at the same time only ever do root-related things as root.

Thanks for the helpful feedback! To make sure I understand your recommendations, in #3 are you suggesting that I partition / and /srv separately? This would mean partitions for /boot, /, /srv, swap and /home, yes? Also, what typically goes in /srv? Thanks again!

Correct. http://www.pathname.com/fhs/pub/fhs-2.3.html#SRVDATAFORSERVICESPROVIDEDBYSYSTEM
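On the quota point above: per-user disk limits are normally enabled through mount options rather than at partitioning time. A hedged sketch of what that could look like in /etc/fstab (the device name /dev/sda4 is illustrative, not from this thread):

```
# /etc/fstab fragment: enable user quotas on the /home filesystem
/dev/sda4   /home   ext4   defaults,usrquota   0   2
```

After remounting, tools such as quotacheck (to initialize the quota files) and setquota or edquota (to set per-user limits, e.g. a large allowance for the Development account and a small one for Guest) do the rest; check your distribution's quota documentation for the exact invocations.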
Doctors Beck & Stone (Featured) — Regional Manager, Hong Kong: Oversee operational issues; Microsoft Office competency is essential; experience of integrating hospitals into a group would... (Microsoft Outlook, Microsoft Word and Microsoft Excel) competently; excellent organisational skills; ability to create productive working relations...
QA Apprenticeships, 13 Jan: IT Traineeship with Microsoft, Reading (RG6). Want to work at Microsoft's UK Headquarters? This Traineeship is a fantastic and unique opportunity to learn from industry specialists across...
Capita Resourcing, 12 Jan: Microsoft Dynamics AX Developer, London (contract), £500 - £550 per day. I am looking for a Microsoft Dynamics AX...
Epson Europe BV, 06 Jan: Microsoft Dynamics CRM Specialist. ...and ability to express concepts in a concise and logical manner; knowledge of the Sales Pipeline model; good numeracy skills; proficient with Microsoft...
Itecco Limited, 09 Jan: Microsoft Trainer, contract, London (Central). Initial 3 month duration (likely extension), £250/day. I am looking for an experienced Microsoft...
IBM Client Innovation Centre, 04 Jan: Role: Microsoft Specialist (Ref: MSLE1), Leicester. Salary: up to £52,600 (dependent on experience). Contract: full time (40 hours)...
Search And Selection, 17 Jan: Partner Analyst (Microsoft Excel), London, £25,000 - £30,000 + benefits. Company: a global IT provider that employs over 10,000 people worldwide...
Michael Page Technology, 18 Jan: Senior Microsoft Dynamics AX Architect. You will work within a team of Microsoft technical specialists: architects, designers, engineers...
Nigel Frank International Ltd, 16 Jan: Microsoft BI Developer. Consultant: Jake Benson / 0203 879 8360. Day rate: £375 - £450; location: City of London; start date: 30/01/2017...
World Fuel Services, 10 Jan: IT Engineer (Cisco, Fortinet, Microsoft), required by a Fortune 100 company and recognised leading global... High Wycombe, Basingstoke, London
Thebes IT, 16 Jan: Technical Consultant, Microsoft Dynamics. Key essential skills: Microsoft Dynamics, build and config (MS Dynamics), stakeholder management... High Wycombe, Basingstoke, London
At Microsoft, our mission is to empower every person and every organisation on the planet to achieve more. Achieving that mission begins and ends...
Microsoft Artificial Intelligence & Research team: recruiting researchers and engineers at Microsoft Research in Cambridge, UK. Microsoft has recen...
Sr Research Software Dev Engineer, benefits. Location: Microsoft Research Centre, Cambridge. We are looking for a Software Development Engineer to complement the Systems and Networkin...
Software Engineer (Cloud Services), SwiftKey: SwiftKey proudly joined the Technology and Research group (TnR) at Microsoft to unite in working to empower every person and every organization...
Research Scientist (HoloLens): ...of shipping ground-breaking technologies in Microsoft products including Kinect and HoloLens. The team is growing, and we are looking for exceptio...
...will be a good team player, proactive and comfortable challenging and driving continuous improvement. Experienced with Visio and Microsoft...
Head of Information Security: ...Architecture and Frameworks; knowledge of cloud computing infrastructure (e.g. Microsoft Azure); very strong grasp of the technologies used to de... London (Kingston Upon Thames)
Here are some of the biggest mistakes I made hiring programmers. To an average person, they might not look like mistakes. But once you get a little experience in this domain, you will understand why what I did wrong was so wrong!

(1) Initiative (or the lack thereof): The Los Angeles Programmer
The first programmer I hired was actually the best I have ever hired. However, he lacked a desire to get things done for me. I had to crack the whip and visit him regularly to coerce him to finish work. My mistake here was that I didn't shop around to see if there was anyone else who had comparable skills accompanied by a little more initiative.

(2) Interviewing without testing: The North Coast Programmer
Many years went by, and then my first programmer quit and his helper got fired. I was left high and dry: no programmers, and no way to find good ones in a worldwide situation where there was an acute shortage of programmers. I interviewed several companies I liked. I tried to decide which company to hire purely based on an interview, which was a huge mistake. The interview only tells you one dimension about a person: how they communicate when they are trying to impress you. It doesn't tell you how they work, or if they get things done on time. The company I hired disrespected all deadlines and even tried to cheat me several times. After that I learned that you have to try companies out with small, inconsequential test projects before giving them the passwords to your main sites. Additionally, they tried to get me to communicate with the "project manager" instead of the programmer. But the project manager didn't make sure anything got done and was completely useless. So, when anyone tries to block critical channels of communication: fire them.

(3) Knowing the boss, but not getting to know the programmers: An India programming nightmare
I had a bad feeling about this, but I had no choice. I needed my site to be in the hands of someone I trusted.
I had known Deepak for years. So, I offshored my project to India. The first programmer he gave me was very acceptable and did good work. So, I handed my project over to Deepak. Little did I know that his programmers had gone far downhill in the last few years because the big companies worldwide had been poaching quality programmers. So, I started out with a programmer who just couldn't function, then fired him and moved on to another one of Deepak's programmers, who was better. She left on maternity leave and then I got a third one who was somewhat capable of doing my assignments. Had I interviewed these programmers by phone individually, and tested them on small test projects before allowing them to work for me, I could have avoided the dysfunctional results given to me. Now I know.

(4) Communication seemed open, but was blocked: The Arizona dry spell
I gave assignments to a number of other programmers who all went on strike, until I found a company who seemed promising. First of all, they answered their phone. I was happy that they kept their channels of communication open, as closed channels can ruin projects and have become a deal breaker for me. The trick was that they changed their willingness to communicate the minute I put my reliance in them. I could talk to the receptionist, who assured me that she could relay any critical information to me. The problem was that they forbade me from talking to the programmer in critical situations, and the contact person was never given any critical information unless I harassed them many times. The result was that the programmer either didn't finish work correctly or at all, or made some serious blunders which never would have happened if he would just double-check his steps with me. But his attitude was that I didn't know anything, so I should just stay out of it. The reality is that he doesn't know a lot of things about my site that I do know, which he could have found out if he would just answer his damn phone!
This was one of many deceptive things programming companies have done to me.

A quick note: open channels of communication are imperative
I have a rule that all channels of communication need to be open. I need to be able to reach the programmer, the boss, and the project manager if there is one. If one of these channels is blocked, then I fire the company immediately. However, if the programmer is busy and doesn't want to be bothered, I don't mind communicating with an intermediary some of the time if it will make it easier for them, provided that they don't cut me off completely from communicating with the programmers. Most companies don't want you talking with their programmers, so this is a constant issue. I just tell them I'll fire them if they don't cooperate on this front, or that I won't hire them for any serious work if they block communication even once. You have to stand your ground or they will keep you behind a barrier nine times out of ten.

(5) Silence at an interview: The beach programmers
The boss said that none of his seven programmers were willing to show up at an office. Later on I suspected that there were no seven programmers, just the one who showed up at the interview and sat silently for three hours while the sales manager chatted me up. I didn't realize that someone who sits silently for so long is a huge risk. Such people do not like humans and don't care to interact with my species either. They are dangerous if you put them on a project. Here's what happened. We did a little test job and looked at the site at a cafe. I drove down to see them. After he had agreed to take my project and give me 20 hours a week, he delayed finishing the test project, and after I spent $800 on hotel rooms he uttered the words "another project" and just quit altogether. Antisocial people do antisocial, irresponsible, inconsiderate things. Beware. Nobody is perfect, but antisocial people are much more dangerous than the average person.
Additionally, these programmers went on vacation all the time and "brought their work with them." I don't know if their vacation schedule caused a problem, or just their attitude of doing whatever they felt like, but too many vacations could be a warning sign.

(6) Giving the code without a deadline in Orange County
I met a nice guy in Orange County. I really liked him and he really liked coding. He described himself as a crackerjack coder. He seemed like the gentleman of the business: sociable, smart, nice and trustworthy. After waiting six weeks, he informed me that he couldn't start my assignment because it was in PHP and he didn't know PHP. The code was in ASP Classic, and he had not even looked at it because he had "another project." Now, where have I heard this before? If I had given him a 20-day deadline to fix some code which would only take a few hours, then I would have been able to give the job to the next guy in line without such a long delay while my website wasn't functioning correctly.

Another quick note: "Another Project"
The biggest reason why a programming company will not finish work for you, or talk to you, is because there is "another project." If you test programming companies out, see how well they get your work done when they have "another project." Otherwise you will be on the back burner until you dump them for another company who does the same thing.

(7) Not getting a bid
There was yet another programmer who I really liked. He was decent to me for the most part. He had done several small projects for me. They weren't necessarily done on time, but they got done. So, I gave him another slightly more complicated project. It took twice as long as I thought necessary and was done wrong. If I had had him give me a formal estimate for the project, I would have been able to hold him responsible for fixing it and getting it done according to specifications by a certain date. Yet another mistake on my part, because I had developed trust in someone.
Even if you trust a programmer, for well-defined tasks that take more than 10 or 20 hours, get a formal bid.

(8) Testing them on easy stuff only
I learned the hard way that you have to test companies out before using them. So, I tried yet another California company. I really liked the boss. They got 100% on my project and finished it quickly. Then, I gave them a complicated assignment and asked them to bid on it. Their bid was double or triple what I thought a top-notch programmer would charge. Were they cheating me? Were they just being careful? Or was their programmer not as senior as they portrayed him to be? A junior programmer would realistically require as many hours as they bid. The problem was that I tested the company out on easy work, but didn't test them on complicated tasks before hiring them. It is good to have a comprehensive score sheet on any company you hire that covers communication, meeting deadlines, efficiency, cleanliness of code, and how they function at different levels of complexity. I made exactly the same mistake with another company in India who did exactly the same thing. They did great on my test project, but then bid 800 hours on a 100-hour project that was slightly complicated. Once again, I fell into a pitfall and learned the hard way.

(9) Not having backups
I hired programming companies without having qualified backups. The result was that when they started being irresponsible I couldn't just fire them, because I had nobody else to dump my project on. I had already run through my supply of people I thought were my backups. They wouldn't call me back or cooperate. A backup is not a backup unless you know they are going to perform reasonably. Otherwise it is like walking on a frozen pond. You put your foot on the ice and it breaks. Then you step to the left to your backup spot on the ice, which also breaks; then you go back one foot and it yet again breaks.
You need to find ice that doesn't break even when you pound on it; then you have a backup. If Warren Buffett were hiring programmers, he would probably have at least four meticulously tested backups at all times for security if he had a serious project as an entrepreneur.

(10) Giving deposits without a contract in the Bay Area
I have given many people deposits. One company in the Bay Area took my deposit and left me high and dry. I couldn't get the programmer to return calls. I had to keep calling his boss just to get him to get back to me. What is the problem? I finally gave up. I let them keep the deposit. But honestly, you have no leg to stand on if you give a disreputable company your deposit. And you have no way to know if the company is reputable unless you work with them. Most companies don't have that many reviews on the internet, and those reviews are not always trustworthy in any case. If you have a contract that stipulates that work must be done to specifications by a certain date, and that otherwise they not only give the deposit back but pay a penalty for wasting your time, then it is easier to sue them when they screw up. Getting them to sign such a contract might be close to impossible, but you need some device to ensure your safety; otherwise you are gambling. Programmers are so busy these days that if you don't pay up front, perhaps none of them will work with you! So, you are not in much of a bargaining position. So, having a contract is just a thought.
package cuke4duke.internal;

import org.junit.Test;

import java.text.NumberFormat;
import java.text.ParseException;
import java.util.Locale;

import static org.junit.Assert.*;

public class UtilsTest {

    @Test
    public void shouldCreateEnglishLocale() {
        assertEquals(Locale.ENGLISH, Utils.localeFor("en"));
    }

    @Test
    public void shouldCreateUSLocale() {
        assertEquals(Locale.US, Utils.localeFor("en-US"));
    }

    @Test
    public void shouldFormatLolcatDoubles() throws ParseException {
        assertEquals(10.4, NumberFormat.getInstance(Utils.localeFor("en-LOL")).parse("10.4").doubleValue(), 0.0);
    }

    @Test
    public void shouldFormatEnglishDoubles() throws ParseException {
        assertEquals(10.4, NumberFormat.getInstance(Utils.localeFor("en-US")).parse("10.4").doubleValue(), 0.0);
    }

    @Test
    public void shouldFormatNorwegianDoubles() throws ParseException {
        assertEquals(10.4, NumberFormat.getInstance(Utils.localeFor("no")).parse("10,4").doubleValue(), 0.0);
    }

    @Test
    public void shouldFormatNorwegianDoublesWithEnglishLocaleDifferently() throws ParseException {
        assertEquals(104.0, NumberFormat.getInstance(Utils.localeFor("en-US")).parse("10,4").doubleValue(), 0.0);
    }
}
Quip adds documents, spreadsheets, and tasklists to your Slack experience
- Elevate ideas to Quip docs
- Share Quip docs in Slack
- Get notified of changes to your docs
- Available on iOS/Android/Desktop/Web

Articles & Videos

An assembly language (or assembler language) is a low-level programming language for a computer or other programmable device in which there is a very strong (generally one-to-one) correspondence between the language and the architecture's machine code instructions. Each assembly language is specific to a particular computer architecture, in contrast to most high-level programming languages, which are generally portable across multiple architectures but require interpreting or compiling. Assembly language is converted into executable machine code by a utility program referred to as an assembler; the conversion process is referred to as assembly, or assembling the code.

When I disassemble my assembly code with gdb, the two pointers that point to the data section 0x1000 up are with the .text section. However there are these two: strheq and andeq in front of it. Will …

I had successfully built a project in VS2013 using VB.NET. I added the CRforVS_13_0_17. I deployed on a client's server with no problems. When I installed on the next server the client already has C…

Is it possible to store a CLR assembly in a SQL table... and call it from there instead of a folder?

How hard is it to change the top cover? Also, is it possible I can put something around it which covers the crack instead of tearing everything apart?

The Renesas RH850F1L microcontroller has a number of pins that can be configured as general purpose inputs or outputs. What might the following statement mean? A particular pin is configured as Digita…

I bought a used driver-side cylinder head with a 6-month warranty for no cracks. My question is, when I take it to the machine shop, what kind of work will I ask them to do on it? The old one, driver si…

I want to include the following two DLLs in my .NET project. On searching in t…

I want to write an assembly program to generate Fibonacci numbers. Following is the code, but it does not work.

Can anybody give me a hand with how to jmp to a specific address? I know I can do something like mov eax, 0x404040 but I cannot modify registers or the stack. I'm doing some kind of asm code injection (coded in Delphi)…

Having some issues building a DLL that must be registered for "COM interop", COM visible so it can be called from VB6. Visual Studio says: "Cannot register the assembly" Access to the regi…

Visual Studio 2015: This code has been working for a few months and has suddenly stopped. Passing a Queue to be written out to an Excel file: C# project / Visual Studio 2015 / Windows 7 Pro 64-bit.

I'm having issues with a third-party driver supplied to me for a project. When testing on a client 32-bit machine, the 32-bit driver throws the…

I am unable to get the background color of a dialog button to match that of the dialog. I am using the following code: invoke GetDlgItem, hWnd, IDC_ABOUTCLOSE mov hButtonClose, eax

I am using MASM32 and do not want to code dialogs in dialog units. I would like to be able to define the dialog in a resource file.

If I say, output register 2 to 0x4498 (CPU ---> device), does it mean that the data from the content of CPU register 2 is supposed to get transferred to port address 0x4498 of the device registe…

If I say, output register 2 to 0x00662288 (Vmem address), I think the content of register 2 (say AX) will be transferred to this Vmem address, which will internally be mapped to some physical memo…

In the computer science world, do we have any mathematical algorithm which can pick a random number, something like throwing a die and getting any one of 6?

Hey, so I'm getting stuck on the fourth phase of binary bomb. What I've figured so far is that this phase uses a recursive call to func4. This is where I'm having my difficulties, because I'm not ex…

I just copied an existing project to a new machine to start developing on it. The project originally referenced an older version of the assembly (v22.214.171.124-126.96.36.199). My new machine has the latest vers…

when installing crystal report runtime (CRRuntime_32bit_13

(Note: I've tagged this C as well as C++ to get maximum exposure, sorry if this offends anyone. I've also tagged it assembly as I feel there may be some gurus there who have come across …

I cannot figure out how to resolve this: this instruction moves the address of the 2e byte into %eax: movl $L1+4, %eax…

Hello everyone, I made a form (WPF User Control) with a textbox called Tracking_Num, a comboBox called Courier_List, and a button called Tracking_Button. The idea is the user selects a courier, enters…

Recently, I have been doing a study and tutorial on the execution times of different scripting and programming languages. For example, the following simple C++ "for loop" is completed within 0.01 se…
Mobile games are video games played on smartphones, tablets and portable media players. Mobile Game Development India is the set of processes and procedures involved in developing the software needed to run games on wireless gadgets like smartphones, tablets and more. Mobile gaming has grown into such a major segment of the mobile application and software business that major game producers are building strategies around this new platform when designing mobile apps and games, to grow their business. The time is long gone when children played games formulated to build physical stamina. Today's games are mind-benders for even adults. With niche genres of sci-fi, adventure, horror and treasure hunts, the 'Game is on' - literally - for any game development company that seeks to profit from its ability to entertain with action-packed games. The experience is so real on the virtual platform that technology is being maximized on gaming platforms like iPhone, iPad, Android, Windows, Facebook, Mac, PC etc. SamifLabs is an ace mobile game programming company with a decade of experience in this vivacious domain. With an expert team of mobile game developers, we have developed fascinating games packed with an extra dose of excitement. We develop engaging and captivating games for our global clientele. Our Mobile Game Development Services can be listed as follows:
- iPhone Games
- iPad Games
- Android Games
- Symbian Games
- Windows Mobile Games
- Blackberry Games
However, business success depends on how well-liked the app is, which is what leads to its purchase. To ensure your gaming application is loved and bought by millions, get in touch with SamifLabs. We develop highly interactive, unique, entertaining and addictive gaming applications that gamers enjoy playing for long hours. We strive to deliver the ultimate gaming experience to players.
As a mobile game development company, we have expertise in developing everything from simple mobile games like quizzes to full-blown mobile games with enhanced graphics and characters. We have a talented team of 2D and 3D game developers which focuses on mobile game development for all mobile platforms, including iPhone Game Development, Android Mobile Game Development, Windows Mobile Game Development and Blackberry Mobile Game Development. We are trusted mobile game developers and PC game developers for various agencies throughout the globe. We render game development solutions for any mobile handset, e.g. Nokia, Siemens, Sony, Samsung, as well as other Pocket PCs and any new upcoming mobile handsets. We can even develop mobile games in Flash for easy operation. Why choose us for Mobile Game Development?
- Enthralling games with fresh concepts
- 2D and 3D games with rich graphics and sound effects
- Single/multi-player games development
- Dedicated game developers
- Cost-effective rates
- Proficiency in diverse SDKs
- User-centric game development
- 24×7 support
The SamifLabs team develops everything from small quiz games to mobile augmented reality games and is one of the best mobile game development companies, concentrating on 2D and 3D games mainly supported by iPhone, iPad and Android mobiles. Our developers are familiar with the leading technologies in the mobile game development domain, building efficient applications and games for mobile devices. We are a mobile game development company that is always keen on building new relationships with clients. Our mobile game developers are eager to hear about your ideas and help you bring them to life. Please visit our project starter page here to get started.

Windows Mobile Game Development India

Microsoft's most popular OS, with its rich features and amazing functionality, offers interactive and attractive Windows mobile games. Windows Mobile is a smart operating system by Microsoft.
Using OpenGL ES, Windows Mobile offers a new world of advanced 2D and 3D graphics and effects for enthralling and engaging game development. Our expert Windows Mobile developers, skilled in the SDK and the latest Windows Phone 7, provide the most innovative and engaging gaming solutions. We at SamifLabs provide you with innovative games with rich graphics. Our Windows Mobile game developers are highly creative and provide you with the best gaming experience on your Windows mobile phone. Our Windows Mobile application development team has experience working on all mobile handsets and can provide you with your favorite games right on your phone! With the world of development getting more and more creative, and with development tools and mobile hardware now advanced enough to support that creativity, developers can give innovative, graphically rich and enticing games to customers. SamifLabs specializes in offshore Windows Mobile game development. We have a team of highly experienced software engineers and professionals to serve customers globally. Our goal is to ensure each project is executed on time, on budget, and in line with customer objectives. Our Windows Mobile game developers offer the following services:
- Windows Game Design & Development
- Windows Mobile 7 Series Game Engine Development
- Windows Mobile Game Porting
- UI, 2D/3D Animated Character Designing
- HD Game Development
- Migration from single-player to multiplayer games
- Windows Mobile Game Testing
- Marketing & Promotion of Windows Mobile Games
Our expert Windows Mobile game developers have sound knowledge of Windows Mobile game programming and the Windows SDK, which makes them unbeatable. They have crystal-clear fundamentals and experience in .NET, C# and other programming languages, which empowers them to provide the most innovative and entertaining gaming solutions for your Windows mobile.
With a talented team of world-class mobile game developers across all major mobile operating systems, SamifLabs provides an expert team specialized in Windows Phone 8 game development and has its development centre in India. We understand the growing Windows Phone market and provide you with the best resources to give shape to your game ideas. Microsoft has launched Windows Phone 8 in partnership with Nokia and has drawn some great reviews from tech experts across the world, who have rated the UI as refreshing. Windows Phone has definitely paved the way for the next app market after Apple and Android. Advantages of game development for Windows Phone:
- Emerging market for Windows Phone
- Positive critical reviews of the Windows platform
- Nokia as an early adopter, with more to follow
- Potential tablet market with participation from major players in the PC domain
- Low competition in the Windows app store can lead to more visibility and success
- The added advantage of a large Windows user base
If you are looking for a games application development company - a Windows Mobile games application development company that can provide you better solutions at a competitive price - along with related services like graphic design, theme design, icon design, and animation and audio development for Windows Mobile, your search ends here. We have the required proficiency, skill and development experience in Windows Mobile games development, and you can also hire a Windows Mobile programmer from us. Please feel free to contact us to know more about our Windows Mobile Game Development Services.
How to create custom armors without replacing existing stuff? Just like other items, armor has a texture for when it is held or in your inventory, but it also has a second texture for when a player equips it. Oraxen alters this second, on-body texture by using colored leather armor and shaders. This second appearance has some limitations and requires some practice, since it relies on a trick with leather armor and colors. If you are using shaders via either the Optifine or Iris mod, you will need some additional steps. For Optifine, everything is handled automatically. For Iris, you also need CIT Resewn; everything else is handled for you. A: item appearance B: body appearance You must be careful when naming your armors so that the textures are detected correctly. If you want to create an amethyst armor set, then your item sections must be:
- amethyst_helmet
- amethyst_chestplate
- amethyst_leggings
- amethyst_boots
And in step 2 you'll be able to create the textures:
- amethyst_armor_layer_1.png
- amethyst_armor_layer_2.png
For this we will be using the below config example for reference: displayname: "<gradient:#FA7CBB:#F14658>Ruby Helmet" Make sure that the item's ID, the first line in the above example, follows the pattern armorname_armortype. For the rest of the above set that would be ruby_chestplate, ruby_leggings and ruby_boots. This is also why LEATHER is the only material that works: custom armor cannot be made with Diamond as the base material. To get custom armor values, simply add Attribute Modifiers. Make sure your armor's resolution fits the value set in settings.yml. By default armor_resolution is set to 16, which means that your textures must be 64x32 pixels. If you want to use a higher resolution, you'll have to change the value in settings.yml. For example, armor_layer files with 128x64 pixels must have a resolution of 32 in settings.yml.
You cannot have some at 64x32 and some at 128x64; it is one or the other. Also make sure that the bit depth of your textures is 32 bits. Anything else means the textures are not fully transparent where they should be, and the pixels the shader uses will render black. This will not break Optifine/Iris versions but will break all vanilla versions. You can make your texture emissive (no Optifine required) by adding another file with the same name ending in _e.png, for example ruby_armor_layer_1_e.png. This texture will be treated as an emissivity map, where the alpha of each pixel is treated as the amount of emissivity. To get your textures registered correctly, their names need to contain armor_layer_X. For example: ruby_armor_layer_2.png. You can put them in any folder of the pack textures.
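Putting the naming rules together, an item section for the ruby helmet might look like the sketch below. Only the displayname line comes from the example above; the other keys are assumptions about Oraxen's item format and may differ between versions:

```yaml
# Hypothetical sketch of an Oraxen item section for a custom armor piece.
ruby_helmet:                   # the ID must follow armorname_armortype
  displayname: "<gradient:#FA7CBB:#F14658>Ruby Helmet"
  material: LEATHER_HELMET     # leather is what makes the color/shader trick work
  # texture, pack and Attribute Modifier options omitted here
```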
<?php
require_once("setup.php");
// error_log("Problems");
// error_log(print_r($_POST, true));

// Instead of using session and prefs to track
// current_problem_id, start_time, end_time, and
// student_answer, we will use local variables.
// When their values must survive a redirect,
// we will pass the values in the redirect request.
$c_problem_id = Null;
$c_start_time = Null;
$c_end_time = Null;
$c_answer = Null;
$c_topic_id = Null;
$selected_topics_list_id = Null;

if (isset($_POST['topic_checkbox_submission'])) {
    // check to see if new topic was selected
    // get from checkboxes if available and put into preferences
    $selected_topics_list_id = $_POST['topic_checkbox_submission'];
    $usrmgr->m_user->SetSelectedTopicsForClass($usrmgr->m_user->selected_course_id, $selected_topics_list_id);
    // caliper event
    $caliper->assessmentStart($selected_topics_list_id);
    header('Location:problems.php');
} elseif (isset($_POST['topic_link_submission'])) {
    // check to see if new topic was selected
    // get from link if available and put into preferences
    $selected_topics_list_id = $_POST['topic_link_submission'];
    if (intval($selected_topics_list_id) !== 0) { // make sure not to write a string
        $array = array();
        $array[] = $selected_topics_list_id;
        $usrmgr->m_user->SetSelectedTopicsForClass($usrmgr->m_user->selected_course_id, $array);
        $caliper->assessmentStart($array);
    }
    header('Location:problems.php');
} elseif (count($usrmgr->m_user->GetSelectedTopics()) < 1) {
    # redirect to selections
    header('Location:selections.php');
    exit;
} elseif (isset($_POST['skip'])) {
    // check to see if user hit "skip" button
    // get end time and compare to start time to get total time
    $end_time = time();
    // get current problem
    $current_problem_id = $_POST['problem'];
    $current_problem = MProblem::find($current_problem_id);
    $start_time = $_POST['started'];
    // get user_id
    $user_id = $usrmgr->m_user->id;
    // get current topic_id and omitted problems list for given topic
    $current_topic_id = intval($_POST['topic']);
    $omitted_problem = new OmittedProblem($user_id, $current_topic_id);
    $current_omitted_problems_list = $omitted_problem->find();
    // update tables upon response
    $response = new MResponse($start_time, $end_time, $user_id, $current_problem_id, Null, false, $current_topic_id);
    $response->update_skips();
    // caliper event
    $caliper->assessmentItemSkip($response, $current_problem);
    header('Location:problems.php');
} elseif (isset($_POST['submit_answer'])) {
    // check to see if user submitted an answer
    // if so, {set pref 'problem_submitted' to something other than null to display submitted problem view
    // if they get the problem right, exclude problem in future}
    if (isset($_POST['student_answer'])) {
        // increment page_loads
        global $usrmgr;
        $ploads = $usrmgr->m_user->page_loads;
        if (is_null($ploads))
            $ploads = 1;
        else
            $ploads += 1;
        $usrmgr->m_user->SetPageLoads($ploads);
        // get end time and compare to start time to get total time
        $c_start_time = $_POST['started'];
        $c_end_time = time();
        // get student answer
        // $student_answer = $_POST['student_answer'];
        $c_answer = $_POST['student_answer'];
        // get current problem and correct answer
        $c_problem_id = $_POST['problem'];
        $current_problem = MProblem::find($c_problem_id);
        $current_problem_answer = $current_problem->m_prob_correct;
        // get current topic_id and omitted problems list for given topic
        $current_topic_id = intval($_POST['topic']);
        // get user_id
        $user_id = $usrmgr->m_user->id;
        // if the student answered correctly, add current problem to omitted problems list
        // for given topic and set student_answered_correctly to true
        if ($current_problem_answer == $c_answer) {
            $omitted_problem = new OmittedProblem($user_id, $current_topic_id, $c_problem_id);
            if ($omitted_problem->count() < 1) {
                $omitted_problem->add();
            }
            $c_student_answered_correctly = true;
        } else {
            $c_student_answered_correctly = false;
        }
        // update tables upon response
        $response = new MResponse($c_start_time, $c_end_time, $user_id, $c_problem_id, $c_answer, $c_student_answered_correctly, $current_topic_id);
        $response->update_responses();
        // $response->update_stats();
        $response->update_problems();
        $response->update_12m_prob_ans();
        // caliper event
        $caliper->assessmentItemComplete($response, $current_problem);
        header('Location:problems.php?ps=1&pr='.$c_problem_id.'&an='.$c_answer.'&st='.$c_start_time.'&et='.$c_end_time.'&tp='.$current_topic_id);
    }
} elseif (isset($_POST['next'])) {
    // handle next event, which may have clarity rating
    include 'ratings.php';
    header('Location:problems.php');
} elseif (isset($_POST['retry'])) {
    $user_id = $usrmgr->m_user->id;
    $c_problem_id = $_POST['retry'];
    $c_topic_id = $_POST['topic'];
    header('Location:problems.php?pretry=1&pr='.$c_problem_id.'&tp='.$c_topic_id);
} elseif (isset($_GET['ps'])) {
    $c_problem_id = $_GET['pr'];
    $c_answer = $_GET['an'];
    $c_start_time = intval($_GET['st']);
    $c_end_time = intval($_GET['et']);
    $c_topic_id = intval($_GET['tp']);
} elseif (isset($_GET['pretry'])) {
    $c_problem_id = $_GET['pr'];
    $c_topic_id = intval($_GET['tp']);
}

# translate ids to list of topic objects
$selected_topics_list_id = $usrmgr->m_user->selected_topics_list;
$num_topics = count($selected_topics_list_id);
// $selected_topics_list_id might just be a single topic as a string
if (!is_array($selected_topics_list_id)) {
    $selected_topics_list_id = MakeArray($selected_topics_list_id);
}
for ($i = 0; $i < $num_topics; $i++) {
    $one_topic = MTopic::get_topic_by_id($selected_topics_list_id[$i]);
    if ($usrmgr->m_user->staff == 1 || $one_topic->m_inactive == 0) {
        $selected_topics_list[] = $one_topic;
    }
}
$num_topics = count($selected_topics_list);

$picker = new MProblemPicker();
if ($c_problem_id == null || $c_problem_id < 1) {
    # use newly picked problem
    $picked_problem_id = $picker->m_problem_id;
    $topic = $picker->m_topic_id;
} else {
    # use problem student is already working on
    $picked_problem_id = $c_problem_id;
    $topic = $c_topic_id;
}
$picked_problem = MProblem::find($picked_problem_id);

// caliper event.
if ((empty($_GET) && empty($_POST)) || (isset($_GET['pretry']) && empty($_POST))) {
    // when picked_problem_id is 0 the student has answered all the questions correctly,
    // which denotes the end of the problem set the student has chosen.
    if ($picked_problem_id === 0) {
        $caliper->assessmentSubmit();
    } else {
        // we send an assessmentItem#start event when a new problem is displayed to the user;
        // skip is a use case where a new problem is shown, and so is retrying a problem.
        $caliper->assessmentItemStart($picked_problem, $topic);
    }
}

///////////////////////////////////////////////////////////////////////////
// page construction
///////////////////////////////////////////////////////////////////////////
$head = new CHeadCSSJavascript("Problems", array(), array());
$tab_nav = new VTabNav(new MTabNav('Problems'));

# decide if problem or histogram showing and get the correct view
if ($c_answer !== Null) {
    $content = new VProblems_submitted($picked_problem, $picker->m_problem_counts_by_topic, $c_answer, $c_end_time - $c_start_time, $topic);
} elseif ($num_topics > 0) {
    $content = new VProblems($picked_problem, $picker->m_problem_counts_by_topic, $topic);
} else {
    $content = new VProblems_no_topics();
}
if ($picked_problem_id == Null) {
    $content = new Vproblems_no_problems();
}
$page = new VPageTabs($head, $tab_nav, $content);

# deliver the html
echo $page->Deliver();
?>
The current political situation and the resultant transport and visa limitations dictate moving our meeting, IAU Symposium 365, to another country. It will be held on 21–25 August 2023 in Yerevan, Armenia. The DoubleTree by Hilton Yerevan City Centre Hotel has tentatively been chosen as the venue of the meeting. The new local organizing committee is based at the Byurakan Astrophysical Observatory, which has extensive experience in conference management. The symposium will bring together solar and stellar physicists investigating the dynamics of convection zones and lower atmospheres. It will be dedicated to observational and theoretical aspects of the hydrodynamics and magnetohydrodynamics, both global and local, of the solar and stellar convection zones and lower atmospheres, with the inclusion of numerical simulations as a particular branch of theoretical research. As planned originally, the symposium will cover the following basic topics:
1. Convection (solar – on different scales – and stellar)
2. Differential rotation and meridional circulation (both solar and stellar)
3. Global dynamo (in the Sun and stars; solar-cycle observed patterns and predictions)
4. Helioseismology and asteroseismology (both global and local; probing subsurface structure and dynamics)
5. Local processes of magnetic-flux emergence, sunspot and starspot formation (observed patterns of sunspot evolution, small-scale motions, local dynamo)
The preliminary list of invited speakers can be found here. A hybrid format of the meeting is planned. However, we strongly urge participants to make every effort to be present in person. Not only is this important for the efficiency of personal contacts, but it also ensures a full and correct transfer of the content of your talk(s) to the audience, without interruptions and distortions. If you nevertheless participate remotely, please send each of your talks beforehand as a video (mp4, avi, etc.)
file to the LOC and be ready to answer questions at the end of its playback. Armenia has a visa regime favorable for visitors from a great many countries. In many cases, visas are either completely unneeded or can be obtained via a very simple procedure. For detailed information on the meeting, please visit the conference website: http://iaus365.sinp.msu.ru. The content of the site will be updated systematically. The calendar of all important dates can be found here. Details for abstract submission can be found here. The IAU has (limited) funds to support qualified scientists to whom only limited means of support are available. To apply for an IAU travel grant, please download the grant application form, fill it in either on a computer or (very legibly!) by hand, sign it, and e-mail it in scanned form to the SOC address given below. Contacts: email@example.com (SOC)
leaving facebook, use python to inform your friends on how to contact you

As I proceed with my MSc course in Software Engineering, and my studies begin to focus on the field of productivity, I'm beginning to think about my own productivity. I already posted about how to detox from Facebook without losing its usefulness, but it seems that was not enough. Moreover, Facebook's new direction with its Frictionless Sharing raises many issues related to privacy. The new Timeline profiles introduce so many stupid ways of announcing stupid events in which nobody is interested (you will be able to announce to your friends that you broke your leg, say when and where it happened and attach a picture of your broken leg - yay) and will fill your feed with tons of uninteresting posts. For this and so many other reasons, I am leaving. But could a software developer leave Facebook without a personal epic gesture? Of course not. I developed a tiny Python program that I used to post a message on each of my Facebook "friends'" walls. In this post, localized in Italian if they have Facebook set to Italian (in English otherwise), I list the many other ways to contact me. This program, called FacebookGreeter, is an open-source project released on my website. It is also my first project released under the WTFPL License (Do What The Fuck You Want To Public License). Feel free to download it and study how authenticating a Python program with Facebook works, using the official Python SDK. A wall post looks like this one:

Dear John Doe, because of the current and future ways FB will handle our privacy, I decided to unsubscribe. I hope to stay in touch with you: e-mail/Google Talk: d AT danielgraziotin DOT it mobile: +393400788910 Skype: dgraziotin Twitter: @dgraziotin Best Regards (Message sent automatically)

I am currently waiting a couple of days to be sure that all my ~450 Facebook "friends" read the wall post I sent. Then I will completely delete my Facebook profile.
This is different from deactivating the account. Facebook will then DELETE all of the content related to me. Meanwhile, as time goes by, I am realizing how many stupid things people do on Facebook - including myself. But we fear leaving Facebook. We fear losing contact with people, being excluded from events. Just think about this: did you have friends before opening a Facebook account? Do I really have 450 friends? No. Now they all have the opportunity to contact me. If they "forget" me, then screw them, they are not really friends. By the way, I am sure that nothing will change. I will continue to hang out with the (surely far fewer than 450) people I used to. Because they care about me. And I will never again waste precious hours on that website. Do you know how I feel now? I feel like I've just left an obsessive, intrusive girlfriend. I feel free. Look at this beautiful YouTube video: What is interesting about this event is that more than 10 Facebook friends are now leaving FB, too. Many others contacted me in order to know more about my reasons for the farewell. Please note that I did not ask anybody to leave Facebook. I do not use a commenting system anymore, but I would be glad to read your feedback. Feel free to contact me.
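FacebookGreeter itself is the authoritative source; the snippet below is only a minimal sketch of the idea, assuming the old official Python SDK (`facebook-sdk`) and an access token with wall-posting permission. The helper names, the `locale` field on friend objects, and the Italian wording are illustrative assumptions.

```python
from typing import Optional

CONTACTS = ("e-mail/Google Talk: d AT danielgraziotin DOT it\n"
            "Skype: dgraziotin\n"
            "Twitter: @dgraziotin")

def compose_message(name: str, locale: Optional[str] = None) -> str:
    """Build the farewell wall post for one friend (it_* -> Italian greeting)."""
    if locale and locale.startswith("it"):
        greeting = f"Caro/a {name}, ho deciso di lasciare Facebook."
    else:
        greeting = f"Dear {name}, I decided to unsubscribe from Facebook."
    return (f"{greeting}\nI hope to stay in touch with you:\n"
            f"{CONTACTS}\n(Message sent automatically)")

def greet_all(access_token: str) -> None:
    """Post the farewell on every friend's wall via the Graph API."""
    import facebook  # that era's official SDK; calls reflect its old API
    graph = facebook.GraphAPI(access_token)
    for friend in graph.get_connections("me", "friends")["data"]:
        msg = compose_message(friend["name"], friend.get("locale"))
        graph.put_wall_post(msg, profile_id=friend["id"])
```

`greet_all` requires a live token and the long-deprecated wall-post permission, so it is shown for the overall shape rather than as something you can still run against today's Graph API.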
Can I play Fallout 3 without any background to the Fallout series? I have heard that Fallout 3 is awesome. I want to play it, but I am wondering how much background I will need. I played Daggerfall, Morrowind and Oblivion (also by Bethesda) and I think I could have played Oblivion without having played the other two, but I don't know if they did the same with the Fallout series. See also: http://gaming.stackexchange.com/questions/798/are-the-expansions-on-fallout-3-necessary-for-a-full-grasp-on-the-story -1 incredibly subjective. Just a note, Fallout 1 and 2 were NOT developed by Bethesda. I consider myself a hardcore fan of the Fallout series, but for me F2 was the last true Fallout game. IMO the latest additions to the series are good, BUT lost a lot (in terms of atmosphere and replayability) due to modern technologies/trends in the gaming industry. As I pointed out here, game developers can't assume that everybody has played the previous games, so apart from superficial things and perhaps a few bonuses, sequels are usually pretty much stand-alone. There will be exceptions. As far as I know the stories don't intertwine too much; it just has the same setting: a post-apocalyptic environment with a touch of the 50s. You will encounter some items that are well known in the series, e.g. the bobbleheads, but all in all you don't have much to worry about. Personally I've played Fallout 3 without playing the other games and didn't feel the least bit out of the loop. +1 I played Fallout 3 without playing any others. I loved it and there was nothing I felt I was missing out on. Fallout 3 is set a few decades after Fallout 2, and about 2500 miles away, in the Washington DC region, whereas the original two Fallouts were set on the west coast.
While there are a few callbacks and references, you need not have any experience with them to enjoy the game - in fact, your character, by definition, is incapable of having any knowledge of the various factions and characters being referred to, having grown up in an ostensibly sealed environment with no contact with the outside world. What about New Vegas? New Vegas is unconnected to the events of Fallout 3, being set back out west, and a few years later. Because it is being developed by the team from Fallout 1 and 2, and set near those games, I would expect substantially more callouts to the previous content, but the time gap ensures that it should still remain accessible to a newcomer, in spite of the fairly large number of returning classic Fallout characters and factions. I've been playing Fallout since the first one came out. Now I have all of the games. As said previously, it's the same setting, but a whole new story in each of the Fallout games. For example, in Fallout 3 you start as a newborn in a Vault. You grow up there, and then the main story begins. In New Vegas, you start in the role of a grown adult, a completely new character. Then a new story begins. The beginning of New Vegas's story is in the clip at the start of the game. ;)
Do you need legacy MBR or GPT? GPT. If this gives you problems, you can fall back to the old MBR. However, don't complain. Note that GPT is generally required if your drive is over 2TB. Don't see what you need? Check out the full list. Email me if I haven't discussed it, and I'll add it to the appropriate list. *NOTE: I will try to add to the list only those questions that I know I can solve.

Is SSD MBR or GPT? There is no direct relationship between using an SSD and choosing MBR or GPT. That being said, it's best to use GPT by default on any UEFI-based computer. If you are using an SSD with a BIOS-based computer and want to be able to boot from it, you should choose MBR.

Should my SSD be MBR or GPT? There is no connection between using an SSD and choosing MBR or GPT. That being said, you're better off using GPT, as the newer standard, on any UEFI-based computer. If you're using an SSD with a BIOS-based computer and want to boot from it, MBR is your only choice.

Should I initialize my SSD as MBR or GPT? You have to decide whether to initialize any disk you are using to MBR (Master Boot Record) or GPT (GUID Partition Table) first. However, as time passes, MBR may no longer be able to meet the capacity and performance requirements of your SSD or your storage resources.

Is SSD a GPT or MBR? Most PCs use the GUID Partition Table (GPT) disk type for HDDs and SSDs. GPT is more reliable and is required for volumes larger than 2TB. The old Master Boot Record (MBR) disk type is used by 32-bit PCs, older PCs, and removable drives such as memory cards.

How do I initialize a new SSD as MBR or GPT? Right-click on the unknown drive that is your SSD and select "Initialize Disk". In the window that opens, select either MBR or GPT for the SSD and click "OK".

Should my boot SSD be MBR or GPT? You're better off using GPT as the default on any UEFI-based computer.
If you are using an SSD with a BIOS-based computer and want to boot from it, MBR is your only choice. Because SSDs tend to be much smaller than 2TB, the MBR capacity limit plays little to no role.

Should I set my SSD to MBR or GPT?
- In short: MBR can provide four primary partitions; GPT can support 128.
- In short: MBR can support up to 2TB; GPT up to 9.4 ZB.
- Summary: GPT is more fault-tolerant.
- Summary: MBR is usually good for running older systems; GPT is better suited to modern computers.
- In short: use GPT.

Vijay is a tech writer with years of experience in the Windows world. He's seen it all - from simple problems to catastrophic system failures. He loves nothing more than helping people fix their PCs, and he's always happy to share his wisdom with anyone who needs it. When Vijay isn't fixing Windows problems, he likes to spend time with his wife and two young children. He also enjoys reading, playing cricket, and watching Bollywood movies.
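As a technical footnote to the MBR-vs-GPT question above: you can also tell the two apart programmatically, because a GPT disk carries a "protective MBR" whose first partition entry has type 0xEE. A minimal Python sketch (on a real system you would read the first 512-byte sector from the raw disk device, which needs admin rights):

```python
def partition_style(sector0: bytes) -> str:
    """Classify a disk's first 512-byte sector as 'GPT', 'MBR' or 'unknown'.

    The classic MBR partition table starts at offset 0x1BE; the type byte
    sits 4 bytes into each 16-byte entry (so 0x1C2 for the first entry),
    and a valid MBR ends with the 0x55 0xAA boot signature.
    """
    if len(sector0) < 512 or sector0[510:512] != b"\x55\xaa":
        return "unknown"  # no valid boot signature at all
    if sector0[0x1C2] == 0xEE:
        return "GPT"      # protective MBR -> the real table is GPT
    return "MBR"
```

This only inspects the protective MBR; a fuller check would also validate the "EFI PART" signature in the GPT header at sector 1.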
How to filter user input: An overview

If you make web sites, online apps, or even just your own personal blog, chances are that you've heard the phrase "Don't trust user input!" This is one of the key security concepts on the Internet, and the failure of web developers to adhere to this principle is the number one reason sites get hacked, users get infected with malware, and web pages get defaced. There are many types of user input, some of which may not be all that obvious. There are also a lot of books and articles that focus on one particular aspect, such as cross-site scripting, cross-site request forgery, or SQL injections. Here, we're going to go over each type of user input and the basic security checks that you should do when you create any type of online piece of code. By following these simple concepts from the start of your development process all the way to the end, you can ensure that the result will be much safer from potential threats.

Cross-site scripting (XSS)

Cross-site scripting, referred to as XSS, is a fairly popular way for trolls and script kiddies to exploit a vulnerable web app, whether it be an online forum, a commenting system, or any site which accepts user input. XSS by itself doesn't exploit the actual server; instead it exploits other users of the site. Because the server isn't affected, a lot of developers used to consider this a low-priority issue. Unfortunately, as web apps have become more popular and more complex, the things that can be done with XSS have become more serious. To address the basic problem, look for four character sequences: <, >, %3C and %3E. The last two are the URL-encoded versions of the first two; be sure to check both upper and lower cases. By replacing them with the HTML entities &lt; and &gt;, you tell the browser to print the actual characters instead of interpreting what's inside as valid HTML.
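That filter can be sketched in a few lines of Python (the function name is illustrative; the standard library's `html.escape` also handles &, " and ', which you want for attribute contexts):

```python
from html import escape
from urllib.parse import unquote

def sanitize_for_html(user_input: str) -> str:
    """Neutralize markup in user input before echoing it into a page."""
    # Decode %3C / %3E style URL encoding first, so encoded tags
    # can't slip past the filter, then escape <, >, &, " and '.
    decoded = unquote(user_input)
    return escape(decoded, quote=True)

print(sanitize_for_html("%3Cscript%3Ealert('xss')%3C/script%3E"))
```

For output contexts other than HTML body text (JavaScript strings, URLs, CSS), different escaping rules apply, so escape for the context you are writing into.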
Cross-site request forgeries (CSRF)

CSRF is a fairly new term that describes a fairly ingenious way to exploit persistent logins. The idea is for an attacker to craft a malicious web site that will exploit another site where you may already be logged in. For example, let's say you go to your banking site at www.bank.com and stay logged in. Then, you browse the web and end up at a malicious page designed to attempt a CSRF on that bank's site. What the page will do is attempt to redirect your browser to a specific query on the bank's site. Your browser will try to load that resource, unknowingly triggering an action on the site, which will be sent with your own cookies for the bank. If you're still logged in, then that action will be valid. For example, the site could have an img tag which, instead of containing the URL to an actual image, will try to load something like http://www.bank.com/transfer?to=attacker&amount=1000.

SQL injections

When we hear about web sites being hacked, the number one method that bad guys seem to use is SQL injections. This is a very powerful way to gain access to unauthorized information or to inject your own code into a site. Yet, it's a very easy thing to protect yourself against. Let's first see how a SQL injection works. Basically, all modern sites work off of a database. This means that all the content on the backend is stored in a separate place, usually a database server running MySQL, Microsoft SQL, Oracle, or any of the other database servers out there. Your site communicates with that database by connecting to it and then sending SQL commands. This is a very simple and efficient way to store and retrieve data, but it does have one huge drawback. By using those SQL commands, you can do anything you want with the data, very quickly. This means it's very important that only your scripts can send these commands. If a user can, somehow, make your script send an arbitrary query to the database, that's an SQL injection, and then all bets are off. These injections are done by simply adding SQL commands to an input field.
As your script adds that input to the database, or checks the database for confirmation, the user can escape out of the initial command and run their own queries. For example, let's say you have a login script that accepts a user name. Then, the script checks if the name is valid like this: "SELECT * FROM users WHERE name='" . $input_name . "';"; This would be a fairly common way to check a database from a PHP script. The problem is that it's incredibly insecure. All the user has to do is enter input along the lines of: ' OR 1=1 -- This will escape the query thanks to the first ' character, and then check whether or not 1 = 1. Since one always equals one, the database will always say that this user is valid, and log the malicious user in. Now you may be tempted to add sanity checks and, if you're thorough, that may work. But the problem is that there are many different ways to inject SQL code, and you can't test for every possible combination. Obfuscated variants, for example using SQL comments or encoded characters in place of the quotes and spaces, are equally valid injections that you would be hard pressed to detect. So instead, the way to defeat SQL injections is to use prepared statements. When you add user input to a database query, you should always do so in a prepared statement, like this: $stmt = $db->prepare("SELECT * FROM users WHERE name = ?;"); $stmt->execute([$input_name]); This is a PHP example that does the same thing as the statement above but which doesn't allow SQL injections to happen: the placeholder value is sent separately from the SQL text, so it is never interpreted as part of the command.
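The same parameterized-query idea in a runnable form, using Python's sqlite3 module in place of PHP/PDO (the table and names are illustrative):

```python
import sqlite3

# In-memory database standing in for a real user table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT)")
db.execute("INSERT INTO users VALUES ('alice')")

def user_exists(name: str) -> bool:
    # The ? placeholder sends the value separately from the SQL text,
    # so quotes in the input can never terminate the statement.
    row = db.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchone()
    return row is not None

print(user_exists("alice"))          # True
print(user_exists("' OR 1=1 --"))    # False: treated as a literal string
```

The classic injection payload is simply compared against the `name` column as an ordinary string, so the "always true" trick never reaches the SQL parser.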
Returning to the banking example: what happens if the user sends a negative value instead? If the script isn't checking for that, it may end up adding money to the user's account. Or, if your web app uses integer values for some type of ID system, what happens if the user puts in a value too large to fit in 64 bits and tries to overflow your integer? Or what if they send text instead of a number? This brings us to the basic concept of sanitizing input types. When you're making a web app, or writing any type of code that accepts user input, it's always useful to make sure the resulting values are close to what you expect. There are many checks you can do here. For example, if your ID numbers are always 8 characters long, then take only the first 8 characters from the input string. If you're expecting a positive number, then convert your input value to a number, and discard any negative values. If you're looking for a text-based comment, then use a regular-expression function to remove any character that isn't a letter, number, space or punctuation. These little checks will thwart many attempts. Finally, there are various other things you should consider when it comes to user input. One particularly risky thing you can do with a web app is to accept files from users. By allowing uploads, you're opening your server to many types of issues, but sometimes you may want users to be able to upload their own files. Maybe you want them to have their own profile pictures, or perhaps you want them to share videos on your site. Either way, special care should be taken here. The first thing you need to do is enforce a strict size limit; you don't want someone to upload huge files and fill up your server. Then, you need to make sure they upload only specific types of files. If you allow people to upload images to display on their profiles, someone could instead upload malware, and serve it to every browser that views the user's profile.
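The type-sanitizing checks described above (fixed-length IDs, positive amounts, character whitelists for comments) can be sketched like this in Python; the function names and exact rules are illustrative:

```python
import re

def clean_id(raw: str) -> str:
    """IDs are always 8 characters: keep only the first 8."""
    return raw[:8]

def clean_amount(raw: str) -> int:
    """Expect a positive integer; reject text and negative values."""
    try:
        value = int(raw)
    except ValueError:
        return 0
    return value if value > 0 else 0

def clean_comment(raw: str) -> str:
    """Keep only letters, digits, spaces and basic punctuation."""
    return re.sub(r"[^A-Za-z0-9 .,!?'\-]", "", raw)

print(clean_id("ABCD1234extra"))    # ABCD1234
print(clean_amount("-500"))         # 0: a negative withdrawal is discarded
print(clean_comment("hi <b>there</b>"))  # angle brackets stripped
```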
The way to protect against malicious uploads is to check file types after upload: not just the file extension, but the contents of the file itself. For example, in PHP, you may be tempted to check $_FILES["file"]["type"], but that's user-submitted information, and it can be faked. Instead, use the finfo functions, which inspect the file's actual contents. Another thing you should do is place all uploads in a specific folder on your server, and lock that folder down. For example, if you use Apache, you can add an .htaccess file with the following line, which will prevent scripts from being executed: AddHandler cgi-script .php .pl .jsp .asp .sh .cgi Of course, make sure that users can't upload a file with that same name, or with the name of another crucial file in that folder. In fact, .htaccess files (or the Web.config equivalent for IIS) can host a lot of useful security directives that you should take a look at. You can make sure your code is always executed as scripts, and never sent as plain HTML, in case you ever mistype something and a file is no longer recognized as a script. You can also deny access to configuration or database files. And you can add restrictions based on where the user comes from; for instance, rejecting requests that arrive from another site for scripts that should only be called from your own pages. Finally, web application firewalls and programming frameworks have become very popular in recent years and can help you immensely with all of these issues. By standardizing all of your checks inside a framework, you don't have to worry so much while you're actually writing code, and that's a great help. Take a look around the web for the available frameworks and WAFs that could work with your development environment. In the end, you need to make sure you protect both yourself and your users.
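The "check the file itself, not the extension" advice can be sketched by looking at a file's leading "magic" bytes, which is the same kind of content inspection PHP's finfo functions perform. The signatures below are the standard ones for PNG, JPEG and GIF; the function name is illustrative:

```python
from typing import Optional

# Well-known leading byte signatures for common image formats.
IMAGE_SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
    b"GIF87a": "gif",
    b"GIF89a": "gif",
}

def detect_image_type(data: bytes) -> Optional[str]:
    """Return the detected image type, or None for anything else."""
    for magic, kind in IMAGE_SIGNATURES.items():
        if data.startswith(magic):
            return kind
    return None  # unknown type: reject the upload

# A file named "avatar.png" that actually contains PHP code is rejected:
print(detect_image_type(b"<?php echo 'not an image'; ?>"))  # None
print(detect_image_type(b"\x89PNG\r\n\x1a\n" + b"rest of file"))  # png
```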
Why am I not starting in My Career NBA 2K14 PS4 So a few games ago I got promoted to sixth man. Now it's been like 11 games, I play 27 minutes off the bench, but still don't start. I get a great performance rating every time, put up 30 points a game, 5th best PG in the NBA. I play for the Philadelphia 76ers, and I have a better overall than the guy who is starting over me. So my question is: why am I not starting yet? I've played like 45 games with them now... I'm even the top scorer on the team by far. As of yesterday I'm playing 28 minutes; I've played 15 games at 28 minutes and still don't start. AND now I'm the top scorer in the NBA. 70 games into the season, still the same team, still haven't started. The coach keeps saying he will add more minutes and to keep up the good work, but doesn't talk about starting me; he's said "keep up the good work" like 10 times while I get 28 minutes off the bench. And 28 is the max, he can't add more minutes, so why isn't he starting me? I think this is a glitch. I ran into this issue too, but in 2K13. I was put in as a starter at some point, but after an injury I came off the bench every game, even though I got put in within seconds of the game starting and would play every minute after that. No matter what I did I wasn't chosen as a starter. Generally, most people who post on the forums say that they start somewhere between halfway through and a full season. By your second year, you should be in the starting 5. 2K Sports has said that they plan to release a patch that allows your player to start earlier. No word yet on when this patch will be released. Will I just start then, or will I need to work hard like I do now? As long as you are consistent with your numbers (points, rebounds, assists, etc.) you will start sometime between the halfway point and the end of the season. This is according to most forum posts and other people's experiences with the game. The exact algorithm used is unknown, so it's hard to pinpoint exactly.
Is averaging 40 points, 2 assists, and 2 rebounds a game good enough to start? I would work on your other stats. Try to be more well-rounded: 20-25 points a game, 10 rebounds, 10 assists. Getting triple-doubles is way more impressive than 40 points a game.
#TYpginRj-2o #JugnuKids #NurseryRhymes #and #BestBabySongs ▶Title: Jingle Bells Jingle Bells | Christmas Song & Songs for Kids | Nursery Rhymes by Jugnu Kids. ▶▶Duration: 28:02. ▶▶▶Published at: 2019-12-23 19:10:46. ▶▶▶▶Source: Video shared on YouTube from channel ➡ Jugnu Kids – Nursery Rhymes and Best Kids Songs. Hi Kids! Watch this Jingle Bells Jingle Bells nursery rhymes collection by Jugnu Kids! We hope you enjoy watching this animation as much as we did making it for you! This kids' song collection is great for learning the alphabet, numbers, shapes, colors and lots more. Happy learning! A new compilation video, including one of our most recent songs: 00:05 – Jingle Bells, Christmas song 02:31 – Halloween song for kids 03:45 – Elephant song for kids See more nursery rhymes in the playlist + more Nursery Rhymes & Kids Songs – ABCs and 123s Jugnu Kids Nursery Rhymes and Kids Songs | The Wheels on the Bus Best Job & Occupation Songs for Kids | What Do You Want to Be | Kids Pretend Play | Nursery Rhymes Best Nursery Rhymes and Kids Songs Collection Nursery Rhymes for Kids Nursery rhymes – learning video collection for kids Nursery Rhymes and Kids Songs in 3D Jugnu Kids world, where kids can be happy and safe. At Jugnu Kids, we are dedicated to creating high-quality educational videos for kids aged 2-5, turning classic nursery rhymes, kids' songs and stories into 2D & 3D animations with singing and dancing lovable characters, which can help kids learn all about letters, numbers, shapes, colors, animals, and so much more!
Jugnu Kids is a one-stop solution for kids' education and learning, like a kindergarten, where parents find fun, high-quality educational material to teach and enjoy with their kids. Your kids will love our fun characters and vivid animated videos while watching ABC, Wheels on the Bus, Cars Show and daily-routine videos like the Bath Song, Sleeping Song, Brushing Song and so on. We create great learning material for kids, like Chu Chu TV. Jugnu Kids makes videos for – teaching good habits and responsibility to kids, teaching your kids good manners and conversation skills, and discovering the beautiful world. Nursery rhymes in English, Piosenki dla dzieci po angielsku, canciones en inglés para niños, เพลงภาษาอังกฤษสำหรับเด็ก, Comptines en anglais, Kinderlieder in Englisch, Lagu-lagu anak berbahasa Inggeris, Musik Untuk Anak, Engelse kinderliedjes, Músicas em inglês para crianças, Gyerekzene, barnvisorna på engelska, 英文兒歌, Písničky v angličtině, أناشيد أطفال باللغة الإنجليزية, अंग्रेजी में नर्सरी कविताएं, Barnerim på engelsk, Canzoni per bambini in inglese
Day 1 | Day 2 The New Alphabet School is a collaborative format for artistic, curatorial, poetic and activist research practices. Over the course of two years and eleven editions, the School will open up a space for research that goes beyond academic and disciplinary boundaries. Workshop participants become part of the New Alphabet School and are invited to contribute to the programming of all subsequent editions. For the edition #Coding, two workshops will investigate the coded and algorithmic knowledge inherent in networking structures and data sets, and explore the possibilities of re-mapping or un-training existing data-body relations. In association with The Common Room The Common Room Foundation 287-288 Dhan Mill Compound, 100 ft. road 110074 Chattarpur, New Delhi 9.30 am Welcome 10–10.30 am Introduction 10.30 am–6 pm Workshops 6–7 pm Final Discussion Code, Layers, Infrastructures Current centralized, entangled corporeal and governmental internet infrastructures tend towards exploitation and surveillance. By experimenting with embodying and coding networks that are decentralized, anonymous, temporary, specific and/or collective, this workshop contributes to the digital commons. Together, participants will use the metaphor of the layer to question existing networked structures and collectively imagine and develop alternative tools of navigation. Convened by artists, coders and game designers, this workshop will look at individual layers of computational abstraction and small-scale networks within our everyday use of connected devices (such as smartphones, laptops, smart speakers and wearables). Starting from the everyday layers of computational networks, participants will go on computational walks in the neighborhood and map the interconnections and relations between small-scale and global-scale infrastructures and networks.
The Untraining Playground: Edit-a-thon on the Metabolism of Bodies and Data The workshop will engage participants in the exploration and re-editing of 3D point cloud data to create a collective un-training data set. Point clouds are sets of data points in space that can form 3D models of data structures for visualization and, in this workshop, imagination. In a ubiquitous testing field of mutual measuring and optimization, amplified by artificial intelligence as well as what some call artificial stupidity, this workshop looks into the raw data of the 3D meshes of everyday digital infrastructures. With the help of smartphones and software, sensors and senses, participants will scan not only common data, but also the complex interconnected meshes between agencies of molecular bodies, an Internet of Things, mobile phones and larger infrastructures. These scans will be captured and translated into commonly editable point clouds. What are the conditions of such a data set, and how do they differ from traditional training data sets for AI? Which speculative and queer utopias can be produced in these assemblages, and how sustainable are they? Participants will be provided beforehand with preparatory material such as 3D applications and selected texts. No in-depth technical knowledge is required to participate.
//Back-end
var cSharp = 0;
var rails = 0;
var android = 0;
var design = 0;
var firstQuestion = true;

var questions = {
  question1: {
    text: "What kinds of books do you prefer?",
    answer1: "Text books and technical manuals",
    answer2: "Do it yourself books",
    answer3: "Anything small, fun, and accessible",
    answer4: "Whatever is most visually appealing",
    nextQuestion: "question2"
  },
  question2: {
    text: "What's your ideal working environment?",
    answer1: "A big corporate office",
    answer2: "A small office space",
    answer3: "Home office",
    answer4: "Artist's studio",
    nextQuestion: "question3"
  },
  question3: {
    text: "What colors do you prefer?",
    answer1: "Black and white",
    answer2: "Red, red, red!",
    answer3: "Greeeeeen",
    answer4: "All colors are great, provided they're used correctly",
    nextQuestion: "question4"
  },
  question4: {
    text: "How do you know you've created something great?",
    answer1: "No one even knows it's there because it works so perfectly",
    answer2: "It's functional with minimal time commitment",
    answer3: "Everyone wants to play with it",
    answer4: "Beauty is in the eye of the beholder; what does great really mean anyway?",
    nextQuestion: "question5"
  },
  question5: {
    text: "Which company sounds the coolest?",
    answer1: "See Sharp: Tactical Eyewear",
    answer2: "Rubies on Rails: Gemstone Shipping and Logistics",
    answer3: "And Roid: Personal Trainers",
    answer4: "D-Zine: We were a zine before e-zines",
    nextQuestion: "suggestion"
  },
  suggestion: {
    cSharp: "C#",
    rails: "Ruby on Rails",
    android: "Java/Android",
    design: "Design"
  }
};

// Tally one answer toward the matching track.
function updateTotals(answer) {
  if (answer === "1") {
    cSharp++;
  } else if (answer === "2") {
    rails++;
  } else if (answer === "3") {
    android++;
  } else {
    design++;
  }
}

// Return the leading track key, or an array of two keys on a tie.
function checkTotals() {
  var totals = [cSharp, rails, android, design];
  var choices = ["cSharp", "rails", "android", "design"];
  var max = totals[0];
  var maxIndex = 0;
  var maxArray = [0];
  for (var i = 1; i < totals.length; i++) {
    if (totals[i] === max) {
      maxIndex = i;
      maxArray.push(maxIndex);
    } else if (totals[i] > max) {
      max = totals[i];
      maxIndex = i;
      maxArray = [maxIndex];
    }
  }
  if (maxArray.length === 2) {
    return [choices[maxArray[0]], choices[maxArray[1]]];
  } else {
    return choices[maxArray[0]];
  }
}

function giveSuggestion() {
  var totals = checkTotals();
  if (Array.isArray(totals)) {
    return "<a href=\"https://www.epicodus.com/portland/\">It sounds like you should check out our " + questions.suggestion[totals[0]] + " track, or our " + questions.suggestion[totals[1]] + " track here!</a>";
  } else {
    return "<a href=\"https://www.epicodus.com/portland/\">It sounds like you should check out our " + questions.suggestion[totals] + " track here!</a>";
  }
}

//Front-end
$(function() {
  var nextQuestion;

  function populateFields(questionKey) {
    var question = questions[questionKey];
    $("#question").text(question.text);
    $("#answerText1").text(question.answer1);
    $("#answerText2").text(question.answer2);
    $("#answerText3").text(question.answer3);
    $("#answerText4").text(question.answer4);
    $("input:radio[name=answer]:checked").removeAttr("checked");
  }

  function unCheck() {
    $("input:radio[name=answer]:checked")[0].checked = false;
  }

  $("input:radio").click(function() {
    $("button[type=submit]").removeAttr("disabled");
  });

  $("form").submit(function(event) {
    var answer = $("input:radio[name=answer]:checked").val();
    if (firstQuestion && answer === "3") {
      $("#question").text("Thanks for checking us out!");
      $(".panel-body form").remove();
      $(".refresh").show();
    } else if (firstQuestion) {
      populateFields("question1");
      $("#answer4").show();
      nextQuestion = "question2";
      firstQuestion = false;
      unCheck();
      $("button[type=submit]").attr("disabled", "disabled");
    } else if (nextQuestion === "suggestion") {
      updateTotals(answer);
      var finalSuggestion = giveSuggestion();
      $("form").remove();
      $("#question").text("Thanks for taking our quiz.");
      $("#suggestion").prepend("<h3>" + finalSuggestion + "</h3>");
      $("#suggestion").show();
    } else {
      updateTotals(answer);
      populateFields(nextQuestion);
      nextQuestion = questions[nextQuestion].nextQuestion;
      unCheck();
      $("button[type=submit]").attr("disabled", "disabled");
    }
    event.preventDefault();
  });

  $(".refresh").click(function() {
    location.reload();
  });
});
First, you need to know what you need to create: 1. You need to create a POP3 client. It's possible using PHP. 2. As you are planning to access a Gmail account, you also need to implement SSL communication so that you can actually read the data. In fact, you can create a generic POP3 client that will be able to access any POP server. One thing that struck me when reading your code is that you don't care about the actual length of the file: you request a read of 4095 bytes, but the file length could be more (in which case you ignore the extra data) or less (in which case you are looping over non-existent bytes). The $i < BUFSIZ test is bound to be wrong. You are right, but that would only cause a data error in the last block of the file; my problem is that the data changes throughout the entire file. Ignoring the last block, the other blocks shouldn't have any problem when converting the file twice, but there is still some problem. This code works well in C++, but in PHP I don't know what happens: when I access my data by index ([$i]) and change it with an XOR operation, it looks like my original data grows. How can I prevent clickjacking in PHP? I only know the concept of clickjacking. If I implement clickjacking protection in PHP from code found on the internet, how can I know it is actually in place? Kindly clarify the concept and show me demo clickjacking-prevention code in PHP. I am waiting for your positive and quick response. Currently I'm developing an application similar to a CRM. To generate reports in PDF, I'm using FPDF as my tool. The problem is that, to retrieve the data from the database, I create a temporary file, e.g. dldata.php, and call that page in my PDF viewer file. Anyone can access the data and easily view it (if they know the address). Using explode(), I retrieve the data and display it on my PDF viewer page. The dldata.php file is deleted (using unlink()) after the user loads the quotation form.
Below is the flow to create a quotation in PDF: fill in the form data (createquotation.php) -> insert the data into the database -> create a session and dldata.php, and write data into it from MySQL -> redirect the page to viewquotation.php (here I call dldata.php) -> click "print quotation" or go back to createquotation.php, and destroy dldata.php. Anyone who has experience using FPDF, please advise me on this issue. I'm afraid a conflict will arise when 2 or more users run the same process at the same time. TQ. I have no problem with either downloading or creating the PDF file. The problem is the method that I used. I used this code to call dldata.php after I create it: $pdf = new PDF(); // Column headings $header = array('Product ID', 'Product No', 'Product Description', 'Subtotal'); // Data loaded from the file, e.g. 1;#13,21321;Pencil;$1.00 $data = $pdf->LoadData('dldata.php'); After loading back into createquotation.php, I destroy the dldata.php file. I'm afraid it will be an issue if 2 or more users run this process at the same time: only the last data stored will appear. Can I just use dldata.php without deleting it, and call the data in a way that no user can access it by typing the URL? Please don't do this. Unless your post adds value to a question, don't go linking to external websites. If you do, your account will be banned and your IP will be tracked. Repeated violations will see your IP being permanently banned.
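The collision worry in the thread above (two users sharing one dldata.php) is usually solved by giving each request its own uniquely named temporary file. A minimal sketch in Python (the thread is PHP, where tempnam() or a session-id-based filename plays the same role; names are illustrative):

```python
import os
import tempfile

def write_quotation_data(rows):
    """Write one request's quotation rows to its own uniquely named
    temp file, so concurrent users can never overwrite each other."""
    fd, path = tempfile.mkstemp(prefix="dldata_", suffix=".txt")
    with os.fdopen(fd, "w") as f:
        f.write("\n".join(rows))
    return path  # pass this path to the PDF generator, then unlink it

a = write_quotation_data(["1;#13,21321;Pencil;$1.00"])
b = write_quotation_data(["2;#13,21322;Pen;$2.00"])
print(a != b)  # True: each request gets a distinct file
os.unlink(a)
os.unlink(b)
```

Keeping the data directory outside the web root (or denying HTTP access to it) also answers the "what if someone types the URL" concern.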
Decred is a unique crypto, and if you are looking to generate your own DCR coins, you need to know that there are several Decred pool options to choose from. Decred (DCR) is a blockchain-based cryptocurrency that employs decentralized governance: the network runs autonomously, with miners and coin holders voting on and enacting improvements. Decred was forked from the Bitcoin protocol, which means that it inherited several features from its parent coin. Decred launched its mainnet on February 8, 2016. The crypto differs from other cryptocurrencies when it comes to its mining protocol, as it uses both Proof of Work and Proof of Stake consensus algorithms. Proof of Work mining constantly increases in difficulty, and as blocks are generated more slowly through this protocol, more people have been looking towards Decred Proof of Stake pools (stake pools) to generate new coins. New blocks are created by Proof-of-Work miners every 5 minutes, releasing new Decred coins into the network. This block reward is divided into three parts: 60% goes to the PoW miner who discovered the block, 30% is distributed to the PoS voters on that block (6% to each of the 5 voters), and 10% is allocated to development funding. The block reward was initially set at 31.19582664 DCR, and every 6,144 blocks (approximately 21.33 days) the reward is reduced by a factor of 100/101. Stakers verify blocks mined through PoW to check that they are written correctly; for this work, they each receive 6% of the block reward. PoS voting rights are allocated through tickets that can be purchased inside the wallet, and the ticket price is calculated by an algorithm that keeps the difficulty of the protocol steady. This means that if more people participate in PoS voting, tickets will increase in price. Mining Pool Factors When looking at Decred pool options, there are several factors you should consider.
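The reward numbers above can be illustrated with a short calculation (an unofficial sketch of the schedule as described, not Decred's actual consensus code):

```python
# Initial reward shrinks by a factor of 100/101 every 6,144 blocks;
# each block's reward is split 60/30/10 between the PoW miner,
# the 5 PoS voters, and the development fund.
INITIAL_REWARD = 31.19582664   # DCR
REDUCTION_INTERVAL = 6144      # blocks, roughly every 21.33 days

def block_reward(height):
    reductions = height // REDUCTION_INTERVAL
    return INITIAL_REWARD * (100 / 101) ** reductions

def reward_split(height):
    total = block_reward(height)
    return {
        "pow_miner": total * 0.60,
        "pos_voters": total * 0.30,  # 6% of the total to each of 5 voters
        "development": total * 0.10,
    }

print(round(block_reward(0), 4))      # 31.1958
print(round(block_reward(6144), 4))   # after one reduction: 30.887
```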
Most of these criteria apply to almost all mining pools, but some pools are more favorable in certain aspects. - Fees: Most mining pools charge a fee between 1% and 2%, but you can also find some with higher or lower charges for their services. - Server location: It is recommended that you select a pool whose servers are physically close to your geographic location, as you will have better uptime and lower latency which, in turn, will produce better mining results. A majority of pool mining servers are located in Asia, Europe, or North America. - Supported algorithms: Although Decred was created by forking Bitcoin, its Proof-of-Work hash function is BLAKE-256 rather than Bitcoin's SHA-256, and ASIC miners can also be used for mining DCR. Decred can also be mined through PoS, so you can look for these types of pools as well. - Reputation: Smaller pools are usually more recommended in terms of decentralization, but you should check community opinion on whether the pool is trustworthy and pays out to its users. - Hash rate: Pools with higher hash rates have a higher chance of producing more favorable profits. - Payout system: Decred mining pools feature different payout systems, such as Maximum Pay Per Share, Pay Per Share, share-based, score-based, and many others. Choose the one which is most convenient for you. - Pool uptime: Pools that maintain 100% uptime are the most recommended. - Minimum payout: Lower payout thresholds allow you to withdraw your earnings much faster. Decred PoW Pools Below are some of the best Decred pool options for PoW mining: Suprnova is one of the biggest Decred mining pools, which also supports the mining of four dozen different coins in addition to DCR. Its servers are located worldwide. Suprnova rewards its miners through a Proportional (Prop) payout system and charges no fees. CoinMine is a unique pool, as it enables miners to mine anonymously through a special dashboard that does not require users to sign up on the platform.
The pool employs the PPLNS payment system to reward its users. It also does not charge any fees, and its servers are distributed across various regions all over the world. The minimum payout amount is set at 0.1 DCR, meaning you will only receive your earnings after you surpass this amount. Luxor Mining has recently launched its Decred pool, which offers a 1% PPS scheme. They currently have four servers that you can connect to: two in the US, one in Europe, and one in Asia. Luxor's global network of servers helps ensure optimal hash rates. F2Pool is a Chinese mining pool that is one of the oldest mining pools in operation. Out of all the Decred pool options, F2Pool supports the most cryptos, including BTC, LTC, ZEC, ETH, ETC, SC, DASH, XMR, XMC, XZC, and XVG. The pool also supports merged mining of Namecoin (NMC) and Syscoin (SYS) with BTC mining, and Dogecoin (DOGE) with LTC mining. Rewards are paid through PPS or PPS+, and fees range between 2% and 3.9%, which makes this pool one of the most expensive on our list. Once your earnings exceed 0.1 DCR, you will be able to withdraw your mined coins. Decred Stake Pools A stake pool delegates your voting power to a third party (the pool), and it is ideal for people who want to vote using Decred's Proof of Stake consensus but can't, because they either cannot keep a wallet constantly unlocked or do not have a stable Internet connection. To take part in Proof of Stake voting and the stake pools, you are required to have: - a wallet that supports stake pool voting; - sufficient DCR to buy a voting ticket. The two wallets that support stake pool voting are: - Decrediton – a GUI wallet for Windows/macOS/Linux; - dcrwallet – a CLI wallet for Windows/macOS/Linux. Keep in mind that the cost of a ticket can rise to the equivalent of $3,000. PoS Decred Pool Options Stakeminer – has more than 17% of the network votes, and its fees are set at 1%. Ubiqsmart – the pool has around 400 users, a 0.95% fee, and a 0.5% missed-vote rate.
Stakepool – has more than 7% of the network votes, with a VSP fee of 2% and a missed-vote probability of 0.05%. UltraPool – a Europe-based pool of over 500 users, with a 1% fee and a low missed-vote rate of 0.08%. Stakeynet – this new pool has under 200 users, a 1% fee, and a very low 0.05% missed-vote rate. With this, we conclude our article on the best Decred mining pool options for both Proof of Work and Proof of Stake mining. We hope that this has been of help to you and your DCR mining endeavors. Featured image: cryptobit.media
Today's post introduces Guillermo Del Pinal and Shannon Spaulding's paper, "Conceptual Centrality and Implicit Bias", published in Mind and Language. Guillermo Del Pinal is a Postdoctoral Research Fellow at the University of Michigan, Ann Arbor, and Leibniz-ZAS, Berlin. He works in the philosophy of language, mind and cognitive science. His main area of research is the relation between language and general cognition, focusing on topics such as the degree of modularity of language, and the role of natural logic within the language system. Shannon Spaulding is an Assistant Professor of Philosophy at Oklahoma State University. Her general philosophical interests are in the philosophy of mind, philosophical psychology, and the philosophy of science. The principal goal of her research is to construct a philosophically and empirically plausible account of social cognition. She also has research interests in imagination, pretense, and action theory. How are biases encoded in our representations of social categories? Current discussions of implicit bias overwhelmingly focus on salient associations between target features and representations of social categories. Salient associations track the prominence or availability of an association between a category (e.g., WOMAN) and a feature (e.g., +FAMILY ORIENTED). These are the sorts of associations probed by the Implicit Association Test and similar priming tasks. While these kinds of associations likely encode biases which affect judgment and behavior, we believe that other kinds of biases may affect social cognition in more dramatic ways. In Del Pinal and Spaulding (2018), we argue that some social biases are likely encoded in the dependency networks that are part of our representations of social categories. Dependency networks encode information about the inter-dependencies and degree of centrality of features in a conceptual representation. 
For example, +MADE FOR SITTING is a central feature of CHAIR because various other features of chairs depend on their being made for sitting. Importantly, that a feature is central for a category doesn't entail that it is also salient: +MADE FOR SITTING, although central, need not be more salient than other (less central) features such as +HAS A BACK or +FOUR LEGS. In our view, many socially relevant biases are likely encoded in dependency networks, and can't be picked out by measures that track merely salient or typical features. Why does this matter? Why should we care about how, precisely, biases are encoded? Here is why. Firstly, features that are merely salient are relatively unstable across contexts. This is a lesson from the extensive literature on conceptual combination. For example, +MANE is a salient feature of the representation LION. At the same time, +MANE isn't preserved under even trivial conceptual combinations involving LION: consider BABY LION, FEMALE LION, and TRIMMED LION. This basic lesson can be generalized. Suppose you are interacting with baby lions at a nursery. To guide your thoughts and actions there, you would likely use a subcategory of LION that corresponds to something like BABY LION, and automatically drop the feature +MANE. This instability of merely salient features sheds light on debates about why scores across various measures of implicit bias only weakly correlate (Nosek, Greenwald & Banaji 2007). Since most measures of implicit bias detect salient associations, their sensitivity to contextual manipulations may be due to the contextual sensitivity inherent to merely salient associations. Consider some examples. White American participants tend to display significant anti-Black implicit bias on race IATs, but Govan and Williams (2004) report that changing the subcategory of the exemplar affects subjects' results. Typically, race IATs use generic pictures or names of Black and White women or men.
Govan and Williams report that when the stimuli represent subcategories – famous and liked Black men, and famous and disliked White men – the bias can be eliminated. Similarly, Wittenbrink, Judd and Park (2001) report that White subjects exhibit less negativity in response to Black faces when they are presented in the background context of a church interior. In our terminology, the association between the category BLACK and negative features does not survive sub-categorization into more specific subclasses or individual members of the class. More generally, the unstable behavior of merely salient associations in conceptual combinations and subcategorization sheds light on the theoretical and practical limits of IATs and similar measures (Lai et al., 2014; Lai et al., 2016).

Secondly, centrally encoded features are relatively stable. In particular, the more central a feature is for a concept, the more likely it is to survive into sub-categorizations. The feature +BORN OF LION PARENTS is a central feature of LION, and hence will be more stable across composition and sub-categorization than the less central but more salient feature +MANE. To illustrate, note that our mane-less YOUNG LION, FEMALE LION, and TRIMMED LION all still inherit the feature +BORN OF LION PARENTS. Similarly, if we are at the lion nursery, now operating with the more specific representation BABY LION, we can agree that although we will not be looking out for manes, we would still assume that the baby lions were born in the usual way.

Clearly, then, knowing the degree of centrality of features that encode biases is crucial to determine the biases' wider role in social cognition. Assume f is central to our conception of WOMAN, and in particular more central to it than to our conception of MAN. Being central for WOMAN, we expect that f will be resilient through conceptual combinations and sub-categorizations.
For example, f will be more likely to survive into LAZY WOMAN and SUCCESSFUL WOMAN than into LAZY MAN and SUCCESSFUL MAN. Since centrality and saliency can dissociate, this could obtain even if measures of saliency determine that f is not more salient for WOMEN than for MEN. Still, if f is uniquely central for WOMEN, and is furthermore the locus of a bias, we expect that it will be resilient across various contexts, conceptual combinations, and subcategorizations.

The view that some socially relevant biases are encoded in dependency networks has wide-ranging implications. Empirically, we need to imaginatively adjust and refine our current measures of centrality to discover new social biases and, importantly, determine the degree of centrality of previously known ones (for an attempt, see Del Pinal, Madva, and Reuter (2017)). Philosophically, we need to reconsider foundational questions about the relation between biases, beliefs and individual responsibility, especially if we are convinced that some biases are centrally encoded, uniquely resilient, and unlikely to be picked out by measures like the IAT (see Del Pinal and Spaulding 2018, and
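The salience/centrality contrast discussed above can be made concrete with a toy computational model. To be clear, this sketch is not from Del Pinal and Spaulding's paper; it is merely one illustrative way to formalize the idea that a concept carries a dependency network, that a feature's centrality can be measured by how many other features depend on it, and that sub-categorization drops merely salient features while central ones survive:

```python
# Toy model of the salience vs. centrality distinction (illustrative only;
# not the formalism used by Del Pinal & Spaulding). A concept is a set of
# features plus a dependency map saying which features presuppose which.

LION = {
    "features": {"MANE", "BORN_OF_LION_PARENTS", "FOUR_LEGS", "CARNIVORE"},
    # depends_on[f] = the set of features that f presupposes
    "depends_on": {
        "MANE": set(),                          # salient, but nothing hangs on it
        "CARNIVORE": {"BORN_OF_LION_PARENTS"},
        "FOUR_LEGS": {"BORN_OF_LION_PARENTS"},
    },
}

def centrality(concept, feature):
    """A feature's centrality: how many other features depend on it."""
    return sum(feature in deps for deps in concept["depends_on"].values())

def subcategorize(concept, dropped):
    """Form a subcategory (e.g. BABY LION) by dropping some salient features.
    Any feature that depends on a dropped feature is dropped as well."""
    return {f for f in concept["features"]
            if f not in dropped
            and not (concept["depends_on"].get(f, set()) & dropped)}

baby_lion = subcategorize(LION, {"MANE"})
assert "BORN_OF_LION_PARENTS" in baby_lion   # central feature survives
assert "MANE" not in baby_lion               # merely salient feature does not
print(centrality(LION, "BORN_OF_LION_PARENTS"))  # 2 -- more central than MANE (0)
```

On this toy measure, +BORN OF LION PARENTS comes out more central than +MANE precisely because other features depend on it, which mirrors the paper's point that centrality, unlike salience, predicts stability under sub-categorization.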
One of those teaching moments came when the boys encountered a snake. The boys did everything right by standing back and summoning an adult to identify the snake. A couple of us even got some decent pictures. (On a humorous note, I think this is the first time that I've been in a group that encountered a snake and NOT been the most scared adult. Snakes and I don't get along at all.)

The first obvious concern was whether the snake was venomous. When no one else in my group knew how to identify it, I realized just how many folks have no information on how to tell if a reptile is venomous or not.

|The snake in question.|

With that in mind, here's a handy overview. There are only five types of venomous reptiles native to the USA: four are snakes, one is a lizard, and all of them are quite distinctive if you know what to look for.

|A coral snake. Note the distinctive colors. Image from nature.com|

- The odd man out among American venomous snakes, coral snakes have slender bodies and heads.
- Their distinctive coloring pattern of red, yellow, and black bands is mimicked by some nonvenomous species, but remembering "Yellow, Red, Stop!" helps in identification, as only the venomous snake has the yellow and red bands touching.
- Coral snakes are less aggressive than other species, with smaller fangs and a reclusive manner.
- Their venom is incredibly potent. A bite from a coral snake requires immediate medical attention and has a higher incidence of fatality than bites from other species. Coral snake antivenin is also in very short supply, and is no longer being produced.

|While rattlesnakes are extremely varied, they all have rattles in common. Image from sdsnake.com|

- They have vertical pupils, but the odds of noticing this on a live snake are slim.
- They have a blunt tail with bony rattles, which is the source of their name. They can shake these rattles very quickly when threatened, making a buzzing noise that sounds like nothing else.
- They have broad, triangular heads and thick, fat bodies.
- They also have sharp, pointed scales, in contrast to the smooth, sleek look of non-venomous species.

|A cottonmouth making a threat display. uga.edu|

- Also known as the water moccasin, the cottonmouth is native to the southeastern United States.
- It is semi-aquatic, commonly found along streams and rivers.
- It is a very strong swimmer, able to traverse large bodies of water.
- Like other vipers, it has a fat body and a broad head.
- They are particularly large snakes, with adults reaching and exceeding 3 feet in length, and some large examples weighing in at 10 pounds.
- They are very dark in color, approaching black in full-grown adults.
- They behave more aggressively than other snakes, and will eat virtually any animal, including small alligators.
- Bites to humans are frequent, although not often fatal.
- Cottonmouth venom breaks down tissues around the bite, sometimes requiring amputation. It is, however, readily treatable with antivenin.

|Image from agfc.com|

- Copperheads are the least venomous group of snakes.
- Their bite injects only small amounts of venom, and frequently injects none at all.
- They range throughout the southern and eastern US, with a preference for deciduous woodlands (areas with leafy trees).
- Like other venomous snakes, the copperhead has a broad, triangular head and a fat body.
- The copperhead is a master of camouflage, with dirt-colored skin and a tendency to freeze when threatened.
- This habit actually leads to bites, as the snake is frequently stepped on, or startled by a nearby step.

- Native to the desert Southwest, the Gila monster is the sole venomous lizard in the USA.
- They're also the largest native lizard, with some specimens reaching two feet long and weighing five pounds.
- They have pebbled scales and body coloration with bands of black alternating with orange-to-pink shades.
- While their venom can cause pain and swelling, it occurs in such small quantities that it is not considered lethal to healthy adult humans.
- Gila monsters are very slow-moving and easy to avoid.

Our snake at camp was a common rat snake, great for rodent control and completely harmless to us. The boys got some neat pictures and a fun brush with nature, and the snake slithered off to find some chow.

Discover Life has a great utility for snake identification in the field. Know your venomous snakes so you don't get bitten.
Hello, I help moderate a Discourse programming community at discourse.haskell.org. Today we got feedback from a user. I will summarise it:

- He is a long-time lurker.
- He joined Discourse to post an RFC for a library he was developing.
- He could not post more than two links in his message (level 0 trust).

There have been some suggestions on the discussion:

But I am not looking for hard and fast rules, more about your opinion and experience on the inevitable level-0 vs. spammer friction and, if your community changed anything in the onboarding process, what you tried. Thanks everyone and the Discourse team for this very useful piece of software!

I'm hoping that eventually Discourse AI can automatically detect spammy / problematic posts from new users, so that most tl0 and first-day limits could be relaxed. I do think the system could be presented more clearly to new users, but:

- Information overload at the beginning can be overwhelming and a barrier.
- Spammers are insidious, and putting limits up front gives a road map for skirting those limitations.

Personally, when I recognize someone or just plain have a sense that a new user is acting in good faith, I manually bump accounts to tl1. That doesn't scale, of course, but… it's something!

i think a large factor in this is the type of community the particular forum is hosting. for mine, i definitely need those limitations on TL0 and even TL1. i actually use those lower trust levels as manual settings for when certain users request / need to have their access limited. perhaps more granularity is required in the TL settings?

You could simply customise the error message to provide more info and a link to the TL doc.

i am actually unclear on what this refers to? why do you think there is any inevitable friction there, or are you speaking only of your own forum?

My understanding is he refers to the unjustified and simply annoying restrictions "good" users have to go through (= "level-0") vs.
the usefulness of this system against "less good" actors (= "spammer").

Indeed that is correct: the balance of "onboarding good users" vs. "keeping malicious ones at bay".

True, but those good users who are truly interested in joining the community will come back and overcome such resistance. Heavy filtering costs you bad actors (protecting your existing community) and some good contributors.

IMO the trust system is pretty good. Most of our users bump to trust level 1 after reading 10 topics and 15 minutes of read time. Have you experienced issues due to TL0 spammers?

My experience is that the auto-detection from Discourse, as well as Akismet, is almost flawless. I have basically 0 false positives. Those who still post are quickly flagged, and their messages are hidden by the community, so it's quite a non-issue. Might be otherwise in other communities, of course.

Would like to thank everyone for their feedback! I will bring all of your suggestions to our mod team and see what fits our community best.
Where can I find a Buddhist monastery that practices hard training? I wish to master my mind; one reason is I'm so mentally weak. Like anything you want to master, you have to work at it. Drill it into you. But practicing on my own hasn't worked. I need a far away place that will force me to master meditation. If I get off track they will make sure I get back on track. Where is a monastery like this where I, a foreigner, can join for a year?

Following are some pointers:

https://www.dhamma.org/en/index
http://www.internationalmeditationcentre.org/global/index.html
http://www.buddhanet.info/wbd/
https://www.paaukforestmonastery.org/
https://forestsangha.org/

With regard to the first two links, there are 10-day courses. In the case of the first link, you can progress to longer courses of up to 60 days. The third link lists many other monasteries and centres, some of which may allow you to stay for up to a year. The fourth link gives meditation centres in the Pa Auk tradition, which is the best if you are looking to master Jhana. The fifth link gives monasteries in the Thai Forest tradition.

Thank you my friend, truly. Also thanks for giving me options.

Any meditation monastery has sufficient discipline for training. There is no need to think as extreme as you are. Wat Pah Nanachat in Thailand, or any branch monastery.

I would encourage you to take a look at Mahamevnawa. They have many branches worldwide. Below is a link to help locate a branch: Worldwide Mahamevnawa Branches. If you are on the east coast of the USA, I may suggest Mahamevnawa Buddhist Meditation Center of New Jersey.

A lame horse, if whipped, will not run faster. Based on your approach it may take you a long time to form discipline. In fact, what does it have to do with a monastery? If you're having trouble concentrating, that could be adult ADHD. I recommend: 3 months of basic counseling and/or medication might help you prepare for a religious approach. It's cheaper than a flight, and you'll do introspective work towards the same goal.
In fact you can do both together. You've reached a point where you're aware of a limitation. This is valuable, so don't think this isn't progress. As an eager student, will you not listen to this reasonable advice?

It's not so much that I can't focus. I wish to master jhana meditation, and I have even held access concentration. It's that I can't bring myself to put in the work it takes to master this meditation. I will always find a reason to not do it, or quit. But I see your point. I just need a place where I have to worry about nothing else other than meditation.

Find somewhere uncomfortable and mostly private. A bedroom is no good and a garden isn't either. You'll be caught in the view.

I tried that. So much so that I JUST came back from a trip to Hawaii with the aim of meditation, and didn't even meditate for 30 minutes though I was there for a week. I can't force myself; I will find a way out.

Hawaii is somewhere nice. Some students can't meditate. One of my stubborn ones is like this. Walking meditation works better. The idea is: work while working, sleep while sleeping, and bathe while bathing. You sound like you could also use a koan. Would you like one?

In fact what does it have to do with a monastery?

Don't humans learn by copying (emulating) each other? Isn't it easier to do what others are doing, when and because they're doing it?

Chrisw, you have a valuable point. There are issues with mimicking. One is that someone can appear to have the point, and might bring forth some good doing so, but unless that individual knows it well enough there isn't going to be the bridge of understanding. It's like pointing at the moon and understanding the finger. If they aren't catching the point for the point, there's the gap. A way of looking at meditation and enlightenment is to shed your artificial reality. How do you learn what you don't know? This is good for knowledge. It isn't the same candle as wisdom. Teachers teach. A master doesn't.

But before the goal (i.e. "understanding"), let's just look at behaviour (dependent condition).
The OP wants to regulate his behaviour/practice. I think that non-independent humans are (from the earliest age) highly motivated by society. Perception of society's approval etc. is a very strong incentive, I imagine -- "Either you meditate when others do, or you'll have to leave the monastery!" Given that social/peer pressure is so strong, so inbred, I think people use/harness that human tendency, and willingly, in all fields of endeavour: athletes with athletes, students with students. Isn't that so?

In Zen there are mechanisms to support and deny this. The relationship of master and student is an individual one. Having a sense of belonging does not make one a good student. In general teaching this is a fairly usual bridge. It does create limitations for a student. Temples are designed for many different individuals. The idea is that one does not disturb others. My roshi said it like this: "One eye, one dimension. Two eyes, two dimensions; four eyes, four dimensions; and so on." The eyes provide some insight. Who gets the gold? Does this person run with others?
Hannah Forbes is currently a PhD Student of Engineering Design and a former Developer and Analyst at Sky & Now TV. She holds a Masters in Mechanical Engineering with a concentration in Manufacturing and Management. Read on to hear how she started her career in STEM, where she sees the industry going, and the three things she couldn't live without on Mars...

What drove you to pursue a career in engineering? Reflect on 1 - 2 crucial decisions/events that led you down this path.

I first considered a career in tech during my final year at University. Through my final year project, I worked with several tech start-ups and found these environments to be really collaborative and exciting. This led me to begin learning to code using free online resources. The first time I created a (very simple) website, I felt like a magician and discovered that coding was a new and exciting way for me to be creative. From that point onwards I looked forward to seeing what else I could create and what else I could learn.

What top tip would you give a woman looking to start a career in engineering?

Don't be afraid to Google, don't be afraid to ask questions, and try lots of projects! I find that the nature of the developer role constantly allows for the opportunity to pursue new projects and solve new problems. I also think the best way to learn is to find something you want to create and try to build it.

What are your favourite resources for building engineering skills, either for beginners or continued learning?

Codecademy is definitely a great gateway into coding because it really shows you that coding is accessible to anyone. For really developing your coding skills, however, I recommend just trying to create something. Go on GitHub and browse open source projects, or simply come up with a problem you want to solve and start googling away to see where to start. You'll be surprised at how quickly you'll learn!

Who is your STEM hero and why?
I think it has to be Simone Giertz, because she epitomises what being a maker and creator is about and really shows how easy, exciting and fun being an engineer can be.

How do you see the world of STEM developing in the next 5 years, and what are the key topics you hope to see the industry exploring?

The movement within STEM that most fascinates and excites me is the involvement of the arts (officially known as STEAM). It's the recognition that creativity has many outlets, and it's redefining engineering as a way to express ideas and solve problems outside of perhaps what would be regarded as stereotypical. I'd like to see this movement replicated in industry, and I think a more recognised relationship between engineering and the arts would overall make the industry more inclusive.

You've been selected to join the 2050 mission to Mars! What 3 things that you can't live without would you take?

A pack of cards for keeping the gang entertained. Some good quality tea (although maybe this would be deemed an essential? I hope so). And finally, Thinking Fast and Slow by Daniel Kahneman. It's an incredible book but a heavy read, so it should keep me busy.
Hyper-V Program Manager

I have seen a number of reviews and comments about the fact that while Hyper-V virtual machines appear to be quite fast once they are up and running, operating system installation seems to take quite a while. The reason for this is relatively easy to explain.

With Virtual Server and Virtual PC we only had emulated devices to use, and as a result we spent a lot of time optimizing and tweaking the performance of these emulated devices. When we implemented the emulated devices under Hyper-V we had to remove many of these optimizations due to the entirely different architecture of Hyper-V. We did not, however, spend much time re-optimizing the emulated devices on Hyper-V, because we had the new synthetic device architecture where we have focused our attention for performance tuning.

This means that Hyper-V emulated devices are slower than Virtual Server / Virtual PC emulated devices, but Hyper-V synthetic devices are much faster than Virtual Server / Virtual PC emulated devices. The catch here is that when you install an operating system you are almost always using our emulated devices, and you do not start using synthetic devices until after you have installed the operating system.

So in conclusion: what I would really like to see is a GUEST that is comparable in performance with the actual HOST. Why does the VM have to use a pre-defined/fixed set of hardware drivers? Why can't the VM use new/actual device drivers for the hardware available?

Been testing Windows 2003 and Windows 2008 as hosts with the synthetic devices and it works great :-) Fun to experiment with in my (IT Pro) environment, and I'm considering moving it into the test environment for software developers soon. Tested running Windows XP as a guest OS and it was horrible! :-( Simply useless, so back to VMware Server for the client OS. Are there any plans to make XP work as a client?

Slow-as-molasses OS installs are a definite barrier to adoption.
Might want to seriously consider that before calling any decision "final".

Xepol, could you contact me about your experience with OS installs in Hyper-V? I'd like to talk to you about it. I'm at firstname.lastname@example.org.

@MAJawed: because the guest's use of the hardware has to be shared with other guests and the host itself. The native driver for the device would expect to have full control of the hardware. The only way that Virtual PC/Virtual Server/Hyper-V can share the device is by intercepting the commands to the emulated or synthetic device and redirecting them to the host operating system, in either user- or kernel-mode APIs.

However, the drivers for the emulated devices are expecting to talk to real hardware. That means they're using instructions and physical memory areas that are banned from use by user-mode programs. Without a hypervisor, the processor raises a hardware exception which Windows turns into a software exception. Virtual PC or Virtual Server can then emulate the requested operation and dismiss the exception, allowing the guest driver to perform the next step. With hardware virtualization and a hypervisor, the processor instead calls the hypervisor directly, a much faster operation than the exception handling.

Installing the 'additions' drivers allows the communication between guest and host/hypervisor to be improved, but I believe the device is still emulated. The new 'synthetic' devices have a much closer match to the Windows API, so they effectively turn a guest I/O request directly into a host I/O request. This cuts out many of the steps where a high-level request is turned into lower-level requests by the guest, which then has to send many more requests through the exception/hypervisor channel.

You can get a 'cleaner' experience by using SCSI devices on the guest, as the SCSI interface is a better match to the OS file system API.
I believe you can get an even better experience if the controller for your hard disks implements the SCSI protocol itself (RAID controllers for SATA drives tend to appear as SCSI adapters to Windows). I believe this is the function of the 'storage bus' driver: to inject I/O requests into the OS at a lower level.

If you want to improve your disk performance, you can avoid the file system overhead by using a raw physical disk. In Virtual Server 2005, this is done by creating a Linked Virtual Hard Disk. This does mean you need a separate physical hard disk per VM. If you do have a RAID controller, it might be able to carve out a separate volume to present to the host OS from one or more physical disks. However, if you're looking at doing this anyway, you should be aware of the physical characteristics of hard disks and how they behave when handling random and sequential I/Os. Basically, the observed speed of a hard disk is governed by the disk head seek time, which is the reason that sequential I/O is far faster than random I/O. If you require sequential I/O speeds but you share the physical disk with something else doing random I/O, your sequential I/O performance is destroyed.

As to why the OS installation is slow, it's usually the case that the OS has to install using only the drivers available on its boot disc. There isn't much opportunity to load drivers for any devices that weren't known when the OS boot disc was built, and that certainly applies to Hyper-V's synthetic devices. The one place that Windows allows drivers to be added is loading new storage bus drivers, by pressing F6 when Windows 2000/XP/2003 is loading, or by clicking Load Driver at the 'select volume for installation' prompt in Windows Vista or 2008. For Virtual Server 2005, if the guest OS hard drive is attached to the emulated SCSI controller, there is a virtual floppy you can attach to load the Additions SCSI driver by pressing F6.
I would have expected this to be the case for Hyper-V too (note that Additions are now called Integration Components), but I haven't yet installed it.

If the speed of guest OS installation from optical media is a concern, another option is to configure your VM [with an emulated NIC] to boot off the network and then do an install via WDS (Windows Deployment Services). That is my preferred primary guest OS install mechanism.

You really only need to do an OS install once. After that you can simply copy the installed VM and run sysprep or NewSID. Not even an issue. Anyway, what sysadmin has time to sit and watch an install? The VM install is always waiting for me to come back as I'm doing other things.

So the upshot is to not bother with Hyper-V for OSes that don't have Integration Components? Best to stick with Virtual Server / VMware Server for these guests?

From what I've seen, it seems that the biggest bottleneck is the CD/DVD access speed. It's been one of my few serious gripes about Hyper-V so far: if I'm pointing at an ISO it should be quite fast (OS installs via an ISO are very fast in VMware). Even in a running guest OS, reading ISOs is unusually slow, and incurs significant guest CPU time.
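The emulated-versus-synthetic distinction described in the comments above can be caricatured with a toy cost model. This is an illustration only, not Hyper-V's actual code: the key idea is that an emulated device is programmed one register access at a time, and each access traps out of the guest, while a synthetic device posts a whole request over a shared channel in a single transition. The register count per sector below is an arbitrary assumption for the sketch:

```python
# Toy model: count guest-to-VMM transitions ("exits") for the same disk
# write issued through an emulated device vs. a synthetic device.
# Illustrative only -- not how Hyper-V is actually implemented.

class ExitCounter:
    """Stands in for the VMM; just tallies guest exits."""
    def __init__(self):
        self.exits = 0

def emulated_disk_write(vmm, sectors):
    # An emulated IDE-style device is driven register by register, and
    # every register access traps to the VMM. Assume 6 accesses/sector
    # (LBA bytes, sector count, command, status poll -- made-up figure).
    for _ in range(sectors):
        for _register_access in range(6):
            vmm.exits += 1

def synthetic_disk_write(vmm, sectors):
    # A synthetic device posts one descriptor for the whole request over
    # a VMBus-style channel: one exit regardless of request size.
    vmm.exits += 1

vmm_a, vmm_b = ExitCounter(), ExitCounter()
emulated_disk_write(vmm_a, sectors=128)
synthetic_disk_write(vmm_b, sectors=128)
print(vmm_a.exits, vmm_b.exits)  # 768 vs 1
```

However rough, this models why OS installation (which runs entirely on emulated devices) feels slow while the same VM is fast once the synthetic drivers are installed: the emulated path pays a per-access exit cost that the synthetic path amortizes over the whole request.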