On my data engineer way … like the song by DJ Tiesto – On My Way…. It's cool, isn't it? OK, in this article I am going to show you how to:

- set a custom firewall rule
- create SQL Database logins
- set database-level permissions

OK, enough. Let's go!

Creating a SQL Database

Go to Azure SQL databases and create WineDb. You need to follow this path and set the instance of the database on your server: Home > SQL databases > Create SQL Database. After that, open the Query editor (to open it you need to set firewall rules first; see the next paragraph), then copy, paste and execute my code to create the tables: https://github.com/aiflops/LocalDBMigToAzure/blob/master/Create-WineDb You will see WineDb and its tables after a quick refresh.

Set firewall rules

The SQL Database firewall prevents all connections to SQL Database unless they are explicitly allowed via a firewall rule. This includes your own attempts to connect, even through the Azure portal's Query editor. Follow these steps to add your IP address to the server-based firewall rules:

- Log in to Microsoft Azure at https://portal.azure.com/
- Click SQL databases in the vertical navigation pane on the left.
- Choose WineDb.
- Click "Set server firewall" in the horizontal navigation pane on the right.
- You can choose Add client IP, which automatically adds your IP address to the firewall settings.
- Alternatively, in the RULE NAME text box beneath the list of existing rules, type Home Office (note that the rule name cannot contain forward slash (/) or backslash (\) characters), and in the START IP ADDRESS and END IP ADDRESS boxes type the IP address range of your home network.
- Click the Save button.

Create SQL Database logins

First of all, we need to copy the full server name. We can find it at the top of the SQL database page (see the figure below). Then open SSMS and, in the connection dialog, paste the server name from Azure, then type your server admin login and password. Once connected, your SQL Database server will be listed in the Object Explorer pane. Right-click the server name and choose New Query.
This opens a new query window connected to the master database. Type the following T-SQL statement:

CREATE LOGIN DbLogin WITH PASSWORD='<Password>'

You have now created a new SQL Database server login; however, this login isn't authorized to do anything with the SQL Database server yet. You now need to create a user for this login and then grant that user server-level or database-level permissions.

Creating a read-only user

When using SQL Database, you don't want to grant all users server-level permissions. If you have an application that only needs to read data from a database, you don't want to give the user the application connects as permission to write data. In this case, you can create a new user who has read-only database permissions. Right-click the WineDb database and choose New Query. This opens a new query window connected to the WineDb database. Type the following T-SQL statements into the window connected to WineDb:

CREATE USER DbUser FROM LOGIN DbLogin
EXEC sp_addrolemember 'db_datareader', 'DbUser'

To reconnect to the database without errors, type the DbLogin credentials in the connection dialog, then go to Options -> Connection Properties -> Connect to database: WineDb and click the Connect button. SSMS connects successfully. You have now connected to the WineDb SQL Database using SSMS with a limited-access account that has only db_datareader permissions. As explained, this is useful when you need to give a team member read-only access to a production database to diagnose issues or run some analysis on the data.

OK, so in this article you've done a great job. Now you know how to:

- create a database,
- insert data,
- create a user login,
- add user permissions,
- sign in with the new login to Azure SQL from your local machine.

We've made the next step on our Data Engineer ROAD. Thanks for your attention! Please put more smiles in the digital world
"du -h -a" does not show all disk usage

When I run df -h, I get the following output, indicating that /dev/sda6, which is mounted on /, is using 100% of its disk space:

Filesystem Size Used Avail Use% Mounted on
/dev/sda6 29G 29G 34M 100% /

I navigated to / and ran du -h -a --max-depth=1 | sort -h to see which folder uses the most space. I got a total of only 2.6 GB of disk usage:

2,6G .
1,2G ./usr
573M ./opt
448M ./var
114M ./lib
93M ./root
87M ./home
7,5M ./etc
5,8M ./sbin
5,7M ./bin
4,3M ./lost+found
3,6M ./lib32
8,0K ./dev
0 ./vmlinuz
0 ./tmp
0 ./sys
0 ./srv
0 ./selinux
0 ./proc
0 ./mnt
0 ./media
0 ./lib64
0 ./initrd.img
0 ./forcefsck
0 ./ext
0 ./dead.letter
0 ./boot
0 ./000-default-ssl

Running du -h -s -x / as suggested in the comments also shows only 2.6 GB used, whereas df -h says I have 29 GB used on /dev/sda6, which is mounted at /. Where are the rest of the files?

Comments:

- Why did you run with --max-depth=1? Are you not interested in the whole file tree? Do du -h -s -x / instead.
- du -h -s -x / also shows 2.6 GB used, where df -h says I have 29 GB used for /dev/sda6 which is mounted at /.
- A log file that was deleted but is still being written to? Try lsof -n | egrep -w 'DEL|deleted'. See if there's a deleted file still in use that rings a bell. Also, what is the filesystem type?
- @A.B I ran df -T, and it shows xfs under type. I updated my question; I posted the output of lsof -n | egrep -w 'DEL|deleted'.
- Well, I had hoped you'd look at lsof and its meaning. Anyway, there's nothing useful in this lsof. (No more ideas here.)
- Bind mount the fs to another location and then du that: mount --bind / /mnt; du -sh /mnt
- What filesystem is it? I can mount a certain Btrfs subvolume as / while leaving other subvolumes completely inaccessible in the current directory tree. Still df reports the usage of the whole filesystem.
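The "deleted but still open" effect suggested in the comments can be demonstrated with a small sketch (Python here for portability): after unlink() the path is gone, so du no longer counts the file, but the open descriptor keeps the inode and its blocks alive until it is closed, which is exactly the kind of usage df sees but du doesn't.

```python
import os
import tempfile

# Create a 1 MiB file, keep its descriptor open, then delete the path.
fd, path = tempfile.mkstemp()
os.write(fd, b"\0" * (1 << 20))
os.unlink(path)                 # du no longer sees the file...
size = os.fstat(fd).st_size     # ...but the inode still holds 1 MiB,
print(size)                     # which df keeps counting as used
os.close(fd)                    # the space is released only on close
```

While the descriptor is open, `lsof -n | egrep -w 'DEL|deleted'` would list the file as deleted-but-in-use.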
import math

import numpy as np

# Stores the number of iterations
count = 0


def k_means(center1, center2, center3, coordinates):
    """Assign each point to the nearest of three seeds by Euclidean
    distance, recompute each centroid as its cluster's mean, and recurse
    until the centroids no longer change between consecutive rounds."""
    cluster_1, name_1 = [], []
    cluster_2, name_2 = [], []
    cluster_3, name_3 = [], []
    center_1, center_2, center_3 = center1, center2, center3
    global count
    for x in range(len(coordinates)):
        eDistance1 = math.dist(center_1, coordinates[x])
        eDistance2 = math.dist(center_2, coordinates[x])
        eDistance3 = math.dist(center_3, coordinates[x])
        min_dist = min(eDistance1, eDistance2, eDistance3)
        # elif/else so a point tied between clusters is assigned only once
        if min_dist == eDistance1:
            cluster_1.append(coordinates[x])
            name_1.append(x + 1)
        elif min_dist == eDistance2:
            cluster_2.append(coordinates[x])
            name_2.append(x + 1)
        else:
            cluster_3.append(coordinates[x])
            name_3.append(x + 1)

    # Computing the mean value of each cluster gives the new centroid
    c1 = list(np.mean(cluster_1, axis=0))
    c2 = list(np.mean(cluster_2, axis=0))
    c3 = list(np.mean(cluster_3, axis=0))

    with open("output.txt", "a") as f:
        # Convergence point of the algorithm: the new centroids equal the
        # current ones
        if c1 == center_1 and c2 == center_2 and c3 == center_3:
            f.write("The total number of iterations is " + str(count))
            return
        f.write("Iteration " + str(count) + "\n\n")
        count += 1
        f.write("Cluster 1: " + str(name_1).replace('[', '').replace(']', '') + "\n")
        f.write("Centroid: " + str(tuple(center_1)) + "\n\n")
        f.write("Cluster 2: " + str(name_2).replace('[', '').replace(']', '') + "\n")
        f.write("Centroid: " + str(tuple(center_2)) + "\n\n")
        f.write("Cluster 3: " + str(name_3).replace('[', '').replace(']', '') + "\n")
        f.write("Centroid: " + str(tuple(center_3)) + "\n\n")
    k_means(c1, c2, c3, coordinates)


coordinates = [[2, 10], [2, 5], [8, 4], [5, 8], [7, 5], [6, 4], [1, 2], [4, 9]]

# When running k-means, set the initial seeds (initial centroid of each
# cluster) as examples 1, 4 and 7.
c_1 = [2, 10]
c_2 = [5, 8]
c_3 = [1, 2]

# Call k_means with the initial seeds to create 3 clusters for the points
k_means(c_1, c_2, c_3, coordinates)
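As an aside, the per-point assignment loop above can also be written without an explicit Python loop, using NumPy broadcasting; this is just a sketch of the first assignment round with the same points and seeds as the script:

```python
import numpy as np

points = np.array([[2, 10], [2, 5], [8, 4], [5, 8], [7, 5], [6, 4], [1, 2], [4, 9]])
centers = np.array([[2, 10], [5, 8], [1, 2]])

# Distance from every point to every centroid via broadcasting:
# (8, 1, 2) - (1, 3, 2) -> (8, 3, 2), then the norm over the last axis
# gives an (8, 3) distance matrix.
dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
assignment = dists.argmin(axis=1)  # index of the nearest centroid per point
print(assignment)                  # → [0 2 1 1 1 1 2 1]
```

The argmin over the distance matrix replaces the chain of if/elif comparisons, and ties resolve to the lowest centroid index.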
It’s official. Facebook are continuing their efforts to make the distinction between Workplace and consumer Facebook more obvious. This is clearly a good thing. When I talk to potential customers, most are very enthusiastic but about 10% of people say, “I don’t trust Facebook”. I immediately start talking about Workplace’s ISO certifications, how the cost model for Workplace is totally different to Facebook, how you are the Data Owner and how all the big companies have had their brushes with data privacy controversy. It’s a well practiced speech (and I truly believe all of it) but for most of that 10%, the die is already cast and nothing I say will change their view that Facebook is just after their work data as well as their personal. So this is an attempt by Facebook to create further distinction between the brands and platforms. For us, we need to think about how it will impact our Workplace instance and so I guinea pigged my Coolr colleagues and migrated us at the first opportunity. I broke our things so that you didn’t have to. And the good news is….. everything just worked. Honestly, I had planned on selling a “Coolr migration preparation” package but it’s hard to sell a support package when the underlying process is so smooth and simple. Well done Facebook – grrrrrrrr. Let’s have a quick look at what’s involved. If you’re an admin of your Workplace instance, you’ve probably already seen this notification: This was added to all Workplace instances on March 25th. The “Learn More” button takes you to an excellent FAQ, while the “Admin Panel” button takes you to the preferences subsection of the admin pages: Which now has a new, big toggle at the top. Flick that to “yes”, hit save…and you’re done! Now, your users will see this notification the next time they access Workplace: And will be automatically redirected to https://yourCompany.workplace.com. 
No need to re-sign in (admins do need to log in again the next time they access the admin panel), no scary alerts or errors. I’ve tested this in both SSO and “Password” (Facebook’s login mechanism) environments and they both work equally well. Facebook have even taken the time to set up redirects so that anyone accessing https://yourCompany.facebook.com is redirected to the correct URL. Facebook have advised us that these redirects will be in place “for around six months”, which will give you time to talk to your people and get all of those bookmarks and documents updated. It’s also worth noting that Facebook have said companies will automatically be migrated after six months. Clearly it’s better to migrate now and have the redirects in place, than wait six months and be automatically migrated just as the redirects are being removed. Finally, if you do hit any issues, Facebook have been kind enough to give us a backout (something rather rare in this cloud-focused world we now operate in). Once enabled, the toggle in the admin panel remains – so you can flick it back to “off”. Your users will get a similar notification and the redirects are even reversed, so https://yourCompany.workplace.com will be redirected to https://yourCompany.facebook.com. All of the generic URLs work under both domains:

https://graph.facebook.com = https://graph.workplace.com
https://www.facebook.com/scim = https://www.workplace.com/scim
https://work.facebook.com = https://work.workplace.com

And the experience is equally smooth on mobile and desktop. In fact, the only problem I’ve encountered so far is a cyber squatter who’s registered worklace.com. I stumbled over it when I fat-fingered the URL during testing. Currently there isn’t anything at that address, but it wouldn’t be hard for a malicious actor to activate it and cause some mischief. One thing to note is that Facebook have said that you don’t need to change any SSO config “at the moment”.
The inference being that you may need to in future. I’ve asked for further clarification on this point and will update this blog once I have it. So the migration is easy, quick and painless for your users. Kudos Facebook! If you would like any support, feel free to reach out to me, email@example.com, but if I were you I wouldn’t hesitate to give it a go yourself.

Update 01/04/19: When Workplace sends you an email, for example “Joe Bloggs just posted something really interesting – go have a look https://yourCompany.facebook.com”, the email still points to *.facebook.com post-migration.
M: OSv, a new open-source operating system for virtual machines - pwg http://mailman.cs.huji.ac.il/pipermail/linux-il/2013-September/010649.html

R: Meai It says it's optimized for single executing applications and "cloud workloads". Well, isn't there always going to be at least monitoring software running on my server? So that would be 2 applications already.

R: auggierose I guess you would monitor the virtual machine from the outside, not from within.

R: rlpb It sounds like this just reinvents the "process". We already have virtualisation of processes; that's what OSes do already. So this just sounds like a minimal Linux. What am I missing?

R: jacobquick "Another refreshing feature of OSv is that is written in C++. It's been 40 years since Unix was (re)written in C, and the time has come for something better." can't stop laughing

R: olsonjeffery Has there been anything like this, aside from Singularity OS, in recent years? The license, vis-a-vis Singularity, is obviously much more appealing. Definitely something to keep an eye on.

R: pjmlp There is Drawbridge, http://research.microsoft.com/en-us/projects/drawbridge/, which follows the idea of having the whole OS as a library with a pico hypervisor. Then you have all the language runtimes that run directly on the hardware, like Erlang on Xen, MirageOS and so on.

R: ogrisel Assuming the host OS is Linux, I wonder what are the benefits of "KVM + OSv + my_app" vs "LXC + my_app" (e.g. via docker.io).
Vista SP1 updates send some PCs into endless reboot - 20 February, 2008 08:46 Updates that Microsoft began feeding Windows Vista users last week to prep PCs for next month's release of Service Pack 1 (SP1) have crippled some machines, according to messages posted to the company's support site. Microsoft said it is investigating the reports. Last week, Microsoft started sending Vista users two final prerequisite updates that are required before SP1 can be installed in March. The updates to the operating system's install components were delivered via Windows Update, which automatically downloaded and installed them on the majority of Vista machines. Users quickly started squawking. In most cases, they reported that the final update hung while displaying the message "Configuring Updates Step 3 of 3 -- 0% Complete," which was followed by a reboot of the PC. Which was followed by another reboot, and another. "[It] reboots ad infinitum," said Frank Melk on the Microsoft support newsgroup. A smaller number reported a different problem: After the update, their computer refused to boot normally. Trying to boot into Safe Mode did no good, users said; the reboot loop cranked up then as well. "I am unsure as to what to do, because entering Safe Mode gives the same screen," Melk said. "Furthermore, I have no restore points saved, so going back to a known previous good config is no good either!" Melk's mention of restore points referred to Windows Vista's System Restore, a tool that periodically takes a "snapshot" of the PC. Also called restore points, they can be called up to return the machine to its condition at the time the snapshot was taken. Some users who posted messages to the same newsgroup said that they had managed to regain control of the computer by booting from their Vista install DVD and selecting the "Restore from a previous restore point" option. "The first two restore points available to me failed," noted another user, pegged as phazedoubt. 
"I had to go back three days before I found one that worked." Others said they had been in touch with Microsoft support representatives -- the company offers free support to consumers on all update issues through a toll-free number or e-mail -- and claimed that they had been told to boot from their Vista media and choose "Run Startup Repair." "Apparently, so Microsoft says, my machine was restarted thinking it had downloaded an update, but really the update hadn't been downloaded," said user bicksbah on the support newsgroup. "So, upon reboot, it couldn't find the update and Vista kept trying to install it endlessly." Microsoft was aware of the problem by late Friday, when someone identified as Darrell Gorter, who claimed to be with the company, asked users to send him log files "to determine a cause for the issue." On Monday, however, a company spokeswoman had little more to offer. "We are currently looking into this but have no additional information to share at this time," she said. "We apologize for any inconvenience this may be causing our users." Messages left on support newsgroups, in fact, show that the problem was first reported by users in December when Microsoft offered a Vista SP1 release candidate to the general public, but required them to download and install the prerequisite updates manually, one after the other. The complaint volume picked up last month when Microsoft opened a second, more finished, build to all comers. Some users remained frantic because they did not have a Vista install or recovery disc; computer makers often forgo such niceties, instead putting the recovery files on the PC's hard drive. Others took Microsoft to the woodshed. "What blows me away is that Microsoft has not posted anything on the site yet, at least nothing I could find," said redwinger in a message to the newsgroup on Sunday. "They should at least say, 'We know we have an issue and we are looking into it.' 
I bought the laptop with the OS loaded and I don't have the recovery disk. How screwed am I?" Although it's impossible to gauge the extent of the prerequisite reboot problem from the support forums, the traffic on the Windows Vista Service Pack 1 (SP1) newsgroup is substantial: the thread to which Melk, Gorter and phazedoubt posted included nearly 80 messages by midday Sunday, and had been viewed more than 23,000 times. Two weeks ago, Microsoft announced it had completed SP1, and was sending it to resellers for installation on new computers and for duplication to prep retail copies. At the same time, however, it said it would not make it available to most users until mid-March, and would not deliver it automatically via Windows Update until April.
Advised read: Getting started with Platypush (Medium article). The wiki also contains many resources on getting started. Extensive documentation for all the available integrations and messages is available on ReadTheDocs. Also check the other Medium stories to get more insights on what you can build with it and inspiration about possible usages.

Imagine Platypush as some kind of IFTTT on steroids - or Tasker, or Microsoft Flow, or PushBullet on steroids. Platypush aims to turn any device into a smart hub that can control things, interact with cloud services and send messages to other devices. It's a general-purpose lightweight platform to process any request and run any logic triggered by custom events. Imagine being able to run any task you like, or automate any routine you like, on any of your devices. And the flexibility of executing actions through a cloud service, with the power of running them from your laptop, Raspberry Pi, smart home device or smartphone.

You can use Platypush to do things like:

- Control your smart home lights
- Control your favourite music player
- Interact with your voice assistant
- Get events from your Google or Facebook calendars
- Read data from your sensors and trigger custom events whenever they go above or below some custom thresholds
- Control the motors of your robot
- Send automated emails
- Synchronize the clipboards on your devices
- Control your smart switches
- Implement custom text-to-speech commands
- Build any kind of interaction with your Android device using Tasker
- Play local videos, YouTube videos and torrent links
- Get the weather forecast for your location
- Build your own web dashboard with calendar, weather, news and music controls (basically, anything that has a Platypush web widget)
- ...and much more (basically, anything that comes with a Platypush plugin)

Imagine the ability of executing all the actions above through messages delivered through:

- A web interface
- A JSON-RPC API
- Raw TCP messages
- Web sockets
- ...and much more (basically, anything that comes with a Platypush backend)

Imagine the ability of building custom event hooks to automatically trigger any actions:

- When your voice assistant recognizes some text
- When you start playing a new song
- When a new event is added to your calendar
- When a new article is published on your favourite feed
- When the weather conditions change
- When you press a Flic button with a certain pattern
- When you receive a new push on your Pushbullet account
- When your GPS signal enters a certain area
- Whenever a new MIDI event is received (yes, you heard well :) )
- Whenever a sensor sends new data
- At a specific date or time
- ...and so on (basically, anything can send events that can be used to build hooks)

Imagine the ability of running the application, with lots of those bundled features, on any device that comes with Python (version 3.6 or higher). Platypush has been designed with performance in mind; it's been heavily tested on slower devices like Raspberry Pis, and it can run the web server features, multiple backends and plugins quite well even on a Raspberry Pi Zero. It's even been tested with some quite impressive performance on an older Nokia N900, and of course you can run it on any laptop, desktop or server environment. It's been developed mainly with IoT in mind (and some of its features overlap with IoT frameworks like Mozilla IoT and Android Things), but nothing prevents you from automating any task on any device and environment.
Reverse proxy a Wordpress multi-site under a folder in another Wordpress multi-site

We have 2 Wordpress multisite installations on 2 different servers and we are trying to have the testing environment accessed as a subfolder under the production environment. So if the production site is www.production.com, we want the testing site to be www.production.com/testing. We have looked at countless examples to accomplish this, and we have made some progress, but the main difference is that we have a multisite on top of another multisite. We have configured the servers as shown below and we got the main page for the testing site to display under www.production.com/testing; we also got a sub-site to load under www.production.com/testing/subsite1/.

Here are the issues. When we go to www.production.com/testing/wp-admin/, we end up at www.production.com/wp-login.php?return_url=xx... If we add testing to this URL (www.production.com/testing/wp-login.php?return_url=xx..), the URL does not change but we can see that the login page came from production.com. Also, it appears the main content of pages in sub-sites is correct, but all the images and styles seem to be coming from the production site. Here is where we are so far.
Production Server

We’ve put the following changes into the nginx Production.conf under sites-enabled:

location /testing/ {
    rewrite ^/testing/(.*)$ /$1 break;
    proxy_pass https://www.testing.com/;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-Port 443;
}

if (!-e $request_filename) {
    rewrite /testing/wp-admin$ $scheme://$host$uri/ permanent;
    rewrite /wp-admin$ $scheme://$host$uri/ permanent;
    rewrite ^/testing(/wp-((?!json).)*)$ /testing$1 last;
    rewrite ^/testing(/.*\.php.*)$ /testing$1 last;
    rewrite ^/[_0-9a-zA-Z-]+(/wp-((?!json).)*)$ $1 last;
    rewrite ^/[_0-9a-zA-Z-]+(/.*\.php)$ $1 last;
}

location @prod_serv {
    if (!-e $request_filename) {
        rewrite "^/testing/wp-content/uploads/(.*)$" "https://www.testing.com/wp-content/uploads/$1" redirect;
        rewrite "^/testing/(.*)/wp-content/uploads/(.*)$" "https://www.testing.com/wp-content/uploads/$2" redirect;
        rewrite "^(.*)/wp-content/uploads/(.*)$" "https://www.production.com/wp-content/uploads/$2" redirect;
    }
}

Testing server

No changes to the nginx configuration file on the testing side. The following tables were updated on the testing side.

wp_options table

option_name | option_value
--------------------------------------------
siteurl     | (ProductionDomain)/testing
home        | (TestingDomain)

wp_4_options table

option_id | option_name | option_value
------------------------------------------------
1         | siteurl     | (ProductionDomain)/testing/subsite1
2         | home        | (TestingDomain)/subsite1

Any advice from anyone would be greatly appreciated.
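One thing worth checking for the wp-admin symptom: if the testing site replies with absolute redirects (e.g. to /wp-login.php), a plain proxy_pass hands them to the browser without the /testing prefix. This is only a sketch under that assumption, with the same hostnames as above; nginx's proxy_redirect directive can map the upstream's Location headers back under the subfolder:

```nginx
location /testing/ {
    rewrite ^/testing/(.*)$ /$1 break;
    proxy_pass https://www.testing.com/;
    proxy_set_header Host www.testing.com;
    # Map redirects issued by the testing site back under /testing/ so
    # e.g. a redirect to /wp-login.php becomes /testing/wp-login.php.
    proxy_redirect https://www.testing.com/ /testing/;
    proxy_redirect / /testing/;
}
```

Whether this fixes the login bounce depends on whether WordPress is issuing the redirect with the testing or the production hostname, so it is a diagnostic starting point rather than a complete answer.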
[MacPorts] ProblemHotlist modified noreply at macports.org Fri Apr 6 01:57:54 UTC 2018 Page "ProblemHotlist" was changed by raimue Diff URL: <https://trac.macports.org/wiki/ProblemHotlist?action=diff&version=139> Comment: Such problems are detected and fixed by rev-upgrade --- ProblemHotlist (version: 138) +++ ProblemHotlist (version: 139) @@ -40,31 +40,6 @@ indicates that your `install_name_tool` or `strip` commands (both are part of the Xcode command line tools) are too old to deal with the types of objects produced by the compiler. (The number after "unknown load command" may be different.) This can happen if you have forgotten to [https://guide.macports.org/chunked/installing.xcode.html update the Xcode command line tools] to a version designed for your version of OS X. -== Build failures after upgrading to ncurses 6 == #ncurses6 -After upgrading ncurses to version 6, a port may fail to build, with the following error message in the main.log or config.log file: -dyld: Library not loaded: /opt/local/lib/libncurses.5.dylib - Referenced from: /opt/local/lib/libreadline.6.dylib - Reason: no suitable image found. Did find: - /usr/lib/libncurses.5.dylib: no matching architecture in universal wrapper -The solution is to upgrade readline: -sudo port upgrade readline -Then clean the port that originally failed to build, and try to build it again. -The problem occurs when building ports that use an autotools-based configure script, after having upgraded ncurses to version 6 but before having upgraded readline to use ncurses 6. The reason the problem occurs is that part of the boilerplate that autotools bakes into every configure script is to locate an awk implementation. The first one it checks for is gawk, so if the gawk port happens to be installed, a configure script will try to use that. gawk depends on readline, which depends on ncurses, and if you have upgraded ncurses but haven't upgraded readline to use the new ncurses, then gawk will be broken. 
-MacPorts usually upgrades dependencies first, so you wouldn't see this problem if gawk were listed as a dependency of the port that failed to build. But we don't want to add unnecessary gawk dependencies to thousands of configure-based ports when the awk implementation that's part of OS X would work just as well.

== Incompatible library version: X requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0 == #libiconv-version

You might see this error when running a program, or in the main.log or config.log when a port fails to configure or build:

Page URL: <https://trac.macports.org/wiki/ProblemHotlist>

Ports system for macOS

This is an automated message. Someone added your email address to be notified of changes on 'ProblemHotlist' page. If it was not you, please report to admin at macports.org.

More information about the macports-changes mailing list
An opinionated starter kit for Obyte, with Quasar. Status: scaffolding, not ready for use Mere inspiration make not the app alone - as they with great ideas may not necessarily great developers be. Especially within the domain of applied cryptography and virtual currency, novice’s mistakes are more likely than beginner’s luck. FirstByte minimizes these risks by enabling those with moderate skill (or a willingness to learn) to produce shippable best-practice apps for all major computing platforms from one code base and in record time. Tailored to meet the needs of both Obyte enthusiasts and professional developers, the core of FirstByte is built on the Quasar Framework and obyte.js - and is distributed under the permissive MIT license. Inheritance is a powerful pattern, and this project will leverage it. FirstByte in and of itself is a lerna-based monorepo hosted at a public git repository where all of the development and CI pipelines are managed. When it is viable, it will be available as a WIP via NPM and YARN as well as from IPFS and DAT. Quasar delivers a number of things “from the core” that are technically available in all starters. 
- quasar-cli for best in class development
- All ES6 language features available
- ESLint in Standard style
- the innate ability to build SPA, PWA, SSR, Cordova and Electron apps

The Pure Flavour

The Pure flavour will be a minimal setup that extends the basic Quasar core with:

- a collection of maintenance scripts for development, publication and distribution
- multiple test-runners, including Jest, Cypress and Webdriver-based E2E with sample tests
- ESLint extended with A11Y “lifting”
- Gitlab pipelines and dockerfiles
- GraphQL / Apollo / Prisma API and DB
- the obyte.js library
- integration with now.js for immediate project delivery
- automated documentation of APIs and functions
- robust system configuration and secret management using ENV variables
- developer-configurable colors, icons, background images, graphs and animations
- i18n language translation engine rigged for use with content, not just interface

In addition to that which is provided by Obyte, Quasar, vue.js and obyte.js, the FirstByte Pure starter will describe in detail every component, plugin and script. The factors that make each flavour distinct will be described in minutiae. Partially generated from JSDoc comments, and partially handwritten by the authors, the documentation will be maintained as a “living document”, most likely built with Storybook.js. Further, the entire documentation will be i18n-based and translated using the Utopian.io / Davinci service.

This project uses yarn, a modern version of node (10 at the time of this writing) and lerna for monorepo management. It is linted with ESLint and designed with Quasar 1.0 in mind, although it may be possible to use it with legacy Quasar.

The project uses yarn. Refer to its documentation to install it.

yarn global add lerna

Install dependencies of all the packages

This will bootstrap all the projects of the monorepo. If you only want to install a specific project, open the project folder and follow the instructions in the README.md.

lerna link
lerna bootstrap

MIT - Copyright 2018 Daniel Thompson-Yvetot and Razvan Stoenescu
FirstByte Logo & Wordmark - CC-ND-NC Daniel Thompson-Yvetot
# -*- coding: utf-8 -*-
# -*- test-case-name: pytils.test.templatetags.test_dt -*-
"""
pytils.dt templatetags for Django web-framework
"""

import time

from django import conf, template, utils

from pytils import dt
from pytils.templatetags import init_defaults

register = template.Library()  #: Django template tag/filter registrator
debug = conf.settings.DEBUG  #: Debug mode (set in Django project's settings)
show_value = getattr(conf.settings, 'PYTILS_SHOW_VALUES_ON_ERROR', False)  #: Show values on errors (set in Django project's settings)

default_value, default_uvalue = init_defaults(debug, show_value)


# -- filters --

def distance_of_time(from_time, accuracy=1):
    """
    Display distance of time from current time.

    Parameter is an accuracy level (default is 1).
    Value must be numeral (i.e. time.time() result) or
    datetime.datetime (i.e. datetime.datetime.now() result).

    Examples::

        {{ some_time|distance_of_time }}
        {{ some_dtime|distance_of_time:2 }}
    """
    try:
        to_time = None
        if conf.settings.USE_TZ:
            to_time = utils.timezone.now()
        res = dt.distance_of_time_in_words(from_time, accuracy, to_time)
    except Exception as err:
        # because filter must die silently
        try:
            default_distance = "%s seconds" % str(int(time.time() - from_time))
        except Exception:
            default_distance = ""
        res = default_value % {'error': err, 'value': default_distance}
    return res


def ru_strftime(date, format="%d.%m.%Y", inflected_day=False, preposition=False):
    """
    Russian strftime, formats date with given format.

    Value is a date (supports datetime.date and datetime.datetime),
    parameter is a format (string).

    For explanations about the format, see the documentation for the
    original strftime: http://docs.python.org/lib/module-time.html

    Examples::

        {{ some_date|ru_strftime:"%d %B %Y, %A" }}
    """
    try:
        res = dt.ru_strftime(format,
                             date,
                             inflected=True,
                             inflected_day=inflected_day,
                             preposition=preposition)
    except Exception as err:
        # because filter must die silently
        try:
            default_date = date.strftime(format)
        except Exception:
            default_date = str(date)
        res = default_value % {'error': err, 'value': default_date}
    return res


def ru_strftime_inflected(date, format="%d.%m.%Y"):
    """
    Russian strftime with inflected day, formats date with given format
    (similar to ru_strftime), also inflects day in proper form.

    Examples::

        {{ some_date|ru_strftime_inflected:"in %A (%d %B %Y)" }}
    """
    return ru_strftime(date, format, inflected_day=True)


def ru_strftime_preposition(date, format="%d.%m.%Y"):
    """
    Russian strftime with inflected day and correct preposition,
    formats date with given format (similar to ru_strftime),
    also inflects day in proper form and inserts correct preposition.

    Examples::

        {{ some_date|ru_strftime_preposition:"%A (%d %B %Y)" }}
    """
    return ru_strftime(date, format, preposition=True)


# -- register filters

register.filter('distance_of_time', distance_of_time)
register.filter('ru_strftime', ru_strftime)
register.filter('ru_strftime_inflected', ru_strftime_inflected)
register.filter('ru_strftime_preposition', ru_strftime_preposition)
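The "filter must die silently" pattern used by every filter above — return a fallback string on any error instead of raising, because a template should never crash on a bad value — can be sketched standalone, without Django or pytils (the decorator and `shout` names here are illustrative, not from the original module):

```python
def silent_filter(func, fallback=""):
    """Wrap a template filter so that any error yields a fallback value."""
    def wrapper(value, *args, **kwargs):
        try:
            return func(value, *args, **kwargs)
        except Exception:
            # because filter must die silently
            return fallback
    return wrapper

@silent_filter
def shout(value):
    return value.upper()

print(shout("hello"))  # HELLO
print(shout(None))     # "" -- None has no .upper(), error is swallowed
```

The real filters refine this with `default_value % {'error': err, 'value': ...}` so that, when `PYTILS_SHOW_VALUES_ON_ERROR` is set, the error and the offending value are shown instead of a silent fallback.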
import json
import operator
import pathlib
import pickle

import fire
import numpy as np
import requests
from tqdm import tqdm

# from openpose_new_data import pose_from_openpose


def pose_from_openpose(file_paths, score_thres=0.1):
    all_points = []
    for file_path in file_paths:
        points_for_frame = []
        with open(file_path) as json_file:
            content = json.load(json_file)
        people = content['people']
        if len(people) != 1:
            # skip frames that do not contain exactly one detected person
            continue
        keypoints = people[0]['pose_keypoints_2d']
        for i in range(25):
            starting = i * 3
            x, y, score = keypoints[starting:starting + 3]
            if score < score_thres:
                x = y = 0.
            point = [np.round(x, 3), np.round(y, 3)]
            points_for_frame.append(point)
        all_points.append(points_for_frame)
    return all_points


def vote_majority(results, score_thresh=0.5):
    """Majority vote over per-window predictions.

    Arguments:
        results {list of dict} -- each dict is a prediction from the web API
        score_thresh {float} -- only top-1 scores at or above this are counted

    Returns:
        list of (cls_name, count) ordered by count, descending
    """
    labels = results[0]['labels']
    top1 = [r['labels'][np.argmax(r['scores'])]
            for r in results if np.max(r['scores']) >= score_thresh]
    print("Short predictions:", ",".join(top1))
    ordered_labels = sorted([(L, top1.count(L)) for L in labels],
                            key=operator.itemgetter(1), reverse=True)
    if ordered_labels:
        return ordered_labels
    return [('None', 0.)]


def vote_mean_score(results):
    labels = results[0]['labels']
    scores = np.array(list(map(operator.itemgetter('scores'), results)))
    mean_score = np.mean(scores, axis=0)
    ordered_labels = sorted(zip(labels, mean_score),
                            key=operator.itemgetter(1), reverse=True)
    return ordered_labels


def multi_predict_clip(clip_json_dir, span=32, stride=16,
                       ddnet_host='http://localhost:5000'):
    all_json_list = sorted(map(str, pathlib.Path(clip_json_dir).rglob("*.json")))
    X = np.array(pose_from_openpose(all_json_list))
    print(X.shape)
    results = []
    for start in range(0, X.shape[0], stride):
        p = X[start:start + span, :, :]
        r = requests.post(ddnet_host, json=p.tolist())
        assert r.ok
        results.append(r.json())
    return results


def infer_one_clip(clip_json_dir, voting='majority', *args, **kwargs):
    results = multi_predict_clip(clip_json_dir, *args, **kwargs)
    if voting == 'majority':
        return vote_majority(results)
    elif voting == 'mean':
        return vote_mean_score(results)
    else:
        raise ValueError(voting)


def eval_one_class(top_dir, target_class, *args, **kwargs):
    total = 0
    correct = 0
    for clip_json_dir in tqdm([p for p in pathlib.Path(top_dir).glob("*") if p.is_dir()]):
        total += 1
        prediction = infer_one_clip(clip_json_dir, *args, **kwargs)
        pred_cls, pred_score = prediction[0]
        correct += int(pred_cls == target_class)
        print(clip_json_dir.name, pred_cls, pred_score, pred_cls == target_class)
    print("Total", total, "Correct", correct, "Accuracy", correct / total)


def eval_one_class_exist(top_dir, target_class, *args, **kwargs):
    total = 0
    exist = 0
    for clip_json_dir in tqdm([p for p in pathlib.Path(top_dir).glob("*") if p.is_dir()]):
        total += 1
        results = multi_predict_clip(clip_json_dir)
        ordered_labels = dict(vote_majority(results))
        good = ordered_labels.get(target_class, 0) > 0
        exist += int(good)
        print(clip_json_dir.name, good)
    print("Total", total, "Exist", exist, "Exist rate", exist / total)


if __name__ == "__main__":
    fire.Fire()
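The core of the majority-voting scheme above — count how often each class wins a prediction window, then rank by count — can be sketched with the standard library alone (the `majority_vote` name and sample labels are illustrative):

```python
from collections import Counter

def majority_vote(top1_labels):
    """Order candidate labels by how often they won a prediction window."""
    counts = Counter(top1_labels)
    return counts.most_common()

# three 32-frame windows predicted "wave", one predicted "clap"
print(majority_vote(["wave", "wave", "clap", "wave"]))
# → [('wave', 3), ('clap', 1)]
```

The numpy version in the script additionally filters out windows whose top-1 score falls below `score_thresh`, so low-confidence windows never cast a vote.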
This is a post I've been waiting to write for quite a while – but it had to wait until R3 became available.

To be able to measure the impact of any changes, I built a very simple test harness to exercise the SysExtension framework: a two-level-deep class hierarchy and one attribute that decorated the sub-class. I then compiled everything to IL and wrote a small job to measure how many class instances I could spin up per second. This velocity measurement was around 3,400 classes/second.

Debugging through the code, I quickly learned that a lot of things were going on. This included creating a key for the attribute for various caches. This key was constructed via reflection on the attribute class. I avoided using reflection by introducing a new interface (SysExtensionIAttribute), and I fixed a number of other minor issues. Now the velocity jumped to 40,000 classes/second.

Is this an acceptable velocity? Well, how fast can it possibly be? The logic is in essence just creating a class via reflection, so I did a measurement of DictClass.makeObject(). This could give me 84,000 classes/second – slightly more than double my current implementation. After some investigation I discovered two expensive calls: "new DictClass()" and "dictClass.makeObject()". Can you spot what they have in common?
They both require a call into the native AOS libraries – in other words, an interop call. I tried various other calls into the AOS, such as "TTSBegin" (only the first one hits the DB) and "CustParameters::Find()" (again, only the first one hits the DB). To my surprise, the velocity of these calls was comparable to DictClass.makeObject(). The interop overhead outweighs what the method is actually doing. In other words, there is a limit to how many native AOS methods you can call per second. Let us call this velocity: speed-of-sound. Being a bit intrigued, I measured the fastest and rawest possible implementation: "new MyClass()". This would run strictly in IL, with no overhead of any kind; the result was a whopping 23,800,000 classes/second. Let us call this velocity: speed-of-light. In the words of Barney Stinson: "Challenge accepted!" To achieve this kind of velocity the code must run 100% as IL. No calls into native AOS code. Period. Naturally there are APIs in .NET allowing for dynamic creation of class instances – they are slower than a direct instantiation, but still much faster than calling native AOS code. One other challenge was that SysExtension can also execute as pCode, and then a call into IL would cause a similarly slow interop – just in the opposite direction. After a few iterations I had an implementation that would not cause interop calls, regardless of whether the code runs as IL or pCode. Take a look at SysExtensionAppClassFactory.getClassFromSysExtAttribute in R3 for details. I was pleased with the velocity: 661,000 classes/second. That is about 200 times faster than R2, or about 15 times faster than a call to CustParameters::Find(). Problem solved: the SysExtension framework no longer has performance issues. For a long time we have been hunting for SQL and RPC calls when looking for performance. RPC calls are expensive, as communication between two components (Client and Server) occurs.
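The kind of micro-benchmark described above is easy to reproduce in any language. Here is a minimal Python sketch (illustrative only – the original harness was X++ running on an AOS) comparing direct instantiation with a lookup-based, reflection-style creation:

```python
import time

class MyClass:
    pass

def measure(make, seconds=0.2):
    """Return roughly how many times make() can be called per second."""
    count = 0
    deadline = time.perf_counter() + seconds
    while time.perf_counter() < deadline:
        make()
        count += 1
    return count / seconds

# direct instantiation: the "speed-of-light" baseline
direct = measure(lambda: MyClass())

# name-based lookup before instantiation: a stand-in for reflection
via_name = measure(lambda: globals()['MyClass']())

print(f"direct: {direct:,.0f}/s, via lookup: {via_name:,.0f}/s")
```

Absolute numbers depend entirely on the machine and runtime; the point is the methodology – time a tight loop of instantiations and compare velocities between creation strategies, exactly as the post does for X++.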
Just like SQL calls are expensive, as the Server communicates with SQL (and waits for the reply). We still need to hunt for unnecessary RPC and SQL calls! Nothing changed; except that a third culprit has been identified: native AOS calls. Relatively speaking, the cost of calls into native AOS code is insignificant when compared to RPC or SQL calls – but in the absence of those, the impact is measurable and significant. This is an ERP system, so there will always be SQL calls. So why be concerned with the performance of X++ code? Well, if you can minimize the time between SQL calls, then you will also limit the time SQL holds locks, and you will experience better overall performance and scalability. After all, do you want your code to run with the speed-of-sound or the speed-of-light? Update 11-05-2014: Here is the test harness I used: PrivateProject_SysExpProject.xpo
2021 Research Review / DAY 2

Towards Incremental and Compositionally Verifiable Security for CHIC-Centric Cyber-Physical Systems

DoD cyber-physical systems (CPS) employ commodity heterogeneous interconnected computing (CHIC) platforms and associated software stacks (e.g., ARM/Linux) to deliver capabilities at the speed of relevance [Osborn 2020; Krazit 2019; Keller 2019; Villarreal 2019]. However, the DoD faces a challenge achieving assurance in CHIC-centric CPS implementation security, because such systems employ multiple hardware platforms and multiple large, layered software stacks. What's more, these systems are frequently produced by disparate developers. A recent U.S. Government Accountability Office (GAO) report highlights security issues in CHIC-centric CPS implementations [GAO 2018].

In this project, we draw from our published broad vision and strategy [Vasudevan 2020]. We explore the viability of provable, cost-effective, and innocuous (applicable to existing software while preserving existing functionality, in the sense of NASA innocuity [Halloway 2019]) CHIC-centric CPS implementation security. Our solution focuses on development-compatible, implementation-level, protected, and verifiable execution building blocks that retrofit with existing code, incrementally, at a fine granularity, with composability across multiple CHIC stack implementation layers.

Our scope in this project is the design, implementation, and verification of a critical execution path for CPS: secure on-platform sensor access that protects the integrity of the existing CPS application and sensor hardware/driver, with trusted control and data paths between them.
There are three high-level pieces to our approach (see Figure 1):
- Interface-confined implementation-level object abstractions (überobjects or üobjects): implementation-level building blocks that form fine-grained monitors around a system-level resource (e.g., data memory and I/O area) towards a security property
- Runtime-protected sets of üobjects (üobject collections): a set of üobjects within a given address space at runtime, bootstrapped by a platform root-of-trust entity that endows memory protection and secure call routings
- An implementation-level assume-guarantee reasoning framework that allows us to formally reason about interleaved executions of üobjects in the presence of unverified (and unavoidable) legacy components [Vasudevan 2016]

Among the planned outputs of this project is a demonstration of our approach on an off-the-shelf rover CPS platform with secure sensor-access protection via üobjects that provides immunity against an entire class of memory-integrity attacks. This will serve to showcase the viability of our approach to the DoD and DoD industrial establishments. We will also open source our associated prototype artifacts, code, and documentation (e.g., release via GitHub). This will enable the DoD and DoD industrial establishments to start experimenting with üobjects within relevant application domains.

This FY2021 project
- aligns with the CMU SEI technical objective to bring capabilities that make new missions possible or improve the likelihood of success of existing ones
- aligns with the CMU SEI technical objective to be trustworthy in construction and implementation, and resilient in the face of operational uncertainties, including known and yet unseen adversary capabilities

Mentioned in this Article

United States Government Accountability Office. Weapon Systems Cybersecurity: DoD Just Beginning to Grapple with Scale of Vulnerabilities. October 2018. https://www.gao.gov/assets/700/694913.pdf

Halloway, Michael C.
Understanding the Overarching Properties. NASA/TM–2019–220292. National Aeronautics and Space Administration. July 1, 2019. https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20190029284.pdf

Keller, John. Navy Chooses Open-Architecture Shipboard Computers and Self-Defense Systems. Military & Aerospace Electronics. January 16, 2019. https://www.militaryaerospace.com/computers/article/16722033/navy-chooses-openarchitecture-watercooled-shipboard-computers-from-gts-for-sewip-and-self-defense-systems

Krazit, Tom. How the U.S. Air Force Deployed Kubernetes and Istio on an F-16 in 45 Days. The New Stack. December 24, 2019. https://thenewstack.io/how-the-u-s-air-force-deployed-kubernetes-and-istio-on-an-f-16-in-45-days/

Osborn, Kris. New Air Force B-21 Stealth Bomber Takes Key Technology Step Toward War Readiness. Fox News. June 2, 2020. https://www.foxnews.com/tech/new-air-force-b-21-stealth-bomber-takes-key-technology-step-toward-war-readiness

Vasudevan, Amit; Chaki, Sagar; Maniatis, Petros; Jia, Limin; & Datta, Anupam. überSpark: Enforcing Verifiable Object Abstractions for Automated Compositional Security Analysis of a Hypervisor. Pages 87-104. In Proceedings of the USENIX Security Symposium. Austin, Texas. August 2016.

Vasudevan, Amit; Maniatis, Petros; & Martins, Ruben. überSpark: Practical, Provable, End-to-End Guarantees on Commodity Heterogeneous Interconnected Computing Platforms. ACM SIGOPS Operating Systems Review Journal – Special Issue on Formal Methods & Verification. Volume 54. Number 1. July 2020. Pages 8-22. https://doi.org/10.1145/3421473.3421476

Villarreal, Jennifer. GE Aviation and Auterion Provide All-in-One Hardware and Software Platform for Commercial Drones. GE Aviation. https://www.geaviation.com/press-release/systems/ge-aviation-and-auterion-team-provide-all-one-hardware-and-software-platform
Does time-boxing work? Like many things, there's no clear-cut "yes" or "no" answer here. It all depends on how you apply it. It may work for some, while not for others. It works for me. First, let's back up a few steps.

What is time-boxing? The term "time-boxing" refers to the practice of allocating a "box" of time to something, and sticking to that without fail. The idea is to stick to this as if it's the law, and there would be serious penalties if broken (sometimes this is actually the case!).

Who needs it, and why? Anyone with time management issues can possibly make use of time-boxing. It's not just about that, though. It also has to do with managing what you do with that time, and accepting the fact that sacrificing things sometimes is ultimately the answer.

To relate this to learning Japanese, I'm sure a lot of people are wondering exactly how I manage to fit it into my daily schedule. For starters, I have a full-time job, at 40 hours a week (sometimes more), with an hour drive each way to and from work. I also have a house to take care of (repairs, maintenance, cleaning, cooking, etc.), which is a daily effort. I also have animals who require a metric ton of attention. This begs the question - how do I fit Japanese in?

Quite simply, I allocate time for it every day. Specifically, I give it an hour a day for Kanji studies alone. And every day, my study time is exactly an hour, no more, no less. The key is not to set goals in terms of "I have to get x, y and z done by...", but more along the lines of "I'm going to work at this for 30 minutes to put a dent in it, and come back to it later". Simply put, it'll be there tomorrow, and the next day.

Time-boxing is applying this concept to everything (within reason) in life. At work, at home, etc. For example, in the morning I allocate 45 minutes to walk the dog, get a shower and eat breakfast. I don't go over that time frame. Sometimes I'll wind up getting done early, in which case I might pick up another Kanji or two.
If I'm running late, I'll grab breakfast as I go out the door to work. However, I try to keep that as rare as possible. Obviously there are things that come up in life that you can't control, and that's okay. Just work with it the best you can. Time-boxing is only as effective as you make it, in your situation.

Okay, so what if I HAVE to be done with something, but go over my allotted time? Honestly, it can happen. But try some various things to work around it or prevent it (or at the very least reduce how much you go over). For example, when I have a coding project that's due, the due date isn't going to change because it took me longer to program something. It's still due when it's due. Sometimes features or "extras" have to be cut to meet the deadline, and that's okay.

Is there any advice for avoiding this? Some, yes. Let's face it - sometimes running over schedule is out of our control, and there's nothing that can be done about it. This should be rare. Some ideas to consider to avoid this:
- Think ahead of time how much time it might take to complete a task. Compare that to the deadline, and cut things if you have to ahead of time. Sometimes it helps to come up with a detailed checklist ahead of time, and check items off as you go.
- Fix problems as you go. If you run into an issue early on, fix it right then. That issue might come back and bite you later if you don't (and probably on a larger scale). Procrastinating on fixing problems can make them take 10 or more times longer later on than if you'd just fixed them when you found them.
- If you can't figure something out in a reasonable amount of time, ask for help. The internet is a vast wealth of knowledge and people willing to help folks with whatever it is they are working on. Use that to your advantage, and ask. There's no such thing as a stupid question, except for those which aren't asked.
- Monitor your time usage as you go.
This is invaluable, as it will help you to forecast your progress down the road, closer to completion. You can use it to cut certain features. Don't forget that something you cut can always be added later if you have time.
- Don't make excuses for running over. Especially when you first start, if it happens, beat yourself up a little over it. Not a lot, but a little, to instill in your mind that it's not a direction you want to continue in. If you run over, accept it, but don't do it again next time.

Basically, always aim to hit your goal on target, or even beforehand - but not so much beforehand that you miss things you could have fixed, done or added. If you use time-boxing strictly, as if it's the law you'd be breaking by going over, your efficiency will skyrocket.

I should add also that I add Japanese to everything I do if I can. I listen to it while I work, walk the dog, etc. When I'm driving home, I have Japanese radio playing and I repeat Japanese sentences as I hear them, even if I don't understand them. So, while I generally get about 10+ hours a day immersed in Japanese in some form, I'm guaranteed 1 hour per day by time-boxing my schedule.

There's one more point to make. Don't forget to take breaks. Time-boxed breaks. Your mind and body need rest, so be sure to allow for it. Just make sure you time-box it as well. So many people forget to do this, and not taking breaks will lead to getting "burned out", thus reducing productivity vastly.
TVL1 Models for Imaging: Global Optimization & Geometric Properties, Part I.

Tony F. Chan, Math Dept, UCLA. S. Esedoglu, Math Dept, Univ. Michigan. Other collaborators: J.F. Aujol & M. Nikolova (ENS Cachan), F. Park, X. Bresson (UCLA). Research supported by NSF, ONR, and NIH. Research group: www.math.ucla.edu/~imagers

* First proposed by Rudin-Osher-Fatemi '92. * Allows for edge capturing (discontinuities along curves). * TVD schemes popular for shock capturing.

Rudin-Osher-Fatemi: minimize, for a given image f,

E_2(u, \lambda) = \int |\nabla u| \, dx + \lambda \int (u - f)^2 \, dx.

The model is convex, with a unique global minimizer. The TV-L1 model replaces the squared fidelity term with an L1 term:

E_1(u, \lambda) = \int |\nabla u| \, dx + \lambda \int |u - f| \, dx.

This model is non-strictly convex; the global minimizer is not unique. Discrete versions previously studied by Alliney '96 in 1-D, Nikolova '02 in higher dimensions, and E. Cheon, A. Paranjpye, and L. Vese '02.

Is this a big deal? Other successful uses of L1: robust statistics; l1 as convexification of l0 (Donoho); TV wavelet inpainting (C-Shen-Zhou); compressive sensing (Candes, Donoho, Romberg, Tao).

Contrast invariance: if u(x) is the solution for a given image f(x), then cu(x) is the solution for cf(x).

Contrast & geometry preservation: let f = 1_\Omega, where \Omega is a bounded domain with smooth boundary. Then, for \lambda large enough, the unique minimizer of E_1(\cdot, \lambda) is exactly f. The model recovers such images exactly. Not true for standard ROF. (Other method to recover contrast loss: Bregman iteration (Osher et al).)

"Scale-space" generated by the original ROF model: plots of the fidelity versus \lambda show that discontinuities of the fidelity correspond to removal of a feature (one of the squares). (Related: Tadmor, Nezzar, Vese '03; Kunisch-Scherzer '03.)

TVL1 decomposition gives well separated & contrast-preserving features at different scales. E.g. boat masts and the foreground boat appear mostly in only one scale.
Motivating Problem: Denoising of Binary Images

Given a binary observed image, find a denoised (regularized) version. Applications: denoising of fax documents (Osher, Kang); understanding many important image models: ROF, Mumford-Shah, Chan-Vese, etc.

Take f(x) = 1_\Sigma(x) and restrict the minimization to the set of binary images. Considered previously by Osher & Kang, Osher & Vese. This is equivalent to the following non-convex geometry problem: minimize the perimeter of S plus \lambda times |S \triangle \Sigma|, where S_1 \triangle S_2 denotes the symmetric difference of the sets S_1 and S_2. A solution exists for any bounded measurable \Sigma. The global minimizer is not unique in general, and many local minimizers are possible. [Figure: illustration of how algorithms get stuck in local minimizers.]

To find a solution (i.e. a global minimizer) u(x) of the non-convex variational problem (same as ROF for binary images), it is sufficient to carry out the following steps: solve the convex TV-L1 problem, allowing u(x) to take intermediate values, and then threshold. For almost every choice of \mu in (0,1), the binary function 1_{\{u > \mu\}} is then a global minimizer of the original non-convex problem. For each upper level set of u(x), we have the same geometry problem.

[Figure: function values agree where u is binary; the intermediates of the evolution are non-binary, even though the solution u is binary.] The convex TV-L1 model opens up new pathways to the global minimizer in the energy landscape.

The integrand depends on x explicitly, and not only on the super level sets of f, so these terms are not purely geometric: we are solving different geometric problems at different levels.

Chan-Vese Model (2001): a simplified Mumford-Shah model; the best approximation of f(x) by two-valued functions. Similar arguments as for shape denoising show that the variational CV segmentation model is equivalent to a relaxed formulation.

Theorem: if (c1, c2, u(x)) is a solution of the above formulation, then for a.e. \mu in (0,1) the triplet (c1, c2, 1_{\{u > \mu\}}) is a global minimizer of the Chan-Vese model.

UPSHOT: for fixed c1, c2 the inner minimization (i.e. the shape optimization) in our formulation is convex.
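The two-step global-minimization recipe sketched above can be written out explicitly. This is a reconstruction of the standard statement (Chan, Esedoglu, Nikolova), not copied verbatim from the slides:

```latex
% Step 1 -- convex relaxation: minimize over functions u with values in [0,1]
\min_{0 \le u \le 1} \; \int_D |\nabla u| \, dx \;+\; \lambda \int_D |u - f| \, dx

% Step 2 -- thresholding: for almost every \mu \in (0,1), the upper level set
%   \Sigma(\mu) = \{ x \in D : u(x) > \mu \}
% gives a characteristic function 1_{\Sigma(\mu)} that is a global minimizer
% of the non-convex binary problem
\min_{\Sigma} \; \mathrm{Per}(\Sigma) \;+\; \lambda \, \bigl|\Sigma \,\triangle\, \Sigma_{\mathrm{obs}}\bigr|
```

The key point is that Step 1 is convex, so any local method reaches its global minimizer, and Step 2 converts that minimizer back into a binary solution of the original geometric problem.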
Also, the constraint on u can be incorporated via an exact penalty formulation into an unconstrained optimization problem, with a pointwise penalty term z(\cdot) added to the energy. It turns out that, for a penalty parameter large enough, the minimizer u satisfies u(x) in [0,1] for all x in D. One can then solve via gradient descent on the Euler-Lagrange equation.

Continuous max-flow (CMF): F and f are the sources and sinks, and w is the capacity constraint. If C is a cut, then the CMF is given by minimizing the isoperimetric ratio. The CMF is a conservation flow (Kirchhoff's law: flow in = flow out). [AT] proposed to solve the CMF by solving a system of PDEs. We may notice that these PDEs come from an energy involving a weighted TV norm. Optimizing the CEN model corresponds to solving the CMF problem: the CMF problem can be solved by the system of PDEs, which comes from an energy which is exactly the CEN energy!
Why does my paperjs app take so much CPU power and slow down?

I have this Sketch. You can pilot the ship by using up to accelerate, and left and right to rotate the ship's orientation. (Although it doesn't always work on Sketch due to the inability to blur from the editor, and so keypress events don't register in the canvas window.) Each frame it adds the point at the ship's current centroid to the full ship track. This all works fine and dandy. Problems set in after about 2 minutes or so - depending on your computer - of flying. Gradually the whole thing starts slowing down and the frame rate drops to visible levels. Initially I thought this was because each point needs to be stored in RAM and there are too many of them, but the tab's memory doesn't seem to go up noticeably. CPU usage does seem to rapidly rise to ~20% and generally stays there. Does anyone have an explanation or fix for this?

I don't understand fully what you're doing, but it appears that you're creating a new entity/line/graphic per frame tick to draw the line behind the ship. Is that correct? Are you adding a display entity per line segment? Are you able to just draw to an existing layer? Usually when I see something like this (e.g. blood splatter on the floor from a killed enemy in a 2d top-down shooter), they paint additional items to an existing canvas, not adding new display entities. I'm just throwing ideas out there btw. Have you used paperjs before?

In paperjs you can create a path and then just add points to it, and it will display the entire path with all its points. They're not separate paths though, just points joined together. Paperjs takes care of rendering frames so that you don't need to worry about animating movement. You can just specify what should change in each frame without redrawing the whole scene.
I do create a new path, but only when the ship goes outside the window's boundaries; otherwise there would be lines from the top of the screen where it disappeared to the bottom where it reappeared.

No, never used paper.js before, but I've done plenty of game programming to get the gist of the API. Their API docs are really nice. I may need to check this lib out for a game I'm working on.

Yeah the paperjs library is really really sweet. Very easy to use!

Ok, I did a little reading up on this. You're continuously adding to your ship.path in your frame tick callback. Try limiting the maximum size of your path like so:

var MAX_SEGMENTS = 1000;

function onFrame(event) {
    ship.path.add(centroid(ship.shipPath));
    // trim the oldest segments once the path grows past the cap
    if (ship.path.segments.length > MAX_SEGMENTS) {
        var d = ship.path.segments.length - MAX_SEGMENTS;
        ship.path.removeSegments(0, d);
    }
}

Updated sketch. Just noticed this doesn't work if you're going off-screen and then moving to the opposite position. That will be something you need to work into your logic. In game development, you gotta watch your resources well. Unless you're painting to a canvas where there is basically a set amount of pixels and you're just coloring them, using a display-list type approach can get you in trouble if you're not careful.

Yeah I just noticed that too. That I could work around. However, the main problem now seems to be that the tab just crashes completely and the console says Uncaught TypeError: Cannot set property '_path' of undefined

You can also use path.simplify(tolerance) to cut down the number of segments. If you simplify the path every 50 frames or so then add those points to a larger simplified path, you can get a much more memory-efficient segment array.

So I did consider that as an option - but memory doesn't seem to be the problem. The tab's memory doesn't go up at all. So the problem must be something else... no?
Well, the memory used is negligible, but the number of drawing commands to the canvas context is going up with every frame. I think that drawing a couple thousand line segments is less efficient than drawing a few curves, but I can't say I've run the tests.

Ah, so you mean it redraws each line segment for each frame??

Yes, on each frame there is a complete redraw, unless you specify some other way.

Ok, that makes a lot of sense! Explains why memory stays small but CPU goes up and the frame rate drops.

Ok, I've reaccepted the answer. My problem of the tab crashing was because I had set MAX_SEGMENTS too low, and so sometimes too large a chunk of the path would get removed, meaning that there was no more path to be removed and so paperjs would throw a fit. Setting MAX_SEGMENTS to a decent length means that this won't happen.
to ISP or not to ISP?

I've got a design problem, and I figure it's not something totally uncommon, so there must be some good practices out there. I have 2 domain entities: Process and Task. A Process is essentially a list of Tasks. The functionality of a Process is to know what kinds of tasks it consists of (Task instances are created using a Factory when the process enters 'Started' status), and to evaluate its status based on the status of the included Tasks. A Task has a lot of functionality, but all the Process actually needs from it is its status, so I figured I should apply ISP here. For that purpose I created an interface TaskStatus, which is implemented in the base abstract Task class, and the Process holds a collection of TaskStatus objects. Does it make sense up to this point?

Now the problem is that a different component, let's call it TaskProcessor, gets the list of Tasks from a Process, and needs a different kind of access to the Task objects. How do I solve this? One way I figured out that would let me keep ISP is to move the list of Tasks out of the Process, into a global singleton TaskRepository. Both Process and TaskProcessor could query TaskRepository to get what they need without being dependent on any functionality they don't need. I don't particularly like this solution because of 3 things:
1. Generally I think that the fewer global singletons the better.
2. It moves the responsibility of holding a list of Tasks out of the Process, which might be a good thing from an SRP perspective, but I do believe that it actually belongs in the Process from a DDD perspective.
3. That's quite a lot of added complexity compared to just changing the type of the Task list in Process and being done with it.

You said a Process holds a list of associated Tasks. How does the Process distinguish between each task if it [the Process] only holds statuses?
It seems like the TaskStatus interface should extend an Identifiable interface; that way the tasks containing only status up to this point also contain the ids of the tasks, and those ids can then be used to fetch full-fledged task models by the TaskProcessor.

The process doesn't actually need to distinguish between its tasks. Anyway, if I understand correctly, you do recommend a central repository where tasks are stored and can be accessed by the TaskProcessor using an ID?

If the job of a Process is to provide Tasks to a TaskProcessor so that it can do Task-specific things with them (as opposed to merely TaskStatus things), then it does need more than just the status, and it should hold Task objects, not TaskStatus references. The question is whether it really is the job of a Process to feed data to a Processor, or whether the Processor should get its input from another place in the system - perhaps the same place the Process got it from.
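The interface split under discussion can be sketched in Python. The `TaskStatus`, `Task` and `Process` names follow the question; everything else (statuses as strings, the `execute` method) is illustrative only:

```python
from abc import ABC, abstractmethod

class TaskStatus(ABC):
    """The narrow interface the Process depends on (ISP)."""
    @abstractmethod
    def status(self) -> str: ...

class Task(TaskStatus):
    """Full task: rich behaviour, but also exposes the TaskStatus view."""
    def __init__(self, name: str):
        self.name = name
        self._status = "pending"

    def status(self) -> str:
        return self._status

    def execute(self) -> None:
        # Task-specific behaviour a TaskProcessor would use
        self._status = "done"

class Process:
    """Holds tasks, but only ever uses the TaskStatus view of them."""
    def __init__(self, statuses: list[TaskStatus]):
        self._statuses = statuses

    def is_complete(self) -> bool:
        return all(s.status() == "done" for s in self._statuses)

tasks = [Task("a"), Task("b")]
process = Process(tasks)          # Task objects pass as TaskStatus references
print(process.is_complete())     # False
for t in tasks:                   # the part a TaskProcessor would do
    t.execute()
print(process.is_complete())     # True
```

This illustrates the tension in the thread: the Process type-checks against only `TaskStatus`, yet the same concrete objects must reach the TaskProcessor as full `Task` instances — either via a shared repository or because the Process hands out more than its own interface needs.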
When you attempt to run a technical audit on your website, you might find it fights back. The good news is that the most common server errors are easy to fix with a bit of knowledge and patience. Here is a list of some of the most prevalent server errors that people encounter and what measures you can take to fix them.

HTTP errors are messages sent by the server that show something has gone wrong; broadly speaking, these are all the errors that start with 4 and 5.

You see a 401 Error (Unauthorized) when you’re not authorized to access a page. You need valid login details to stop seeing this message.

A 403 Error (Forbidden) is similar to the previous one, except you might have valid login details but still aren’t authorized to access a page. The website admin needs to update your credentials.

A 404 error message appears when you try to access a webpage that doesn’t exist. This might happen because you’re entering the wrong URL, the link is broken, or a redirected page has become invalid. You can fix this error by entering broken-link redirects in the platform your website runs on or by reviewing crawl errors in Google’s Search Console. If this doesn’t sound like something you want to struggle with, ask your web developers to add the redirects.

The 504 error, the so-called gateway timeout, is another prevalent one. It occurs when a server tries to load a page but doesn’t get a timely response from another server. There are a few ways to fix a 504 error. You can run DBManager or another WordPress plugin (if your site is on WP) if a corrupted database is the reason; the plugin will repair and optimize your database. Alternatively, you could have a problem with your .htaccess file.

The 500 Internal Server Error can come from a permissions error, a coding error, or a PHP timeout. Incorrect permissions on one or more folders or files are the most frequent cause; usually, the wrong permissions on a CGI or PHP script are the reason.
An error in .htaccess is less common, but it’s worth checking your site’s .htaccess file. If external resources are connected to your script, and they time out, an HTTP 500 error is the result. This error doesn’t always have the same wording. It might appear as any one of the following: A 500 Internal Server Error will appear in any operating system and browser because the website you’re trying to access generates it. You can troubleshoot the exact cause through your WordPress or whichever content management system you are using. Follow the instructions to diagnose and fix the issue. Your contact forms should be brief and to the point so your visitors aren’t scared away. Avoid asking too many questions because it can drive away potential leads. Only very basic information is needed, like a name, company name, and email, because the purpose of contact forms (also known as conversion forms) is to collect the minimal data needed to qualify leads. You should check your website analytics if you’re getting far fewer website conversions than usual. There might be a broken link, or your site is loading more slowly, and people are navigating away from it. Look at the accounts in your search console or analytics – they’ll reveal the cause right away.
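As a side note, the 4xx/5xx split described above is easy to express in code. A rough Python sketch (the function name and bucket labels are mine, not part of any standard library):

```python
def classify_http_status(code: int) -> str:
    """Bucket an HTTP status code the way the article does:
    4xx = client-side problem, 5xx = server-side problem."""
    if 400 <= code < 500:
        return "client error"   # e.g. 401 Unauthorized, 403 Forbidden, 404 Not Found
    if 500 <= code < 600:
        return "server error"   # e.g. 500 Internal Server Error, 504 Gateway Timeout
    if 300 <= code < 400:
        return "redirect"       # e.g. 301 Moved Permanently, used to fix dead links
    return "success or informational"
```

Running your audit tool's results through a bucketing step like this makes it easy to separate "fix your links" problems from "fix your server" problems.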
It was a typical, sunny day in the land of Wojo. In her office, our heroine read through some notes on her desk, preparing to write a tale. The townspeople, also known as her two dogs, were resting quietly at her feet. It was a good day. Until it arrived… ZOOOOOOOMMMM… FLAP, FLAP, FLAP!

Out of seemingly nowhere came a moth. But this wasn’t just any moth. With a wingspan of about eight feet, this enormous moth flew past our heroine’s face, startling her. Let’s go to Wojo now for a firsthand account:

Like they said, I was just minding my own business, when suddenly the biggest moth in the history of humankind flew out – from where, I still don’t know – and zoomed like a shot right past my face. I know they said it had an eight-foot wingspan, but I’m guessing it was more like 7,264 feet. My husband said that something like that couldn’t fit in our house. Mr. Rational. But he wasn’t here when it first made an appearance, so I’m going with my gut on this one.

First – let me say that, by and large, I’m not afraid of bugs. Unless they sting. If they are stinging insects, then I am afraid of them. Oh, and camel crickets. Do you know what they are? They are terrible. Take a look here – eek! I get the chills just looking at them. We get them in our basement in the summer. And they jump – like they’re jumping right out of a horror movie and into my face!

Dang – there’s one other kind of insect I’m scared of: huge ones. Like the ones that people post on social media. While I have friends who live in Australia, they never seem to encounter bugs of this size. Yet most of these extra-large bugs are said to come from there. And, well, they scare the bejeezus out of me. I won’t be heading to Australia any time soon. Shudder…

So, I guess I am scared of some insects. But I’ve never been frightened of a moth. Until now… Because this moth was tricky. It would fly down into my face, just enough for me to freak out and scream. Then, while I was running around, waving my arms, and shrieking for my husband, it simply disappeared. Seriously – it was like it went “poof” and just wasn’t there anymore. My husband kept coming in to grab it for me (we try as much as possible to “catch-and-release” the bugs in our home. Unless they sting… or are camel crickets… or are huge). Every single time he set foot in my office, Mothra, as I was calling it, disappeared. I was being gaslighted by a moth.

Why couldn’t it be dumb like other moths and fly around the lights on my ceiling fan? Why couldn’t it go toward the light of the open window? Why couldn’t it fly near my computer screen? Anywhere where we could see it? Because Mothra was out to get me. Here’s how it went: Mothra flies into my face. I scream. I run out of my office. My husband comes in. Mothra is gone. Repeat about a dozen times.

As for my dogs – they were excited that something was apparently going on, as I kept jumping in and out of my chair and running out of my office, then slinking back in. But it’s not like they were going to protect me. They certainly weren’t hunting for the biggest moth in the universe.

After about thirty minutes of this – Mothra swooping, my husband searching, nothing, over and over again – Mothra made a mistake. It landed right on my desk. I didn’t think I was a killer. But I raised a book in my right hand slowly, then brought it down with a crash onto the desk.

I didn’t want to pick it up. I felt bad. Figured it had a moth family that would miss him if he didn’t return home that evening. Little moth children so distraught that daddy never came back. I slowly lifted the book. Mothra wasn’t there.

My husband came in a few minutes later. “Hey, you know that moth you couldn’t find? It flew out into the hall and landed on the wall. I caught it and threw it out the front door.”

I don’t know how my husband fit a moth with an eight-foot wingspan into his hands. But I didn’t want to ask questions. I just knew that he had made a mistake. Mothra was out there. It would be back. But next time, I will be ready.

Michele “Wojo” Wojciechowski, when she’s not freaking out that Mothra is going to return to her office and get stuck in her hair, writes “Wojo’s World®” from Baltimore. She’s also the author of the award-winning book Next Time I Move, They’ll Carry Me Out in a Box. You can connect with Wojo on or on. Did you know that Wojo has a newsletter? It’s filled with fun stories, facts, and contests. And she won’t spam you because she doesn’t know how, and it’s bad karma. Email her at email@example.com to subscribe.

More Wojo’s World: Removing Piles of Paper · A Cut Above the Rest · The Write Job, and the Wrong Ones
GPGPU Programming for Games and Science (English) Hardcover – October 1, 2014

Publisher's description:

An In-Depth, Practical Guide to GPGPU Programming Using Direct3D 11

GPGPU Programming for Games and Science demonstrates how to achieve the following requirements to tackle practical problems in computer science and software engineering:
- Quality source code that is easily maintained, reusable, and readable

The book primarily addresses programming on a graphics processing unit (GPU) while covering some material also relevant to programming on a central processing unit (CPU). It discusses many concepts of general purpose GPU (GPGPU) programming and presents practical examples in game programming and scientific programming. The author first describes numerical issues that arise when computing with floating-point arithmetic, including making trade-offs among robustness, accuracy, and speed. He then shows how single instruction multiple data (SIMD) extensions work on CPUs, since GPUs also use SIMD. The core of the book focuses on the GPU from the perspective of Direct3D 11 (D3D11) and the High Level Shading Language (HLSL), covering drawing 3D objects; vertex, geometry, pixel, and compute shaders; input and output resources for shaders; copying data between CPU and GPU; configuring two or more GPUs to act as one; and IEEE floating-point support on a GPU. The book goes on to explore practical matters of programming a GPU, including code sharing among applications and performing basic tasks on the GPU. Focusing on mathematics, it next discusses vector and matrix algebra, rotations and quaternions, and coordinate systems. The final chapter gives several sample GPGPU applications on relatively advanced topics. Available on a supporting website, the author's fully featured Geometric Tools Engine for computing and graphics saves you from having to write a large amount of infrastructure code necessary for even the simplest of applications involving shader programming. The engine provides robust and accurate source code with SIMD when appropriate and GPU versions of algorithms when possible.

Most helpful customer reviews on Amazon.com:

Chapter 1 is a brief overview of the subject matter. Chapter 2 deals primarily with finite binary encodings of the reals, with emphasis on the IEEE 754 floating-point standard. Much of this material could have been pared down or omitted. Even "Numerical Recipes in C", a book entirely about numerical methods, spends only a few pages on such low-level technicalities. Chapter 3 is a discussion of SIMD computing. It covers some useful techniques for avoiding branching, and places a heavy emphasis on polynomial approximations of common arithmetic and trigonometric functions. Eberly takes a principled approach using the Chebyshev equioscillation theorem (in particular, the Remez algorithm for computing minimax approximations). The second half of the chapter is somewhat redundant due to the presence of intrinsics in HLSL for all of the described functions. Chapter 4 is the first practical chapter in the book. It introduces the 3D graphics pipeline, with a brief discussion of coordinate spaces, projection, and rasterization.
A few trivial shaders are decompiled and analyzed, but this chapter is mostly a laundry list of the steps required to do useful work with DirectX 11. Everything you would expect to see is here: devices, contexts, swap chains, buffers, textures, states, shaders, and techniques for copying from CPU to GPU and vice-versa. A few, mostly trivial, examples are scattered throughout, but very little of this material is motivated. We're now halfway through the book, page-wise, and we haven't seen any practical compute shaders yet. A bit curious for a book with "GPGPU" in the title. Chapter 5 is a grab-bag of OOD, debugging, performance, and testing advice. There are a few useful tidbits here. Chapter 6 is yet another 90-page chapter with hardly any content relevant to GPGPU. We get coverage of the geometric and algebraic properties of vectors, matrices, and rotations, and a quite thorough discussion of coordinate space conventions, but it's hard to see how any of this relates to work one might be interested in doing on the GPU that isn't directly related to 3D rendering. Chapter 7 redeems the book somewhat. It contains a survey of GPGPU implementations of various problems in collision detection, physical system simulation, image processing, and level set extraction. These are all well-illustrated and lucidly explained. So, now for the verdict. I can't imagine an audience that will find this book indispensable. Chapters 2, 3, 5, and 6 could be condensed to about 20 pages total while retaining most of their value. The content of Chapter 4 is better covered in a book expressly on DX11 and HLSL, such as Varcholik's. The actual GPGPU examples are worth studying on their own, but comprise so little of the book's contents. The GPU Gems and GPU Pro series have roughly the same proportion of GPGPU content in each volume, and the techniques are generally self-contained and lavishly illustrated.
A final note: Eberly's codebase (GTEngine) is currently implemented only in DirectX, and thus is of limited utility to non-Windows users. By the time it is ported to OpenGL and GLSL, it will probably have undergone architectural shifts (as did every version of Eberly's Wild Magic / Geometric Tools codebase when he was writing his 3DGEA and 3DGED texts). Nevertheless, it does make for interesting reading. I’m a tough critic. I’m also an unpaid one: I spent a few hours with this book, but certainly did not read it cover to cover (though I hope to find the time to do so with this one for topics I know nothing about). This book is tangentially related to computer graphics, but I mention it here anyway. Unlike most books about GPGPU programming, this one does not use CUDA, but rather uses DirectX’s DirectCompute. I can’t fairly assess this book, as I still haven’t taken on GPGPU. While the book is ostensibly about GPU programming, computer graphics sneaks in here and there, and that I can comment on. Chapter 4, called “GPU Computing”, is the heart of the book. However, it spends the first part talking about vertex, pixel, and geometry shaders, rasterization, perspective projection, etc. Presenting this architecture is meant as an example of how parallelism is used within the GPU. However, this intent seems to get a bit sidetracked, with the transformation matrix stack taking up the first 8 pages. While important, this set of transforms is not all that related to parallelism beyond “SIMD can be used to evaluate dot products”. For most general GPGPU problems you won’t need to know about rendering matrices. 8 pages is not enough to teach the subject, and in an intermediate text this area could have been left out as a given. Chapter 6, “Linear and Affine Algebra”, is an 84 page standalone chapter on this topic. It starts out talking about template classes for this area, then plows through the theory in this field. 
While an important area for some applications, this chapter sticks out as fairly unrelated to the rest of the chapters. The author clearly loves the topic, but this much coverage (a fifth of the book) does not serve the reader well for the topic at hand. I was strongly reminded of the quote, “In writing, you must kill all your darlings”. You have to be willing to edit out irrelevant pieces, no matter how sound and how much you love them. The author notes in the introduction, “I doubt I could write a book without mathematics, so I included chapter 6 about vector and matrix algebra.” The nature of the physical book market is “make it thick” so that it looks definitive. Putting tangential content into a book does the customer who is paying and spending time to learn about GPGPU programming a disservice. I don’t blame the author in particular, nor even the publisher. Most technical books have no real editors assigned to them, “real” in the sense of someone asking hard questions such as, “can this section of the book be trimmed back?” We have to self-edit, and we all have our blind spots. Overall I’m a bit apprehensive about truly reading this book to learn about GPGPU programming. I had hoped that it would be a solid guide, but its organization concerns me. It seems to go a few different directions, not having a clear “here’s what I’m going to cover and here’s what you’re going to learn” feel to it. A lot of time is spent with groundwork such as floating point rounding rules, basic SIMD, etc. – it’s not until 123 pages in that the GPU is mentioned. The book feels more like a collection of articles about various elements having to do with performing computations efficiently on various forms of hardware. That said, Chapter 7, “Sample Applications”, does offer a fairly wide range of computational tasks mapped to the GPU. It’s a chapter I’ll probably come back to if I need to implement these algorithms. 
The author is a well-respected veteran and I trust his code to be correct. He’s done wonderful work over the years in growing his Geometric Tools site – it’s a fantastic free resource (at one point I even tried to find external grants to support his work on the site - no luck there. A MacArthur Fellowship sent his way would be great). What might have made more sense is a focused, stripped-down book, half of chapter 4 and all of chapter 7, offered for $10 as an ebook treatise.

GPUs are used for both rendering and computation. The term GPGPU denotes general-purpose computation on the GPU, and that was what I expected from the book. Unfortunately, going through the TOC I realized that something is wrong with the title. Namely, the book has 429 pages and only 1/5 of them are devoted to the GPGPU topic!
Site2Site OpenVPN: tunnel routing won't work on one of two tunnels

I created the tunnel following this guide. We have several sites and a host. With some sites, we have to use IPsec, as the remote hardware (Fritz.Box) only does IPsec and is very slow at it. So I changed one location from IPsec on the Fritzbox to OpenVPN in OMV4. Works like a charm. I copied this to another location, no luck. Routing just won't work.

From the host, I can ping the VPN IP on the client, but not the network behind it. From the client's remote network, I can ping the VPN IP on the client, but not the network on the host.

Here is the network grid as a picture and as a description. The host (a Hetzner machine) runs a VM with pfSense.

- host with the network 10.100.111.0/24, with pfSense being 10.100.111.1
- loc1 (changed successfully from IPsec to OpenVPN) with the network 10.101.111.0/24, with the OpenVPN client being 10.101.111.11 (OMV4 Debian server, and static routes from the Fritz.Box to .11 for the host network, not the tunnel). The tunnel is 10.10.111.0/28, with the host being .1 and the OMV client being .2 (works great)
- loc2 with the network 10.102.111.0/24, with IPsec to the Fritz.Box (.1) (works great while being slow, but OK for current use)
- loc3 with the network 10.7.0.0/24, with IPsec to the Fritz.Box (.1), is the one I want to change from FB (IPsec) to APU (pfSense OpenVPN)

For this I have test hardware: a pcengines.ch APU1D4 (I know, no crypto acceleration, but still 40% more power than the Fritzbox), also running pfSense. I configured the FritzBox to have the APU (the one and only client on the 10.7.0.0 net) as an exposed host, full port forward, no joy. I want to have 10.103.111.0/24 as the local net, with every single piece of hardware behind the APU. I have no inter-site traffic, only site2host. No matter what I try on the host or the remote system, I can't get through. It is an exact copy of the loc1 settings, only the IP changed to the other tunnel and local net. Routing just won't work. I even set up 3 servers, to make sure the stuff won't get mixed up on the host. No joy.
I don't know what else I should try. I appreciate any advice.

Pictures from the server settings: 3 servers, because of the road warriors and the 2nd site. The routing just won't work. This is all client overrides for remote networks, site2site (site1 and site2). This is the client override setting for the remote network on site2site. This is the server3 setting for IP ranges and firewall rules.

Client side (follows, need to change the computer :-)): the tunnel is up and running (the host network can ping the IP of the tunnel client (10.11.111.2), but not the remote network). Firewall settings on the client.

Post the server1.conf from the server and client1.conf from the client.

Cleaned server.conf; please PM me if I missed something. And the client conf.

@mannebk my apologies, please post the .conf files located here -> /var/etc/openvpn. You can get to it via the shell or Diagnostics -> Edit File in the GUI. The server-side will have a serverX.conf and the client-side will have a clientX.conf file. If it's the first server or client that was configured on the box, they should be server1.conf and client1.conf. The .conf files are much easier to read than .xml (for me at least).
server2.conf:

dev ovpns2
verb 1
dev-type tun
dev-node /dev/tun2
writepid /var/run/openvpn_server2.pid
#user nobody
#group nobody
script-security 3
daemon
keepalive 10 60
ping-timer-rem
persist-tun
persist-key
proto udp4
cipher AES-128-CBC
auth SHA256
up /usr/local/sbin/ovpn-linkup
down /usr/local/sbin/ovpn-linkdown
local ip-deleted
tls-server
server 10.11.111.0 255.255.255.0
client-config-dir /var/etc/openvpn-csc/server2
ifconfig 10.11.111.1 10.11.111.2
tls-verify "/usr/local/sbin/ovpn_auth_verify tls 'bk2host-server' 1"
lport 1195
management /var/etc/openvpn/server2.sock unix
max-clients 4
push "route 10.100.111.0 255.255.255.0"
push "route 10.101.111.0 255.255.255.0"
push "route 10.102.111.0 255.255.255.0"
route 10.103.111.0 255.255.255.0
ca /var/etc/openvpn/server2.ca
cert /var/etc/openvpn/server2.cert
key /var/etc/openvpn/server2.key
dh /etc/dh-parameters.2048
tls-auth /var/etc/openvpn/server2.tls-auth 0
ncp-ciphers AES-128-GCM
compress
persist-remote-ip
float
topology subnet
sndbuf 1048576
rcvbuf 1048576

client1.conf:

dev ovpnc1
verb 1
dev-type tun
dev-node /dev/tun1
writepid /var/run/openvpn_client1.pid
#user nobody
#group nobody
script-security 3
daemon
keepalive 10 60
ping-timer-rem
persist-tun
persist-key
proto udp4
cipher AES-128-CBC
auth SHA256
up /usr/local/sbin/ovpn-linkup
down /usr/local/sbin/ovpn-linkdown
local 10.7.0.104
tls-client
client
lport 0
management /var/etc/openvpn/client1.sock unix
remote deleted 1195
ca /var/etc/openvpn/client1.ca
cert /var/etc/openvpn/client1.cert
key /var/etc/openvpn/client1.key
tls-auth /var/etc/openvpn/client1.tls-auth 1
ncp-disable
compress
resolv-retry infinite

On the server-side (if that's the right config), it looks like it's set up as a remote access server, which isn't what you want. You need to change the server mode to one of the Peer to Peer options and configure the server for either a shared key or PKI setup. On the client-side, the client is not routing any networks over the tunnel.
So, there appear to be several issues: - The server-side needs to be reconfigured for Peer to Peer mode - The client-side is not routing any networks over the tunnel. a. If the objective was shared key, here's one of your issues b. If the objective was PKI, the server-side will need iroute statements for the client's network(s) in the CSO section - The client override screenshot posted in your OP is missing an entry in the "IPv4 Remote Network/s", which will autogenerate the iroute statements needed for the server to reach the client's network behind this connection. Assuming you went with a PKI setup. - This is unlikely, but the client-side is double NAT'd behind an edge device, so if basic end-to-end IP communication still isn't working after making your corrections, it's possible that the client may need a static route on the edge device for the tunnel network.
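Assuming the PKI route from point 2b, the iroute ends up in a client-specific override file under the client-config-dir shown in the posted server config. A rough sketch (the file name matching the client certificate's common name is an assumption here; 10.103.111.0/24 is loc3's planned LAN from the post):

```
# /var/etc/openvpn-csc/server2/<client-cert-CN>
iroute 10.103.111.0 255.255.255.0
```

In the pfSense GUI, filling in "IPv4 Remote Network/s" on the Client Specific Override should generate exactly this; the matching `route 10.103.111.0 255.255.255.0` line is already present in the posted server2 config, so the iroute is the missing half of the pair.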
It’s been a long time since the last time I did any cleaning in the G-Loaded Forums. I use the forums for further discussion about the published content, since the comments are disabled after a period of time. During the last months the place had been left in the hands of bots. But this is no more. Below you will find information about all the actions I took in order to clean up the user accounts created by bots and prevent further automatic user registrations. The software I use is bbPress. In order to protect the forums from automatic user registrations, I installed the recaptcha plugin. This requires registration at recaptcha.net and the creation of a private/public key pair, but the procedure is straightforward, so I won’t go into the details. Second, but equally important, is the deletion of user accounts created automatically by bots. The characteristic of those accounts is that they have never posted any posts. All the advertising information existed in the user profile fields. After inspecting the database structure for a while, I deleted all the users with zero posts from the bb_users table using the following MySQL query:

DELETE FROM bb_users WHERE id IN (
    SELECT id FROM (
        SELECT DISTINCT(u.id)
        FROM bb_users AS u
        LEFT JOIN bb_posts AS p ON p.poster_id = u.id
        LEFT JOIN bb_usermeta AS m ON m.user_id = u.id
        WHERE m.meta_value LIKE "%\"member\"%"
          AND p.post_time IS NULL
    ) AS bb_non_posters
);

Then I deleted all the user metadata for non-existent users from the bb_usermeta table using the following query:

DELETE FROM bb_usermeta WHERE umeta_id IN (
    SELECT umeta_id FROM (
        SELECT m.umeta_id
        FROM bb_usermeta AS m
        LEFT JOIN bb_users AS u ON u.id = m.user_id
        WHERE u.id IS NULL
    ) AS bb_unlinked_meta
);

The above will eventually delete all user accounts and their metadata when no posts have been published by those users. That means that even legitimate users with no posts will be deleted, but such users are very rare.
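If you want to sanity-check the logic of the two queries before running them against a live database, here is a small Python/sqlite3 mock-up of the same idea (the table names mirror the bbPress schema above; the sample rows and the slightly condensed joins are mine):

```python
import sqlite3

# Tiny in-memory mock of the three bbPress tables the queries touch.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE bb_users (id INTEGER PRIMARY KEY, user_login TEXT);
    CREATE TABLE bb_posts (poster_id INTEGER, post_time TEXT);
    CREATE TABLE bb_usermeta (umeta_id INTEGER PRIMARY KEY, user_id INTEGER, meta_value TEXT);

    INSERT INTO bb_users VALUES (1, 'george'), (2, 'spam_bot_1'), (3, 'spam_bot_2');
    INSERT INTO bb_posts VALUES (1, '2009-06-01 10:00:00');  -- only user 1 ever posted
    INSERT INTO bb_usermeta VALUES
        (10, 1, 'a:1:{s:6:"member";}'),
        (11, 2, 'a:1:{s:6:"member";}'),
        (12, 3, 'a:1:{s:6:"member";}');
""")

# Step 1: drop "member" accounts that never posted.
cur.execute("""
    DELETE FROM bb_users WHERE id IN (
        SELECT DISTINCT u.id
        FROM bb_users AS u
        LEFT JOIN bb_posts AS p ON p.poster_id = u.id
        JOIN bb_usermeta AS m ON m.user_id = u.id
        WHERE m.meta_value LIKE '%"member"%' AND p.post_time IS NULL
    )
""")

# Step 2: drop metadata rows whose owning user is gone.
cur.execute("""
    DELETE FROM bb_usermeta WHERE umeta_id IN (
        SELECT m.umeta_id
        FROM bb_usermeta AS m
        LEFT JOIN bb_users AS u ON u.id = m.user_id
        WHERE u.id IS NULL
    )
""")
conn.commit()

survivors = [row[0] for row in cur.execute("SELECT user_login FROM bb_users")]
meta_rows = cur.execute("SELECT COUNT(*) FROM bb_usermeta").fetchone()[0]
```

Only the bot accounts disappear; the posting user and its metadata survive.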
If your account has been deleted, please make a new one. Normally, further clean-up of the tags tables should be performed, but since those users hadn’t posted anything, it is very unlikely that they had created any tags, so I think the above is just enough. Using the above queries I got rid of thousands of user accounts created by bots. Make sure you have backed up your database before attempting any manipulation of the data.

Reclaiming the forums from bots by George Notaras is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Copyright © 2009 - Some Rights Reserved

Thanks for the link back to recaptcha bbpress, it is most appreciated, and I’m so happy that it’s working for you! Mind if I ask what version of bbPress you are using (if you don’t want to broadcast this in the forums, then drop me an email)? The reason I ask is that I’m collecting version numbers my software is compatible and incompatible with. I suspect, more than likely, that my software works with certain templates, rather than certain version numbers, as I’ve managed to get it working with some versions of the 1.0.2 branch of bbPress, but other people are reporting problems. If you could let me know this, that would be great, thanks again!

Hi Rhys. Thanks for the great plugin. Those automatic user registrations were a real issue here. I use the latest stable version of bbPress, 1.0.2 at the time of writing, and had no problems getting your plugin working with the registration form. One thing I’d like to point out is that bbPress now does not allow registrations using an email address that has already been used by another registered user. But, if such a duplicate email address is used at the registration form, bbPress displays no error message at all (!!!), so it seems as if the input in the recaptcha form was incorrect. After realizing that, I had no further problems. I can confirm that it works fine here so far. Thanks for stopping by.
MyEclipse: MyEclipse 8.5 And Maven Configuration

kt22 - Jul 14, 2010 - 08:57 PM
Post subject: MyEclipse 8.5 And Maven Configuration

What is the best practice for importing an existing Maven project (of type war) into MyEclipse 8.5 with the goals of developing with Wicket, auto building and hot deploying with debugging enabled? Currently I do:

1) File, Import, Maven4MyEclipse, Existing Maven Projects, choose the pom.xml directory, finish. I get an error message: An internal error occurred during: "Importing Maven projects". Invalid thread access, although I just hit OK and continue.

2) I right-click on the project, select Run As, Maven package (this creates the necessary target/myapp/ directory for the next step).

3) I right-click on the Maven war project and select MyEclipse, Add Web Project Capabilities... - select /target/myapp for the web root dir, adjust my web context root, uncheck create web.xml, check j2ee 5.0 and uncheck add j2ee lib to buildpath (it's in the Maven dependencies) - click finish and say Yes to reset output folders.

4) I right-click on the project, select properties, MyEclipse, Web and see that the changes I just made for web root and web context did NOT take effect; this is a bug. I reset my web root and web context as in step 3 and hit OK.

5) I right-click on my project, select properties, Java Build Path, Source tab, click on the entry for src/main/resources and remove the entry for "excluded: **". Click Add Folder and add src/main/webapp. Click on Allow output folders for source folders. Click on Output folder for src/main/webapp and change it to target/myapp. Click OK, but an error message says "Cannot nest 'myapp-web/target/myapp/WEB-INF/lib/some.random.jar' inside of output folder..." To get rid of this message I click on the project properties, Maven4MyEclipse, select Update Project Configuration. Go back and do the above steps.
So now I have 3 folders, src/main/java, src/main/resources and src/main/webapp, in the Source tab, with no includes or excludes set, with the check box set for allow output folders for source folders, with the java and resources dirs set for the output dir of myapp-web/target/myapp/WEB-INF/classes, the webapp dir output folder set to myapp-web/target/myapp, and the default output dir set to myapp-web/target/myapp/WEB-INF/classes just to be sure.

6) Add a bogus file at src/main/webapp/WEB-INF/classes/dumby.txt to force MyEclipse to create the WEB-INF/classes dir in the output folder and actually output the .class files and resources correctly - otherwise nothing is output.

7) Create a deployment for my app container.

8) Clean the project and let it auto build.

The problems I have, besides the fact that MyEclipse has bugs and the Maven integration is not straightforward, are that src/main/resources is not filtered for the native MyEclipse Java Builder, i.e. the output to myapp-web/target/myapp/WEB-INF/classes. The Maven Project Builder that is enabled compiles and processes resources to target/classes - which doesn't do me any good if I am trying to get MyEclipse to build an exploded war dir, and it is already compiling the java files anyway (why compile twice? once for the Maven Builder and once for the Java Builder). Additionally, MyEclipse will not resource-filter web resources that would be filtered during Maven's package phase, so your web.xml is not filtered. Additionally, even though no filters are set for the output directory, the Wicket .html files found in the src/main/java dir do not get copied to the output folder. How do you output the .html files to the output folder? What is the purpose of the Maven Project Builder? Why not just exec the Maven goals as needed when you want to invoke Maven?
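Regarding the question above about getting the Wicket .html files into the output folder: one common approach on the Maven side (a sketch, not something from this thread; adjust paths to your own pom) is to declare src/main/java as an additional resource directory restricted to markup files, since Wicket keeps .html next to .java:

```xml
<build>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
    </resource>
    <resource>
      <!-- Wicket markup lives next to the Java sources; copy it to the output too -->
      <directory>src/main/java</directory>
      <includes>
        <include>**/*.html</include>
      </includes>
    </resource>
  </resources>
</build>
```

With this in the pom, the Maven resources phase should copy the .html files next to the compiled classes, so any builder that honors the Maven resource configuration picks them up as well.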
If the Java Builder is already compiling the classes and I cannot utilize the filtered resources in target/classes without invoking the package phase to move them to target/myapp/WEB-INF/classes, then why enable the Maven Project Builder?

So although MyEclipse can 'integrate' w/ Maven projects to some extent, it is not without problems, intuitive or seamless. IntelliJ works extremely well with Maven in comparison. Is there a better best practice for importing Maven projects with goals of developing with Wicket, auto building and hot deploying with debugging enabled? Is this what others are doing?

tsmets - Sep 09, 2010 - 12:51 PM
Post subject: RE: MyEclipse 8.5 And Maven Configuration

No update on this so far...? Integration of Maven & MyEclipse seems broken... The directory structure is not consistent, plus I now get this:

9/9/10 12:04:30 PM CEST: Build error for /CleanSystemsWeb/pom.xml; org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-resources-plugin:2.4.1:resources (default-resources) on project CleanSystemWeb: Execution default-resources of goal org.apache.maven.plugins:maven-resources-plugin:2.4.1:resources failed: Plugin org.apache.maven.plugins:maven-resources-plugin:2.4.1 or one of its dependencies could not be resolved: The repository system is offline and the requested artifact is not locally available at /Users/tsmets/.m2/repository/org/apache/maven/wagon/wagon-file/1.0-beta-2/wagon-file-1.0-beta-2.jar from the specified remote repositories: jboss-public-repository-group (https://repository.jboss.org/nexus/content/groups/public-jboss/, releases=true, snapshots=true), central (http://repo1.maven.org/maven2, releases=true, snapshots=false), plexus.snapshots (http://oss.repository.sonatype.org/content/repositories/plexus-snapshots, releases=false, snapshots=true), apache.snapshots (http://repository.apache.org/snapshots, releases=false, snapshots=true) Path to dependency:
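[Editorial note on the Wicket .html question raised in the original post: outside MyEclipse, the usual plain-Maven arrangement is to declare src/main/java as an additional resource directory so the markup files are copied next to the compiled classes. This is a sketch of the standard recipe, not taken from this project's actual pom.]

```xml
<!-- In pom.xml: copy Wicket .html (and .properties) files that live
     beside the .java sources into target/classes along with the classes. -->
<build>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
    </resource>
    <resource>
      <directory>src/main/java</directory>
      <includes>
        <include>**/*.html</include>
        <include>**/*.properties</include>
      </includes>
    </resource>
  </resources>
</build>
```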
Winmodem on Linux

I have Redhat 8 and a Zoom 56k modem. It has a Lucent DSP chipset, and I downloaded the rpm that is supposed to make it work from http://www.heby.de/ and it does, until I try to load the kernel module lt_serial.o. It then gives me 13 'undefined character' errors and doesn't load the module. I sent the error messages to firstname.lastname@example.org. Anyone here know what the matter is?

Undefined character?! I have never heard of such an error message before. Can you post the exact output of when you try and load the driver?

This is the email I sent to email@example.com (and sorry, in the previous post I said undefined, I meant unresolved):

When I run the utility checkout, everything but lt_serial loads. When checkout does insmod lt_serial.o (with the full file paths too) I get this error message:

/lib/modules/2.4.18-14/ltmodem/lt_serial.o: unresolved symbol add_wait_queue_R11dfb1e5
/lib/modules/2.4.18-14/ltmodem/lt_serial.o: unresolved symbol remove_wait_queue_Rb3afbf37
/lib/modules/2.4.18-14/ltmodem/lt_serial.o: unresolved symbol tty_wait_until_sent_R851abbd3
/lib/modules/2.4.18-14/ltmodem/lt_serial.o: unresolved symbol tty_register_devfs_R9f8c1c23
/lib/modules/2.4.18-14/ltmodem/lt_serial.o: unresolved symbol tty_hung_up_p_R4e5dde94
/lib/modules/2.4.18-14/ltmodem/lt_serial.o: unresolved symbol tty_hangup_R551006f5
/lib/modules/2.4.18-14/ltmodem/lt_serial.o: unresolved symbol tty_get_baud_rate_Rc113729c
/lib/modules/2.4.18-14/ltmodem/lt_serial.o: unresolved symbol tty_unregister_driver_Rd509a578
/lib/modules/2.4.18-14/ltmodem/lt_serial.o: unresolved symbol tty_register_driver_R84b562a2
/lib/modules/2.4.18-14/ltmodem/lt_serial.o: unresolved symbol tty_flip_buffer_push_R38cf3588
/lib/modules/2.4.18-14/ltmodem/lt_serial.o: unresolved symbol do_SAK_R2e704974
/lib/modules/2.4.18-14/ltmodem/lt_serial.o: insmod /lib/modules/2.4.18-14/ltmodem/lt_serial.o failed

In checkout, I continued on with the diagnostic section after lt_serial failed to load. Unloading unnecessary modules did not help. As I said before, this is the only problem I'm having with the rpm so far. I attached LTinstall.txt as recommended in your docs. And the extra R38cf3588 stuff at the end of the error lines was certain ascii symbols on linux, but when I transferred the error message via floppy to windows, windows replaced the symbols with that junk.

Unresolved symbol says a lot more... It seems to me that that module requires other modules to load. Have you tried modprobe lt_serial instead (providing that you've done a make install from source and run depmod -a)?

I installed from an rpm (from www.heby.de) and yes, I did try depmod -a.

But have you tried running "modprobe lt_serial" instead of "insmod lt_serial.o"?

Can't remember, I'll have to try it again. While I'm on this subject, is anyone familiar with firstname.lastname@example.org, the way they are supposed to respond to you? Do they email you back, or is your problem answered on linmodems.org in the mail archives section?

Are you sure it's not a mailing list?
const methods = require('./lib/methods.json')
var test = require('tape')

module.exports = Object.assign(flipTape, test)

if (global.flipTape && global.flipTape.tapeMock) { // for testing
  test = global.flipTape.tapeMock
  test.only = global.flipTape.tapeMock
}

// Allow 'description'.test(cb) and 'description'.only(cb):
// the string receiver becomes the test name.
String.prototype.test = function (arg1, cb) { // eslint-disable-line
  return flipTape(this.toString(), arg1, cb)
}

String.prototype.only = function (arg1, cb) { // eslint-disable-line
  return flipTape(this.toString(), arg1, cb)
}

function flipTape (arg0, arg1, cb, _, __, tape) {
  // arg0, arg1 are optional
  if (!tape) tape = test
  cb = cb || arg1 || arg0
  return tape(arg0, arg1, testObject => {
    // 'message'.t(cb) logs the message as a tape comment, then runs cb.
    String.prototype.t = function (customCb) { // eslint-disable-line
      var msg = this.toString()
      testObject.comment(msg)
      customCb(testObject)
    }
    methods.forEach(name => attachMethod(name, testObject))
    return cb(testObject)
  })
}

// Expose each tape assertion method on String.prototype so the string
// becomes the assertion message, whatever the method's arity.
function attachMethod (name, testObject) {
  String.prototype[name] = function (arg0, arg1) { // eslint-disable-line
    var arity = arguments.length
    var msg = this.toString()
    if (arity === 2) {
      return testObject[name](arg0, arg1, msg)
    }
    if (arity === 1) {
      return testObject[name](arg0, msg)
    }
    return testObject[name](msg)
  }
}
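A minimal standalone sketch of the trick this module relies on: putting a method on String.prototype so the string itself becomes the message passed along. This runs without the tape dependency; the method name `check` is made up purely for illustration and is not part of flip-tape.

```javascript
// Sketch of the flip-tape pattern without tape itself: a method on
// String.prototype receives the string as its message and hands it
// to a callback, flipping the usual argument order.
// `check` is a hypothetical name, not part of the flip-tape API.
String.prototype.check = function (cb) {
  return cb(this.toString())
}

// The description flips to the front, mirroring 'msg'.test(cb) above.
const len = 'adds two numbers'.check(msg => msg.length)
```

The real module does the same thing but forwards to tape's `test` and attaches one such method per tape assertion listed in methods.json.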
M: Planned features for Emacs 24 - fogus
http://lists.gnu.org/archive/html/emacs-devel/2010-03/msg00272.html

R: jacoblyles
If Emacs comes out with a package manager, I'll take a day off to roll around on the floor and giggle with glee.

R: technomancy
...? <http://tromey.com/elpa/>

R: pmiller2
elpa is good, but, since it's not exactly a standard, there aren't many packages available through it. Having a standard package manager in the emacs distribution would go a long way toward solving this problem, since people could be encouraged to package their .el files using it.

R: gchpaco
ELPA appears to be the one that they're going to merge into Emacs 24 (see comment about Tromey et al, and know that Tromey is the main guy behind ELPA...).

R: flogic
Nice, so no need to worry about a fork.

R: jrockway
Lexbind and coroutines will be nice. But unfortunately, we are very close to the point where Emacs' internal design won't work anymore. (Fortunately, most of the important stuff in Emacs is Emacs Lisp, which is easy enough to compile and run. The VM / C core is not very complicated.)

R: asolove
Can you explain? I don't know anything about why the internal design won't work anymore.

R: jrockway
Important functionality cannot be modified without a full recompile, which is always annoying. Try calling, say, font-backend functions from lisp... and then try overriding them. All of Emacs needs to be dynamic, not just some of it. It would also be nice to allow other languages to target the Emacs VM without rewriting those languages. Something like LLVM would be a good intermediate platform; lots of compilers can generate code for it, and the various assemblies can call each other. Most users don't care, because they just want to write a function to replace < and > with &lt; and &gt;... but some people writing more complex modes would appreciate cleaner internals. Why shouldn't Emacs be as fast and accurate as Yi, after all?

R: asolove
Good to know, thanks!
R: msg
Sorry, a stray click downvoted your comment and I cannot undo. I upvoted another one of your comments to balance out the universe.

R: prodigal_erik
The package manager appears to be yet another single-app ghetto. From the docs:

> Note that for some packages, package.el requires you to have an external tar program.

My platform _already has_ a package manager that knows whether I have tar, and it will quietly go get that as soon as I install anything that needs it. If package.el can't take advantage of a system that already works, it's part of the problem rather than the solution.

R: kiiski
Emacs is cross-platform. It would be kinda hard for them to start using system specific package managers (remember that windows exists too). And anyway, this is meant for installing emacs extensions, not software for whatever system you're using.

R: JulianMorrison
The proper solution would be

1. Standardize the package format (say, a directory containing this-and-that).
2. Standardize where the packages go for personal and root installation.
3. Have an emacs package manager, that deliberately does not use its own metadata, but always uses the implied metadata of the packages in their locations.
4. Therefore, it doesn't matter what tool put the packages in place, so long as they are in place.
5. So use the built in package manager, or apt, or rpm, or whatever you prefer.

R: kiiski
But how would that solve the problem of automatic cross-platform installation of a tool required by the emacs package? As I see it, the problem is in automating step 5. I guess you can find out the installed package manager in linux somehow and then call it, but how are you going to do that in windows?

R: JulianMorrison
I think you misread what I was getting at. In this scenario Emacs doesn't call the package manager - if emacs manages packages it does it for itself in elisp.
But it doesn't matter to emacs whether the package was installed from an elisp program or from a .deb file, because the end result (predefined format in predefined place) is the same regardless. So you're free to use, or not use, the package manager of your choice. Or mix them.

R: abstractbill
_GTK widget embedding code_ Does this mean embedding GTK widgets into Emacs? Or making a new GTK widget that _is_ an embeddable version of Emacs? I suspect it's the former, but if every GTK-based application could embed an Emacs editor, I think that would be pretty awesome too.

R: arohner
It means embedding GTK widgets into Emacs. For a while, people have been threatening to do this. Among other things, it allows you to plug in a browser control into emacs, which would be pretty awesome. Much better than dealing with w3m and the like.

R: tjic
For the last few years elisp has been bugging me more and more. Basically, ever since I saw how easy life is with Ruby, I've been annoyed at how hard it is with elisp. If I had six free months, I'd _love_ to do yet-another-rewrite-of-Emacs, this time in Ruby, with the obvious scripting language, and rewriting all of the 100,000 function calls to be method calls on objects (make-buffer-local, etc. scream out to be replaced with concepts that are less than 30 years old...)

R: gnuvince
I'd rather see a clone implemented with Lua; with LuaJIT being so fast, it could be really interesting what could be accomplished in Emacs.

R: docgnome
A canonical package manager?! Concurrency? _swoons_ It would totally rule to be able to have a script to install all the packages I use instead of having to keep them all in repo myself. Also, not having to wait for Wanderlust to fetch my mail would rock.

R: almost
Concurrency, package manager, GTK widget/SVG embedding? Wow! Seems like things are really moving along with Emacs!

R: avar
As pointed out in the thread, the planned features are now maintained in the Emacs repository in etc/TODO.
Here's a link to it on GitHub: <http://github.com/emacsmirror/emacs/blob/master/etc/TODO#L15>

R: sethg
Bidi support! Yesssss!

R: cag_ii
>* Concurrency? (Scrivano et al.) This would be nice!

R: ramchip
Now you have two problems.

R: tjic
LOL! Also, now _you_ have a royalties payment to JWZ.

R: gnuvince
About lexical binding, couldn't this potentially break many, many existing packages? How easy would it be to fix the big ones (gnus, cc-mode, gdb, vc, etc.)?

R: abrahamsen
Maybe not so hard. Variables defined by defvar will still have dynamic binding, and the byte compiler has warned about using undefined symbols for a decade or two.

R: warfangle
You know what would be great..? If it didn't litter squiggles all over my directory structure. :P

R: theBobMcCormick
Easily fixed (I think someone else here linked to the wiki page for the solution), but it _is_ a good example of one of the problems (IMHO) with Emacs. It comes with shitty defaults. There's something of a chicken and egg problem with getting started with Emacs. Emacs can rock very powerfully when properly configured. But out of the box it sucks rocks. It's pretty hard for a beginner to learn enough about Emacs to configure it to be worth a darn before giving up in frustration. :-)

R: Naga
I'm a pretty new user to emacs, and this is something that has bugged me for a while. It's one of those things I can't imagine why it would ever be a default, but something I have to live with until my abilities to configure are up to par.

R: jpr
It is easy to criticize something without providing an alternative.

R: Naga
Well now what I do is store the backups in a folder in my home directory.
Nested KVM won't work (guest freezes)

I'm running, as host, Ubuntu 19.10 with kernel 5.3.0-40-generic. In the guest, Ubuntu 18.04.4 with kernel 5.3.0-40-generic. When I launch the Android Emulator from Android Studio, the entire guest freezes. I tried 4.15.0-60 on the guest, same problem. Tried 4.15.0-1050-oem on the host, same problem.

UPDATE: The problem is not related to the Android Studio emulator; it happens with virt-manager too. It seems entirely related to KVM. So the problem is: running any KVM emulation inside the guest makes the guest freeze.

I had a similar problem. Try opening "Software & updates -> Additional Drivers" and select the NVIDIA driver for your GPU instead of the "nouveau" video driver. For me that solved the problem. I found the solution here: https://stackoverflow.com/questions/39584765/ubuntu-16-04-1-lts-crashes-when-starting-android-emulator

@Dmitry_L the problem happens inside the VM, where there are no NVIDIA drivers.

How many CPU cores did you give the VM? You may need to give it a few more. Look at the system resources of the VM as you run Android Studio (or attempt to) and see what it requires more of.

@Gordster I have 7, which is enough; my setup has 16.

If the issue happened on the guest and on the host, then the issue cannot be the guest/KVM. You are stating that you tried it on the host system, right?

@Gordster no, the problem only happens on the guest; the host stays normal.

@GuerlandoOCs did you get it to work? I'm experiencing the same issue, with an AMD processor. The guest freezes but the host still works.

KVM Virtual Machine Manager

The recommended amount of RAM for running Android Studio on Ubuntu is 8GB. In Virtual Machine Manager the settings for virtual RAM are accessed by selecting the guest OS, then from the Virtual Machine Manager menu select Edit → Virtual Machine Details → click the blue ⓘ icon (Show virtual hardware details) to open a new window from which you select Memory and allocate at least 8GB memory to the guest OS.
Virtual Machine Manager configures the graphics and hardware-assisted virtualization settings automatically by default, but you may need to enable Intel VT-x or AMD-V hardware-assisted virtualization in UEFI/BIOS.

VirtualBox

VT-x/AMD-V needs to be enabled to run the Android Emulator (included with Android Studio) in VirtualBox. If your computer's processor supports Intel VT-x or AMD-V hardware-assisted virtualization, there should also be settings to enable it in UEFI/BIOS. Make sure that the appropriate Intel VT-x or AMD-V settings are enabled in UEFI/BIOS.

The following two checkboxes should be checked in VirtualBox Settings → System → Acceleration tab:

- Enable VT-x/AMD-V
- Enable Nested Paging

Android Studio is a very feature-rich IDE, and you need to give it enough resources in order to use all of its great features. The recommended amount of RAM for running Android Studio on Ubuntu is 8GB. As you get deeper into Android Studio you'll find out again and again what a resource hog it is. On a guest OS with only 4GB RAM, whenever you run Android Studio's emulator your guest OS will stop responding. Your guest OS will run smoothly if you add another 4GB RAM, making it 8GB. If you assign too much memory to the virtual machine, the machine might not start, so make sure there is enough memory left over for running the physical machine.

A guest OS in VirtualBox can be configured to use up to 256MB video memory. To increase the video memory to 256MB, open the terminal and type:

VBoxManage modifyvm "Name of VM" --vram 256

You can also configure the number of processors in VirtualBox Settings → System → Processor tab.

Thank you, but this is not the problem; I gave 10GB of RAM to the VM and had 20 left for the host.

@GuerlandoOCs Just to double check, did you make sure you selected the options listed at the very top of the answer? The part about acceleration and nested paging?

@Gordster I'm using virt-manager, not VirtualBox.
I just searched "nested virtualization virt-manager" and the only post I found was about setting the CPU to copy the host's configuration. I checked and mine is doing that. Do you know anything more I should do?

It still freezes; I followed your VirtualBox steps exactly. It finally resumes with: The emulator process for AVD Nexus_4_API_30 was killed
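On the virt-manager side, two things usually matter for nested KVM (a sketch of the general setup, not a confirmed fix for this exact freeze): the host's kvm_intel/kvm_amd module must have nested support enabled (readable at /sys/module/kvm_intel/parameters/nested or /sys/module/kvm_amd/parameters/nested), and the guest should receive the host CPU, including its virtualization extensions, via the libvirt domain XML:

```xml
<!-- In the guest's libvirt domain XML (edited with `virsh edit <domain>`):
     pass the host CPU through so the VMX/SVM extensions are visible
     inside the guest, allowing KVM to run there. -->
<cpu mode='host-passthrough'/>
```

virt-manager's "Copy host CPU configuration" checkbox maps to a similar cpu element, so if that is already set, the remaining suspect is the nested module parameter on the host.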
As of today, I no longer have a CMS backing Zero Counts, my analytics have been wiped away, and most of my posts have been removed from the website. No, this is not the next catastrophic event of 2020. The internet is not slowly dissolving like a post-Thanos snap. Quite the opposite. This is self-imposed (and maybe a huge mistake), but it’s a project I’ve been meaning to take on since the beginning of quarantine. (As if I didn’t have enough to worry about this year.)

As the title states, I moved Zero Counts from a self-hosted WordPress instance hosted with DreamHost to a static site built with Gatsby.js and hosted with Netlify CDN. What does this mean? In short, Zero Counts is faster, more efficient, and better suited for SEO and accessibility.

For those unfamiliar, with WordPress, all of my posts were published with the WordPress CMS and stored in a database. When you visited the website, your device had to make several roundtrips to a server to fetch each post. However, with Gatsby, the site gets regenerated every time I make a change to the codebase, including posts. Therefore, most everything — HTML+CSS and content — is compiled into a single static application that your device fetches (generally) once from a server.

On top of that, when visiting the WordPress version of Zero Counts, upwards of 10 CSS files were downloaded onto your device — including the Bootstrap CSS necessary for the grid system I used for page layout — totaling 225KB, give or take. This may or may not have been WordPress’s fault; maybe just my inexperience with PHP and WordPress themes. With the Gatsby version of Zero Counts, I wrote a single bare-minimum CSS file including a Bootstrap grid clone using CSS grid.

On the authoring side, all of the website’s code and content are stored in a single code repository that I push to GitHub. I no longer write or manage any content in a CMS (for now). Instead, I write raw Markdown files and push them straight to GitHub. (God, I love Markdown.)
I can write these in an IDE like VS Code or a Markdown compatible word processor like iA Writer. Once I finish a post, I push the file to GitHub, Gatsby re-generates the HTML for zerocounts.net, and the new post appears.

I won’t go into a lengthy piece about how all of this was done. Instead, I’ll point to some resources that helped me get here:

- Tania Rascia’s “The End of an Era: Migrating from WordPress to Gatsby” pointed me in the direction of ExitWP — a tool to convert WordPress XML to Markdown.
- While Rascia leveraged the Gatsby Advanced Starter, I decided that was overkill for Zero Counts and began with the Gatsby Starter Blog and incorporated bits from the gatsby-paginated-blog.
- Many WordPress to Gatsby posts (including Rascia’s) pointed to Netlify for CDN hosting. If you’re not familiar, think of a CDN as a global network of servers. When a user visits Zero Counts, the user gets pointed to the server closest to them for the fastest download. (The internet still abides by physics, folks.)
- I’m fairly proficient in Markdown, but I did reference John Gruber’s Markdown spec several times.
- Chris Wachtman’s “How to Replicate the Bootstrap 3 Grid Using CSS Grid” was immensely helpful.

The migration is not entirely complete. While ExitWP is a great tool, each post requires a bit of Markdown clean-up. Therefore, I’ve only ported over posts from 2019–2020 as well as any interlinked posts prior to 2019. I’ll be chipping away at the remainder of posts from 2013–2018 over time.

Zero Counts began as a Tumblr blog called The Starr List back in 2013 or so. Over time, I moved over to WordPress.com, dabbled and stumbled around in code, migrated to a self-hosted WordPress site using a stock theme, created my own child theme, and eventually moved everything into a single codebase, allowing me to quickly develop locally and offer up an optimized version of Zero Counts. It’s been a lot of work and education, and finding time is not easy.
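The Bootstrap-grid clone mentioned above boils down to surprisingly little CSS. A minimal sketch (class names and gutter width are assumptions, not copied from the site's real stylesheet):

```css
/* A 12-column, Bootstrap-style grid built on CSS Grid.
   .row replaces Bootstrap's float machinery;
   .col-N spans N of the 12 equal tracks. */
.row {
  display: grid;
  grid-template-columns: repeat(12, 1fr);
  gap: 0 30px; /* Bootstrap 3's default gutter width, assumed */
}

.col-4  { grid-column: span 4; }
.col-6  { grid-column: span 6; }
.col-12 { grid-column: span 12; }
```

One file like this can stand in for the entire grid portion of Bootstrap's stylesheet, which is a big part of the payload savings described above.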
In any case, here’s to another iteration of Zero Counts. Here’s to you, old sport.

I’ve been searching for the right words all morning. “Inevitable” certainly is one, but I think “overjoyed” feels better. Ever since rumors began circulating about the release of these games on Switch, the hype has felt insurmountable. We’ve been stuck at home for six months due to a pandemic, clamoring for comfort content. A dose of nostalgia this heavy is certainly what the doctor ordered. (A clinically proven vaccine wouldn’t be bad either.) While omitting Super Mario Galaxy 2 and making this collection limited until March 31st, 2021 is a real bummer, it’s a thrill that this will be released in two weeks’ time. Tick Tock Clock is counting down.

As members of the video game industry consider the power of solidarity, as video game streamers question the sustainability of their labor and the parasocial demands of their audience, as the industry-at-large considers its responsibility to the greater culture, I believe sports could and should serve as a compass.

I’m no sports nut, but this piece cracked me. And it’s made more poignant after watching the sunshine-and-rainbows Netflix video game docuseries ‘High Score’. Great observations by Plante.

After playing daily for about a week after its release, I’ve noticed Good Sudoku activating the same brain-space as roguelikes in the vein of Spelunky or The Binding of Isaac. These are games meant to be played hundreds of times, and for thousands of hours. After years of playing Spelunky I immediately go into auto-pilot when starting a new run because I’ve seen so many permutations of the level generation I can’t help but feel as though surprise is unlikely. But that comfort with such a hostile environment has come from thousands of runs. I’ve died in Spelunky more times than I can count, and each death brings with it a small lesson for survival in future attempts.
At this point, my head is crammed so full of strategies and techniques and possibilities that I feel more equipped than ever to survive the next run. I mean I probably won’t… but it’s nice to feel confident sometimes!!

I haven’t played Good Sudoku, nor Spelunky or The Binding of Isaac, but I know enough about vanilla sudoku and these roguelikes to understand that Bigley’s observation is striking. I’m having one of those, “How didn’t I see this before?” moments.

To the uninitiated or uninterested, sudoku puzzles all look the same — a 9x9 grid with numbers sprinkled about. How many variations could there be? How is sudoku not a solved game? But the number of variations of sudoku puzzles is staggering — far more than any human could experience in their lifetime. Nigh-endless possibilities within a consistent environment. And what is the procedurally generated experience of a roguelike if not nigh-endless possibilities within a consistent environment?

Thanks to Bigley, it’s now hard to think of roguelikes and procedural generation as something made possible only by today’s technology, rather than something conceived from a 9x9 grid.
Below is how to say some basic colors in Spanish. Spanish can be difficult to learn, especially if you already know another language, but with lots of practice, you could perfect it.

Tips for pronouncing vowels:
- A always, always, ALWAYS must sound as in the words sad, dad, fat, hat, art.
- E sounds like e in the words elephant, em (letter m), lEtter (as in the capital e), empty, end.
- I sounds like i in the words in, insist, infinitive, instead.
- O sounds like o in the words or, on, of, from, come.
- U has the sound of words with double o like book, look, cook, smooth, misunderstood.

Also, double l in Spanish has a particular sound, similar to the letter j in the words jar, joy, jelly, and even y in the words year, you, etc. Double l must always sound like j or y, never like l. No problem if you pronounce everything with a consonant "y" (a very short "i", like the first "i" in "idiot"); you won't get misunderstood.

TIP: When a word in Spanish ends in "o", try not to put much emphasis on pronouncing the "o"; it's a simple sound, not like "oh" but just "o": loco, moto, toro.

1. Blue. To say "blue" in Spanish, say "azul". It is pronounced "AH-SOOL".
2. Yellow. To say "yellow" in Spanish, say "amarillo". It is pronounced "A-MAR-EE-YO".
3. Green. To say "green" in Spanish, say "verde". As in all of the Spanish language, the "v" makes a "b" sound, as in "bike", making them hard to distinguish. "Verde" is pronounced "BER-de".
4. Black. To say "black" in Spanish, say "negro". It is pronounced "NE-gro".
5. White. To say "white" in Spanish, say "blanco". It is pronounced "BLAN-co".
6. Grey. To say "grey" in Spanish, say "gris". It is pronounced "GREES". It sounds like "grease".
7. Pink. To say "pink" in Spanish, say "rosado". It is pronounced "RO-sad-O". Don't forget to roll your tongue on the "r".
8. Brown. To say "brown" in Spanish, say "marrón". It is pronounced "mah-RRÓN".
9. Purple. To say "purple" in Spanish, say "morado". It is pronounced "moh-RAH-do"; pronounce the "r" like the one in "rag".
[desktop] proxy settings through environment variables

Hi, I'm using the keybase desktop package from Arch, and it's working at home and also at the office behind a corporate proxy, by setting the proxy configuration settings each time I change from/to no proxy, and from/to a particular proxy. I do connect to different proxies depending on what's convenient. For example, if I need to remotely connect through VPN, I can choose the proxy that's closest to the VPN location, and when I'm in the office I usually use the one local to the office. Of course, when not at the office and not connected through VPN, I use no proxy.

Of course I do have to set the client proxy settings each time I change environment, and that's not the best the client can do. There could be an option like "Use environment variables proxy settings", or "Use system settings", which would evaluate whether the proxy environment variables are set or not. If set, then configure the proxy accordingly, and if not, then don't set any proxy. Notice Firefox, for example, provides "Use system settings", which works well with environment variables. I prefer environment variables given I don't use DEs like GNOME or KDE with shells that update the environment variables upon configuration changes, but they do export such variables anyway. Signal, for example, also uses environment variables. I set the following ones:

use_proxy, soap_use_proxy, http_proxy, https_proxy, ftp_proxy, rsync_proxy, no_proxy, USE_PROXY, SOAP_USE_PROXY, HTTP_PROXY, HTTPS_PROXY, FTP_PROXY, RSYNC_PROXY, NO_PROXY

But if the client would more likely use other specific ones, that's fine, I can set them as well. This is more like a feature request, since as mentioned, the client is working behind a corporate proxy, and when not behind one, but it could do so more conveniently. Thanks a lot!

Found it out... First, in order to make sure to use env. vars without editing files or configs, one needs to stop using systemd stuff, including "keybase ctl start", which in turn calls "keybase service", but does so through a systemd unit. So my recipe to avoid systemd stuff (I have scripts, but in brief):

if [ -n "$HTTP_PROXY" ]; then
    export PROXY=$(echo $HTTP_PROXY | /usr/bin/sed 's?http://??' | /usr/bin/sed 's/[ \t\n][ \t\n]*$//')
    export PROXY_TYPE="http_connect"
fi
keybase ctl init
keybase --use-default-log-file --debug service &
kbfsfuse -debug -log-to-file &
electron /usr/share/keybase-app &  # <-- This should be Keybase instead if using the binary from keybase

I had to search the source to find the right env. vars: client-v5.1.1/go/libkb/proxy.go. There you can find that PROXY and PROXY_TYPE are the ones. You can try using them with systemd, but of course you'll have to edit files or do keybase configs for them to take effect; see the extensive and useful comments in "proxy.go". I would strongly suggest copying those comments (they're even written in markdown) into the documentation, or at least referencing them on GitHub, to take advantage of the source. Pay special attention to the difference between calling "keybase service" directly, and doing it through "keybase ctl start". The latter, as mentioned before, in turn calls the former, but does so through a systemd unit, so your env. vars get lost...

Closing, given there are env. vars that can be used, but I'm afraid this is not very well documented, especially for the use case described.
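Distilled from the recipe above, the HTTP_PROXY-to-PROXY transformation can be run on its own (the proxy URL here is a made-up example):

```shell
# Strip the scheme and any trailing whitespace from HTTP_PROXY to get the
# host:port form that the PROXY variable (per go/libkb/proxy.go) expects.
HTTP_PROXY="http://proxy.example.com:3128"
PROXY=$(echo "$HTTP_PROXY" | sed 's?^http://??' | sed 's/[[:space:]]*$//')
PROXY_TYPE="http_connect"
echo "$PROXY"
```

With these two variables exported before launching "keybase service" directly (not via the systemd unit), the client picks up the proxy without touching its config files.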
Skip ahead to the next section if you just want to see some pictures About a year ago, I unfortunately had to turn down a job in Whitehorse with the Yukon Government. So to make sure that I didn't miss any postings in the future, I wrote a short R script to scrape the Yukon's job listings page. Every morning the script would run on my local Raspberry Pi server and see if there were any new job listings. If there were, it would send me an email with some basic information like title and department. In addition to emails, the script also saved new listings in an SQLite database for preservation. This is also how the script knew that a job post was new. It compared the scraped job postings to those already saved in the database. Just anecdotally, I noticed the daily emails stopped for a while around April. I guess recruitment was put on hold during the early days of Corona lockdown. Once the emails did resume, it was mainly medical positions, like nurses etc. Sadly, no suitable position popped up during the past year but, being a data analyst, I didn't want all the data I scraped to go to waste. So I thought it would be interesting to take a look at a few stats about Yukon Government job postings over the past year. Note: Just to preface these numbers, when I talk about a new job listing what I mean is a job listing with either a new ID or a new closing date. I chose this definition because I noticed that sometimes the same job listing would get re-posted with a new closing date. Also, I think some job listings can be for multiple open positions. So a new posting doesn't necessarily equate to only a single job opening. In total, I collected 440 job postings throughout 2020. I started collecting listings in late January 2020 until the end of December 2020. So not quite a full year but most of the postings from the beginning of January were probably gathered in the initial scraping. So I would say it's roughly the full year. 
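The new-posting check described in the note above (new ID or new closing date) is simple to express. The original script is R, but here is a Python/SQLite sketch of the same idea; the table and column names are assumptions for illustration, not taken from the real script:

```python
import sqlite3

def new_postings(conn, scraped):
    """Return only the scraped postings whose (id, closing_date) pair is
    unseen, and remember them. This mirrors the 'new ID or new closing
    date' definition above; the schema is illustrative, not the real one."""
    cur = conn.cursor()
    cur.execute(
        "CREATE TABLE IF NOT EXISTS postings ("
        "id TEXT, closing_date TEXT, title TEXT, "
        "PRIMARY KEY (id, closing_date))"
    )
    fresh = []
    for p in scraped:
        cur.execute(
            "SELECT 1 FROM postings WHERE id = ? AND closing_date = ?",
            (p["id"], p["closing_date"]),
        )
        if cur.fetchone() is None:
            fresh.append(p)
            cur.execute(
                "INSERT INTO postings VALUES (?, ?, ?)",
                (p["id"], p["closing_date"], p["title"]),
            )
    conn.commit()
    return fresh
```

Under this definition a re-listed posting (same ID, new closing date) counts as new, which is how the re-listings discussed below get detected.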
51 job listings contained the title 'nurse' or 'RN', by far the most common. This means over 10% of job listings are for different types of nurses. Become a nurse if you want an easy time finding work in the Yukon. 36 job postings were re-listings (same ID but different closing date). I don't know the specifics of why each posting was re-listed; it could be a lot of reasons. However, one reason might be that the position couldn't be filled, which may mean these are difficult-to-fill positions. Of these 36 re-postings, 7 (almost 20%!) were nurses or RNs. Did I already mention you should become a nurse if you want to find work in the Yukon? Several others were generally high-level positions like directors, managers, supervisors, etc. And some were more specialized jobs like Infection Control Coordinator. It shouldn't be much of a surprise that the vast majority of job postings are for positions in Whitehorse, followed by Dawson City at a distant second. Job postings that contained multiple locations got counted once for each city, so the percentages shown add up to more than 100%. There were quite a few job postings applicable to more than one location. Here we see the number of job postings aggregated by department. I left out sub-departments because there are too many and they would clutter the chart. Similar to locations, departments are also quite skewed towards a few big ones. With Corona putting a freeze on a lot of hiring, then resuming only health-related recruitment, it makes sense that the Health and Social Services department had the most job postings in the last year. Highways and Public Works also had quite a few open positions this year. Education having so few job postings surprised me a little; I thought it would have been higher. It would be very interesting to see how this compares to previous non-Coronavirus years.
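Counting re-listings under the definition above (same ID appearing with more than one closing date) is a small aggregation; a sketch with made-up sample data:

```python
from collections import defaultdict

def count_relistings(postings):
    """A posting is a re-listing when the same ID appears with more
    than one closing date (matching the definition used above)."""
    dates_by_id = defaultdict(set)
    for post in postings:
        dates_by_id[post["id"]].add(post["closing_date"])
    # Each extra closing date beyond the first counts as one re-listing.
    return sum(len(dates) - 1 for dates in dates_by_id.values())

sample = [
    {"id": "1", "closing_date": "2020-03-01"},
    {"id": "1", "closing_date": "2020-04-15"},  # re-listed with a new deadline
    {"id": "2", "closing_date": "2020-03-10"},
]
```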
All Job Postings Just for fun, below is a sunburst plot with all the job postings I collected, grouped by department and then job title. Try clicking on a department to view all the job postings under it over the past year. Data and Scripts For anybody interested in the data I collected, or the scripts I used, visit the project on my GitHub. I have the job postings there in CSV format as well if somebody wants to take a look themselves. One thing I didn't collect (that I really should have, had I had more forethought) was the day each job posting was listed. It would have been cool to see recruitment in different departments change drastically week-by-week during the lockdown. Thanks for reading!
Melaleuca is rich in snakes. Many of these snakes are tiger snakes (Notechis scutatus), and they're often closer than you think. February in Melaleuca was hot, and as such, the local snakes were on show, lazing about on jetties, around the huts, and occasionally, directly underfoot on the paths. Tasmania has only three native snakes: tiger snakes, copperheads and the much smaller white-lipped snake, which is an attractive olive green, and occasionally scares you by hanging around at head height in dense scrub. Though they are all venomous, and could potentially kill you if a bite went untreated, no-one has died from snake bite in Tasmania for decades. According to the Parks and Wildlife Service, "far more people die from ant bites, peanuts or spouses than snakes" - take from that what you will. Tiger snakes can be found just about anywhere, but in Melaleuca, there are at least two snakes who've worked out that nest boxes can be an excellent source of slow food. On two separate occasions, both on very hot days, we watched two different snakes attempting to make their way into the nest boxes outside the rangers' hut. The first snake was too heavy to get into the little branches that would have brought it close to the baby birds, but the second snake, pictured above, was much smaller and nimbler. Four of us watched it for about two hours trying various access points to get into the nest boxes, and Mark got a few photos, one of which is featured above. The snake was very persistent, and appeared quite certain that a decent meal was nearby, making us wonder if it had frequented the nest boxes before. The tree martins, who were occupying the nest, would occasionally flitter by anxiously, without making much impression on the patient reptile. However, once the superb fairy wrens arrived, it was on. A very fetching blue wren boy swooped the snake, saying something quite severe to it.
This was quickly followed by a yellow-throated honeyeater, a scrub wren, and a New Holland honeyeater. Rudest of all was a teensy grey fantail, apparently yelling something truly obscene in tiny-bird-speak as it danced back and forth within centimetres of the snake's face. After two hours of fruitless tree-climbing, this avian rudeness seemed to finally break the tiger snake's spirit. It backtracked up the branch, chased by tiny birds all the while, then attempted a graceful descent down the main trunk. It did this by wrapping its tail like a slipknot around the trunk, before allowing gravity to take it, a foot at a time, down the tree. This worked fine for the first couple of metres, but then the snake appeared to lose its grip, and fell a couple of metres into the cutting grass below. The birds were placated, and went back to their regular business.
Internet Explorer Cannot Download Proxy.pac If this check fails, it indicates that this is the first attempt to connect to the host during the current session, and the normal proxy detection logic applies. Example: if (isResolvable(host)) return "PROXY proxy1.example.com:8080"; isInNet() This function evaluates the IP address of a hostname and returns true if it falls within a specified subnet. Check simple rule exceptions first. I remember that we had the same problem when IE 8 came out and Microsoft initially tried to change the PAC file API; that change was reversed rather quickly. Internet Explorer 11 seems to be picky regarding the MIME type returned with the proxy.pac file from the web server. Thoroughly review and understand the PAC file before making changes. See https://support.microsoft.com/en-us/kb/271361 You're right, the setting definitely affects IE's caching of the .pac file, but I believe (happy to retest) that it also caches the result of the auto-proxy function FindProxyForURL(). Caching of proxy auto-configuration results by domain name in Microsoft's Internet Explorer 5.5 or newer limits the flexibility of the PAC standard. Click Start, click Run, type ncpa.cpl, and click OK. So IE8 seems to require an ipconfig /renew to acquire a new one, while IE7 requires both an ipconfig /renew and an ipconfig /flushdns. Also: Firefox only reported the alerts I had.
Firefox's DNS caching may be configured via the network.dnsCacheEntries and network.dnsCacheExpiration preference variables. Example: if (dateRange("JAN", "MAR")) return "PROXY proxy1.example.com:8080"; else return "DIRECT"; timeRange() can be used to specify different proxies for a specific time range. A common exception is for internal networks. Example: if (isInNet(myIpAddress(), "10.10.1.0", "255.255.255.0")) return "DIRECT"; dnsDomainLevels() This function returns the number of DNS domain levels (number of dots) in the hostname. Regrettably, I need IE 11 for the SharePoint datasheet view. Comment the code consistent with programming best practices. Make sure that you follow these steps carefully. Also, if the proxy.pac file has an error, IE dies silently. A shame, as that was a very useful way of seeing what was happening during processing. Caching proxy auto-configuration results by domain name (as Internet Explorer 5.5 or higher does) instead of by the path of the URL limits flexibility. Many websites now use content delivery networks, which may serve content from several different hosts; since each host is requested in series rather than in parallel, the delay could be significant for larger websites.
How do I specify a URL in a PAC file to bypass Content Gateway? Note that specifying the PAC file with a "file://....." URI/URL is not supported and has been deprecated. Please review the Java open issues in the release notes for the versions of Java used by your client browsers. At least for the time being, I will have to advise my users NOT to upgrade to IE11. The browser fetches the PAC file before requesting other URLs. A side issue is that we need to point Firefox to the proxy.pac file manually to make it work for Firefox. My PC was able to use the PAC file on IE8 and IE10. What is the Internet Explorer Automatic Proxy Result Cache? Alternatively, you can disable caching of proxy auto-configuration results by editing the registry, a process described by de Boyne Pollard (listed in further reading). Note: web clients using Internet Explorer pick up the settings in this GPO the next time that Group Policy refreshes, which by default is every 90 minutes for clients. If not, investigate the web server serving the file. I have multiple network adapters and the myIpAddress() function is returning an undesired IP address; use the arrow buttons to change the binding order of the adapters. Such entries are often referred to as exceptions.
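Taken together, helpers like isInNet(), myIpAddress() and dnsDomainLevels() combine into a complete FindProxyForURL(). PAC files are JavaScript, but the typical decision logic (exceptions first, then the default proxy) can be sketched in Python; the proxy name and subnet below are the made-up examples used on this page:

```python
import ipaddress

def is_in_net(ip, pattern, mask):
    """Rough equivalent of the PAC isInNet() helper."""
    return ipaddress.ip_address(ip) in ipaddress.ip_network(f"{pattern}/{mask}")

def find_proxy_for_url(url, host, my_ip):
    """Typical FindProxyForURL() logic: check exceptions first, then fall
    through to the default proxy. my_ip stands in for myIpAddress()."""
    # Plain hostnames (dnsDomainLevels(host) == 0) are usually internal: go direct.
    if host.count(".") == 0:
        return "DIRECT"
    # Clients on the internal subnet bypass the proxy entirely.
    if is_in_net(my_ip, "10.10.1.0", "255.255.255.0"):
        return "DIRECT"
    return "PROXY proxy1.example.com:8080"
```

Keeping the cheap string checks ahead of any DNS-dependent calls matches the "check simple rule exceptions first" advice above.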
Dear OFL friends and reviewers, We're pleased to finally announce the completion of the SIL Open Font License version 1.1. This free and open license has been updated to improve clarity, remove potential ambiguities, and make it easier to use for both authors and users. Visit the OFL web site for more information: A detailed list of changes can be found on the review page: The only notable change in usage is that authors must now explicitly list any names that should be Reserved Font Names. The original name of the font is no longer reserved by default. Thanks to all of you who have helped us refine this license and make it even easier to use and understand. Victor Gaultney & Nicolas Spalinger The OFL is a free and open-source license specifically designed for the licensing of fonts. Note: Here we describe a workaround. The proper solution is to fix the graphics drivers and the X.Org X server. Such work is taking place, and in several cases you do not need this workaround, especially with newer versions of Linux. You just installed your 3D Linux desktop and you are really enthusiastic about it. But when you try to play some videos, you get a strange black output. What's going on? The common software video players that come with the Linux desktop are able to display the video stream on several types of output devices. These include several types of output for the graphical interface, and also obscure output devices such as text mode, using ASCII characters. The default output device is XVideo (or Xv) for players such as those based on GStreamer (totem) and VLC. As you guessed, there is a bug with XVideo when using Beryl/Compiz. Therefore, to fix it, you need to switch to another output device that works. For GStreamer players (such as totem, the default movie player in GNOME, Ubuntu and so on), you need to run the GStreamer configuration tool from the command line (with older distributions such as Ubuntu 6.06 there is an option in System/Preferences for this). In the dialog that opens, go to
Video, then for Default Video Plugin choose X Window System (No Xv). Click on Test to verify that it actually works. Click Close and you are set. VLC is not installed by default in Ubuntu 6.10. You need to install it manually using the Synaptic Package Manager (under System/Administration), once you have activated the Universe repository in Repositories. Start VLC and click on Settings, then Preferences. Expand Video and then expand Output modules. You will notice several options for the output device. How do we actually choose which one should be the active output device? Well, it appears it's a bit tricky. Select the item Output modules, and notice the checkbox at the bottom right that says Advanced options. Check the box, and now you have the option to select a different output device. Pick X11 video output, click on Save and you are set! Hard disk boot sector invalid When you get this error when you boot your computer, you know something is terribly wrong. Actually, in most cases it's not. What probably happened is that you do not have a bootable partition set on your booting hard disk. How can one not set a partition as bootable? It can happen when you install a fresh Linux distribution using the manual partitioning option, and you shamelessly forgot to toggle the bootable flag on your Linux partition. However, this implies that you already got rid of WXP, so you are totally excused. Another reason for the bootable flag missing is that you have erased the said partition and recreated it. How do you set the bootable flag again? You can boot with an installation CD/rescue CD and set it using the partitioning tool. There is no need to install again. Since you already have Linux installed, you can boot with the Ubuntu installation CD and choose the last option, Boot from first hard disk, to boot into Linux. Then, use the distribution's partitioning tool to set the bootable flag. Update 23Feb07: The bootable flag has to be set on one of the primary partitions.
It does not work if you set the bootable flag on a logical partition. Apparently the above error message comes from the BIOS, which blindly does an unneeded check to see if a primary partition (any primary partition) has the bootable flag set. The primary partition you set the flag on does not have to be a partition that you can boot from; any primary partition will suffice. Once the BIOS relinquishes control to the MBR, GRUB takes over and brings you to your Linux distribution. Do you want to check how many hours you have been using your computer/laptop? Do you want to find out if that second-hand hard disk salesman is telling the truth? Are you about to buy a second-hand laptop that had been used only sparingly? You can figure out what's going on with the help of your Linux box and the smartmontools package. Especially since I decided to keep my old hard disk, which sits next to me. Modern hard disks support a feature called Self-Monitoring, Analysis, and Reporting Technology (S.M.A.R.T.), which helps make them more reliable. Among the data recorded by S.M.A.R.T. is the total number of hours a hard disk has been in operation. This is S.M.A.R.T. attribute 09, called Power-On Hours (POH). When your computer is on, your hard disk is on as well, therefore you can get the total number of hours your computer has been on. Let's see how we put all of this in action. You need to install the smartmontools package, available from the standard Ubuntu repositories.
- Start System/Administration/Synaptic Package Manager and search for smartmontools. Select the package for installation and click Apply.
- Assuming your hard disk corresponds to device /dev/hda, run the command sudo smartctl --all /dev/hda in a terminal window. You will get a long list of information and attributes. Wade through the output and notice the attribute list and the line with ID 09. On my system it is

ID# ATTRIBUTE_NAME      ... UPDATED WHEN_FAILED RAW_VALUE
  9 Power_On_Hours      ... Always  -           24

Here you can see that this hard disk has been in operation for 24 hours in total. Yes, it's a new hard disk. If your hard disk is a bit exotic, you may see a strangely large raw value. Some manufacturers measure the time in minutes or seconds, so you need to convert accordingly. Other information you may extract from S.M.A.R.T. includes the temperature of the hard disk. The temperature has ID 194. For me it is

ID# ATTRIBUTE_NAME      ... UPDATED WHEN_FAILED RAW_VALUE
194 Temperature_Celsius ... Always  -           41

That is 41 degrees Celsius. You can also perform self-tests on your hard disk in order to check if it is about to fail. In S.M.A.R.T. terminology there are short (1-minute) and long (30-minute) tests, and the last five results are saved in the hard disk's non-volatile memory. Each entry includes the number of hours the hard disk has been in operation, as explained above. Therefore, when you loan a laptop to a hard-working person who has to finish an essay, you can perform a test so that the current number of hours is recorded, and then perform another test when you receive it back. If you are said hard-working person, leave the laptop on as much as possible. Apparently, most USB/FireWire caddies/enclosures do not pass the S.M.A.R.T. information through, therefore you cannot access the relevant attributes. You need to connect the hard disk directly to the IDE/SCSI/etc. channel.
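If you want to grab these raw values programmatically rather than wading through the output by hand, the attribute table is easy to parse; a small sketch (the smartctl excerpt below is illustrative, and the column layout is assumed):

```python
def parse_smart_attribute(smartctl_output, attr_id):
    """Pull the raw value of a S.M.A.R.T. attribute out of
    `smartctl --all` output (RAW_VALUE assumed to be the last column)."""
    for line in smartctl_output.splitlines():
        fields = line.split()
        if fields and fields[0] == str(attr_id):
            return int(fields[-1])
    return None

# Illustrative excerpt of `smartctl --all /dev/hda` output.
sample = """\
ID# ATTRIBUTE_NAME          FLAG     ... UPDATED  WHEN_FAILED RAW_VALUE
  9 Power_On_Hours          0x0032   ... Always   -           24
194 Temperature_Celsius     0x0022   ... Always   -           41
"""

hours = parse_smart_attribute(sample, 9)     # power-on hours
temp = parse_smart_attribute(sample, 194)    # temperature in Celsius
```

Remember the caveat above: on some drives the raw value of attribute 09 is in minutes or seconds, so convert before trusting the number.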
Since most Active Directory administrative tools have been implemented as MMC snap-ins, all of them have similar interfaces and basic features. Knowing these features allows you to use all of these tools in the most effective way possible, and to optimize them to fit your specific tasks. Sometimes, a snap-in's design and features may even affect some aspects of deploying Active Directory in an enterprise (see "Choosing Columns for Displaying" a bit later in this chapter). Let us start by discussing administrative snap-ins, taking into consideration some features common to all of them. Most standard administrative tools can be started from the Start | Administrative Tools menu, or can be added to a custom MMC console. Tools such as the Active Directory Schema Manager snap-in or the Group Policy Object Editor snap-in should always be initially added to an MMC document:
- Enter mmc in the Start | Run window.
- Press <Ctrl>+<M>, or select the Console | Add/Remove Snap-in command.
- Click Add in the window that opens.
- Select the desired snap-in in the Add Standalone Snap-in window, and click Add. You can repeat this step for all the snap-ins you need. Then in turn click Close and OK.
- Save the resulting console with any name.
Making your own administrative console may have some valuable advantages:
- You will have on hand all the instruments you want, configured at your discretion. For example, you may have snap-ins connected to different domains, or Group Policy Object Editor snap-ins linked to various GPOs.
- There will be more options for configuring and customizing snap-ins (see "Customizing Snap-ins" in this chapter).
- The computer's memory is used more efficiently. A number of tools started separately allocate considerably more memory than the same tools added to a single MMC console.
On Windows .NET-based domain controllers (unlike those in Windows 2000), all administrative snap-ins can be opened in the "Author" mode (right click on a snap-in's name and select Author in the context menu), which allows you to reconfigure these tools (add new snap-ins in the same MMC document, etc.). While working in a snap-in window, don't forget about such simple but timesaving web-style features on the Standard toolbar as the Forward and Back buttons, the Up one level button, and the Refresh button. When pointing to an object, you can view its properties either by selecting the Properties command in the context menu, or — to do it faster — by clicking the Properties button. When working with different Active Directory objects, it is possible (and may be very helpful) to display more fields than just the three default ones, or to delete unnecessary ones. Select the Add/Remove Columns (in Windows 2000 — Choose Columns) command in the View menu, and add or delete the necessary columns in the Add/Remove Columns window (Fig. 7.1). Each object will have its own set of fields. Fig. 7.1: Choosing necessary object attributes to be displayed In Fig. 7.1, note that in MMC v.2.0 you can move any item to the beginning of the Displayed columns list. In Windows 2000, the Name item is always at the top. When the Active Directory Users and Computers snap-in is used for creating new users, the Full Name field is generated as a concatenation of the First name and Last Name fields. The Full name field, in turn, determines the value of the cn attribute. (You can, however, change this order, if you like — see articles Q250455 and Q277717 in the Microsoft Knowledge Base.) You may want, for some reason, to use proprietary naming conventions in your organization. (This can be easily organized by using scripts or batch tools, such as LDIFDE or CSVDE. Manual manipulations are also possible.) 
For instance, you may wish for the cn attribute (i.e., the Full name field) to have the same value as the sAMAccountName attribute (the Pre-Windows 2000 Logon Name field) or as a proprietary ID code. Sometimes, the Windows 2000 version of the Active Directory Users and Computers snap-in does not sort a container's contents on some columns. (In Windows .NET, this is not an issue.) You can use the Find Users, Contacts, and Groups window rather than the snap-in's main window. This window allows you to sort rows according to the contents of any column. "Hide" the Name column from view and rearrange the columns in the order most useful to you. (It is not possible to remove this column in the main window. Moreover, in Windows 2000, this column must always be first.) Click Browse to view the forest tree and go to any location (then click Find Now). You can select the more appropriate of the two windows depending on your requirements. To document the objects stored in Active Directory, you can export any currently displayed list into a file for processing or printing from the Word or Excel applications. Point to a container or an object and click the Export List button, or select the Export List command in the context or Action menu. You can choose between tab-separated (.txt) and comma-separated (.csv) formats. CSV files are easily imported into Microsoft Excel documents. The standard administrative snap-ins, as configured out of the box, lack certain useful features realized in Microsoft Management Console (MMC) technology. These features are common to all MMC consoles, and using them in the administrative tools allows an administrator to save a lot of time and effort. In Windows .NET, this feature is implemented in a slightly different way than in Windows 2000. (Keep in mind that Windows 2000 systems use MMC version 1.2, whereas Windows XP/.NET systems use MMC version 2.0.)
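An exported comma-separated list can also be post-processed outside Excel; a minimal sketch (the column names here are made up for illustration):

```python
import csv
import io

# A made-up sample of an exported .csv list (the columns are illustrative;
# the actual columns are whatever you chose via Add/Remove Columns).
exported = io.StringIO(
    "Name,Type,Description\n"
    "John Smith,User,Accounting\n"
    "Sales,Group,Sales staff\n"
)

rows = list(csv.DictReader(exported))
users = [r["Name"] for r in rows if r["Type"] == "User"]
```

The same approach works for the tab-separated (.txt) format by passing delimiter="\t" to the reader.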
In a custom MMC v.1.2 console, the Favorites tab appears next to the usual Tree tab. An MMC v.2.0 console has the Favorites command on the main console menu. You can browse Active Directory in a web-like style and save the pages you'd like to access later. Point to any container in the Tree pane, and select Add to Favorites in the Favorites menu. This feature can be very helpful in large domains that contain many OUs and other objects. Notice also that any container in Active Directory that can be viewed in different snap-ins can be designated as a favorite; it will be placed in the same list of favorites. You can, for instance, simultaneously have main OUs from different domains, authoritative DNS zones, DHCP scopes, site connections, etc., all on the Favorites tab (or on the Favorites menu in MMC v.2.0). Do not forget about traditional browsing features, such as the Back, Forward, Up one level, and Refresh buttons. An administrator may create specialized taskpads for him- or herself (for some routine tasks), as well as for users that need to carry out certain (limited) tasks, or for subordinate administrators to whom control of some OUs or objects is delegated. Let us discuss an example of how to create a taskpad for administering organizational units. This taskpad will allow us to view all accounts in an OU and perform three predefined operations: create a computer, a user, and a group. Select an OU in the Active Directory Users and Computers snap-in, and click New Taskpad View in the Action menu. The New Taskpad View Wizard starts and guides you through all the necessary steps. At any step of the wizard, you can go back and change the selected options or entered information. Leave the default options in the Taskpad Display and Taskpad Target steps unchanged. This means that the tab of the created taskpad will appear for each OU in the domain (but not for other domain containers!). Enter the necessary information at the Name and Description step.
When the wizard has finished (i.e., the view without task buttons has been generated), check the Start New Task wizard box in the last window and click Finish. The New Task Wizard will start. The default Command Type is Menu command. In the Shortcut Menu Command step, select Tree item task in the Command source list (Fig. 7.2). In this case, we will be able to choose the commands for the entire OU. First, select New->Computer. Fig. 7.2: Selecting the source of the commands for the new taskpad At the next step, enter a relevant task name and a description for this task. Then you can choose a graphical representation (icon) for the task. A new task has now been created. To add the other two commands, check the Run this wizard again box in the last window of the wizard, and click Finish. The wizard will start again. Repeat the necessary steps, selecting New->User the first time and New->Group the second time. Fig. 7.3 shows an example of a taskpad created according to the described procedure. Fig. 7.3: An example of a taskpad You may add/delete tasks, and/or change the properties (options) of a taskpad by selecting the appropriate tab and clicking Edit Taskpad View in the Action menu. It is possible to define commands (tasks) for an entire container as well as for an individual (selected) object in a container. While browsing the object tree, only those commands that are applicable to the selected object will be enabled in a taskpad.
mailing list archives CVE Request - SA-CORE-2013-001 (one JQuery X < 1.63 issue and two Drupal modules issues) From: Jan Lieskovsky <jlieskov () redhat com> Date: Thu, 17 Jan 2013 10:50:49 -0500 (EST) Hello Kurt, Steve, Forest, Drupal Security Team, vendors, @Forest: Apologies for requesting CVE ids instead of you, but I will explain the reasons shortly below. Drupal upstream has released Drupal 6.28 and Drupal 7.19, correcting multiple security flaws:
* Issue #1 - Cross-site scripting (various core and contributed modules - Drupal 6 and 7)
* Issue #2 - Access bypass (Book module printer-friendly version - Drupal 6 and 7)
* Issue #3 - Access bypass (Image module - Drupal 7)
For issue #1 as shipped within Drupal, the original JQuery upstream XSS report is here: with mention of the fix in the JQuery 1.6.3 version here: After a further look, the same issue also needs to be fixed in drupal7-jquery_update: and python-tw-jquery packages: Also, the python-tw2-jquery package: seems to ship various embedded versions of the jquery.js library implementation. Since there might be more components / packages shipping the vulnerable JQuery version, the first CVE identifier should be allocated to the original JQuery issue. @Drupal security team - could you clarify if, to fix the first issue, there was some other Drupal-specific patch / change (besides the JQuery library update), which would require yet another (fourth) CVE id to be allocated? @MITRE CVE assignment team, could you clarify if you have already assigned CVE identifiers for these issues and, if so, for which source code base? If Drupal upstream just updated the JQuery version to the non-vulnerable 1.6.3 [B], [C] within Drupal core, then three ids are sufficient (one for JQuery, one for the Drupal Book module issue, one for the Drupal Image module issue). On the other hand, if some Drupal-specific patch (besides the JQuery update) was needed to fix issue #1, four CVE identifiers should be allocated (to my understanding).
Could you allocate them / if allocated already, let us know the particular ids and which source code they were allocated for? Thank you && Regards, Jan. Jan iankko Lieskovsky / Red Hat Security Response Team - CVE Request - SA-CORE-2013-001 (one JQuery X < 1.63 issue and two Drupal modules issues) Jan Lieskovsky (Jan 17)
Link to fixed post ; http://www.reddit.com/r/IAmA/comments/t1wvy/fixed_with_proof_i_nearly_killed_myself_when_i/ I have no idea if people are interested in this, if not, then I won't mind a flood of downvotes. I just know and have seen people who're curious about how things actually are in a mental asylum, because most of them still have the 'one flew over the cuckoo's nest' image. I would have died if my seizures following my suicide attempt wouldn't have been so loud, it woke my parents and they found me and called 911. I'm better now, my life is okay, don't worry. I had been bullied my whole life and found that life was actually boring and useless. I still think it is, but I manage to make the best out of it. I was forced, after a night in the hospital pumping the medicine out of my stomach, to go to a mental asylum ward or whatever it's called in English. A youth one, of course. For people from 12-18, though sometimes there would be 10-year olds who were really problematic or 'crazy'. At first I needed to spend 2 weeks in the closed off section, for people who 'were harming to their surroundings and/or themselves'. Then I needed to stay 3 weeks extra because they couldn't get through to my personality. Then I was transferred to an 'open' section, the building had like 8 of them, and mine was for people from 12-18 who needed structure. It was quite fun, actually. I made friends, learned how to NOT dangle at the bottom of a social hierarchy, but of course I witnessed a few less pleasant things as well. For question related things; I've ONCE been into isolation because of 'suicidal' behaviour, and I've seen people having psychoses and throwing autistic fits etc. I've been to a school that was attached to it as well, with really cool and interesting, but troubled class mates. After a bit more than a year, I got out, rebuilt my life, and although it's still not great, I'm managing. I'm almost 16 now. 
TL;DR: I ended up in an asylum and witnessed interesting things as I made progress through therapy and social interaction. So, if anyone's interested in the way things are in an asylum/therapy/special school, AMA? EDIT: WOWOWOW. I was gone for the day, I didn't know I'd get so much attention. Going to answer now, sorry. I'm not a troll, but I wonder how I'd provide proof? EDIT2: I'm a girl. Again, how do I provide proof? Tell me.
WindowsNetworking.com Monthly Newsletter of July 2010 Sponsored by: Softinventive Lab Welcome to the WindowsNetworking.com newsletter by Debra Littlejohn Shinder, MVP. Each month we will bring you interesting and helpful information on the world of Windows Networking. We want to know what all *you* are interested in hearing about. Please send your suggestions for future newsletter content to: firstname.lastname@example.org I just completed an article about the consumerization of IT and its security implications for our sister website, WindowSecurity.com. Look for it to be published soon. But in the meantime, security isn't the only problem that the consumerization trend has created for network administrators. I read recently that during the recent World Cup, the "March Madness" NCAA basketball tournament and other popular sports events, employees watching web video of those games in the workplace brought some networks to their knees. Workers' expectations that they will be allowed to use company computers - or their own devices, plugged into or wirelessly connected to the company network - for a certain amount of personal use have gradually crept into the corporate culture. The employee sees it from the perspective of "I'm not behind on my work and I'm on my fifteen-minute break or my lunch hour, so what's the difference between me watching the game (or the YouTube videos of my grandkids) and the employee who spends his/her down time standing around the water cooler, gossiping about the boss, or in the lounge, watching the news on TV?" In a nutshell, the difference (along with security) is bandwidth. Employees A, B and C might be on break, but if they spend that break streaming high-bandwidth videos, how does that impact employees D, E and F who are trying to get real work done and whose connections are slowed by the congestion?
This might not be a problem when we're talking about three employees, but multiply that by several hundred or several thousand who are taking their lunch breaks at the same time, and those who lunch early or late might find themselves sitting and waiting when they're trying to get a rush project done. Many employees who wouldn't use the company's computers to do their personal business think nothing of using its network. They bring in their smart phones, laptops or iPads and then connect them via wi-fi to the company network. It's a lot cheaper than buying a high-priced data plan from a cellular provider or, if you do have such a plan, using up some of your precious data allocation now that some carriers are eliminating their unlimited plans. Some of them may not have fast Internet connections at home, so they bring their laptops to the office to connect and download those big files or watch online video that doesn't work well over their slower connections. They figure they're using their own devices and "only" using the company network to get to the Internet, so no harm is done. Streaming media is a bandwidth hog that some companies didn't take into consideration when they formulated their employee-friendly usage policies. After all, it all started with employees wanting to be able to check their personal email when on break, something that (unless large attachments were involved) didn't use much bandwidth. If your corporate network has bandwidth to spare, no problem - but some small and medium-size businesses are already straining the limits. If you pay for Internet usage on a metered basis, the bandwidth hogs cost you money. Other consumer-oriented applications, such as peer-to-peer (P2P) file sharing and multi-player online games, are also big bandwidth users. It's not just consumer apps and devices that consume excessive amounts of bandwidth, though.
Some legitimate business applications are being misused or overused, resulting in strains on the company bandwidth. For example, sometimes when a company implements a new technology such as video conferencing, they go overboard with it and start using it when it's not necessary or even desirable, just out of fascination with the "new toy." Firing up a video conference for every communication that could be just as efficiently handled by a phone call is a waste of bandwidth. You can, of course, filter particular web sites or block certain protocols on the corporate network. You can use an edge device, in which case you should have one that can identify and report users for all protocols by requiring authentication at the gateway, such as Microsoft TMG and some (but not all) other edge firewalls. You can also block apps/protocols on each computer. You can either block protocols or applications completely, or you can use bandwidth shaping (a.k.a. traffic shaping or packet shaping) to allocate bandwidth by giving priority to protocols that are more mission-critical. Windows 7/Vista and Windows Server 2008/R2 include support for bandwidth shaping via policy-based Quality of Service (QoS). There are also numerous third-party solutions for traffic management. Windows QoS is built into Group Policy, and with it you can control network usage based on applications, users and computers. You can set policies to prioritize traffic according to values within the Type of Service field in IPv4 packet headers and the Traffic Class field in IPv6. You can configure a user-based policy on the domain controller and propagate it to the user's computer, no matter where or how the user logs onto the network. To find out more about policy-based QoS, follow this link. An edge device is most effective if you want to block everyone on the network from using the specified Internet protocols or applications. In some cases, however, there will be some legitimate business use (even for YouTube).
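The Type of Service / Traffic Class marking that policy-based QoS matches on can also be set by an application itself. As a minimal illustrative sketch (not from the newsletter, and the DSCP class chosen here is just an example), the following Python snippet marks a UDP socket's outgoing traffic with the DSCP "Expedited Forwarding" class, the kind of value a traffic-shaping rule could then prioritize:

```python
import socket

# DSCP "Expedited Forwarding" (46), shifted into the top six bits of the
# IPv4 Type of Service byte -- the same field a policy-based QoS rule
# or traffic shaper would match on.
DSCP_EF = 46
TOS_VALUE = DSCP_EF << 2  # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Read the option back to confirm outgoing datagrams will carry the mark.
applied = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(hex(applied))
sock.close()
```

On Linux this should print 0xb8; on Windows, per-socket ToS marking is typically overridden by the system's QoS policy, which is one reason the Group Policy mechanism described above is the managed route in a domain.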
Then you'll need to either block by user or use the traffic-shaping method to prioritize bandwidth use. Of course, the technological measures are only part of the solution. You need an acceptable use policy that addresses the excessive bandwidth consumption problem, as well. Heavy-handed policies that attempt to completely ban all personal use often backfire; the key is to set reasonable rules and to educate employees as to the rationale behind the restrictions. People are much more willing to accept and support rules that they understand. By Debra Littlejohn Shinder, MVP 3. WindowsNetworking.com Articles of Interest How to hide the Public shortcuts on the folder and favorites list Removing the shortcut from the Favorite Links is easy. Just open your Links folder: C:\Users\username\Links. Then delete the Public.lnk. Taking the shortcut off of the Folders List, however, requires a registry change. You need to delete the following registry key: For more administrator tips, go to WindowsNetworking.com/WindowsTips Something I've learned from talking to a number of people who have rolled out DirectAccess in their organizations is that some of the wireless carriers are not allowing IP Protocol 41 over their networks. I have no idea why they aren't allowing this, but it's causing a problem with wireless DirectAccess clients who need to use 6to4 when assigned a public address when connected to the Internet. What's the solution? Well, the fact is that while 6to4 is the default used when connecting over the Internet, that doesn't mean you have to use 6to4. Instead, you can use Teredo or even IP-HTTPS. This month's tip is that you disable the 6to4 IPv6 transition technology throughout your network. You can do this via Group Policy. This also solves the problem with 6to4 when you use public IP addresses on your intranet, something you see sometimes in large corporate networks and in academic networks. It's safe to disable 6to4, and doing so will save you a ton of trouble.
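For the 6to4 tip above, a couple of command-line equivalents may help. This is a sketch: the netsh command is the standard per-machine switch, while the registry path is my assumption of the v6Transition policy location rather than something stated in the newsletter, so verify it against your Group Policy templates before scripting it.

```shell
:: Disable the 6to4 transition interface on a single machine (Vista/2008 and later):
netsh interface 6to4 set state disabled

:: Scripted equivalent of the Group Policy setting (registry path assumed):
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\TCPIP\v6Transition" ^
    /v 6to4_State /t REG_SZ /d Disabled /f
```

Deploying the same value through a Group Policy Object keeps the setting enforced across the whole network, which is what the tip recommends.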
I know that you get a lot of DirectAccess questions and I will understand if you do not want to answer this one in the newsletter. But I went to TechEd in New Orleans this year and saw a lot of the talks on DirectAccess, including two of them that your husband did. DirectAccess really looks like the answer to a lot of problems we have had in our company regarding VPN and user productivity when they're out of the office. My boss thinks it's a great idea, and his boss has a friend who is already using it and he thought it was fantastic! So now it's my job to figure out what I need. Right now my network is using Windows Server 2003 domain controllers and we have a mix of Windows 2000, Windows 2003 and Windows Server 2008 servers. We do not have any Windows Server 2008 R2 servers and we do not use IPv6 on our network. Our client machines are mostly Windows XP, but we're planning on moving to Windows 7 by the end of the year. Do you think that DirectAccess will work for us? Thanks! - Donny K. Great question. Any chance you saw me at TechEd? I was at the Remote Desktop Server booth and I got to meet a lot of great people there. If not, I hope to be at TechEd in Atlanta next year, so maybe we'll cross paths there. Overall, I think you're in great shape for providing your users DirectAccess connectivity to your corporate network. Let me know if you have any problems getting things set up and I'll make sure to connect you to the right resources to make things go as smoothly as possible.
True, most if not all myths contain a bit of truth, but the key word is "bit". Myths you heard about: That you need to "sell" Enterprise Architecture. Do you? For one, the concept is straightforward. Anyone understands a plan or a city map, and everyone wants a "schematics" of the Enterprise. EA is self-selling. But are those loose IT diagrams you are going to provide to your business really delivering the sold benefits? Selling promises you may not keep will not help your effort or the EA. Can you deliver on your promises? That EA is about strategy. What about documenting the day-to-day operation of the Enterprise to reduce duplication and unnecessary complexity, to improve and optimise the Enterprise? What about describing the key processes and technology to enable tactical change? Strategy existed long before and will survive without EA. The EA is the Enterprise blueprint for managing complexity and change. It enables strategy specification and execution, but it is more than strategy. EA is related to many other Enterprise disciplines but it does not replace them; it just works alongside them. That you have to do the To-Be first. How would the target Enterprise state look should it not be anchored in the reality of the current state? Do you suggest discarding your present systems and processes? Are you committed to a revolution, i.e. starting from tabula rasa and abandoning current investments, rather than to an evolutionary path departing from the existing state? At least, you get rid of the gap analysis phase. And soon after, when the To-Be becomes the As-Is, would you be ignoring it as well, in the strategy development cycle? That you need an Architecture Review Board. You definitely need a decision forum to sanction process, architecture and technology change. But the EA strategy will be the law book for their judgments. Without the EA the forum cannot properly function. Care should be taken, though, not to duplicate the tasks of the existing governance boards.
That architecture principles are key to EA. In my mind, there are just a handful of useful architecture principles. And those are already known to whom it matters. Any more than that and we would have a problem employing them. Architecture principles are there to guide our architectural choices and design. Usually they come mingled with principles that have little to do with architecture. How are you going to use the principle, for what decision or purpose? This is a question that you should answer before producing "the principles". Otherwise they will quickly be buried in the graveyard of EA artifacts that no one ever uses. That we do not need an EA framework. How do we make sure we deliver an EA, the same EA? How can our effort be predictable and replicable? Does every EA architect need to concoct their own framework? An EA framework consists of a frame/chassis onto which the EA artifacts mount to construct the whole. The whole can be broken into parts that can be documented or designed independently. It helps us manage complexity. But if you don't have a proper or a single one, then you may have to go on your own. An extended framework would consist of the development process and governance best practices. That we have EA frameworks. We have a process framework like TOGAF (found useful by IT folks unaware of project management good practices), a cognitive EA approach such as Zachman's and its clones, and other frameworks often consisting mainly of acronyms, such as POLDAT. There are also specialized frameworks for the public sector (FEA), not generic enough to be used in the business sector, and design methodologies like DoDAF/MoDAF for the defence industries, and so on. But it is still a matter of debate whether they returned results or are suitable to your purpose. Imagine a manufacturer having to produce a car using a process like TOGAF or a framework like Zachman's! They don't even describe the parts of an Enterprise. That EA is about business rather than IT.
That's true in theory, but in practice most if not all EA groups are part of IT and are tasked to deliver a technology EA. Nevertheless, without the business view, the IT architecture explains little and has a very diminished audience. Which is what happens. That the Cloud has an impact on EA. It does, but no more than anything else. The business blueprint is about the same, except that you don't care about the insides, or how functions are implemented. The infrastructure is abstracted in terms of servers, storage and networking. The application becomes a business service. Indeed, you have to integrate the business service into your landscape. Here is a framework called GODS-FFLV. GODS stands for and describes the key parts of an Enterprise (Governance, Operations, Development and Support), while FFLV illustrates the components of the framework: Functions, Flows, Layers and Views. All integrated in one. It comes with a metamodel, functions and flows maps, Business, Technology and People layers, an IT technology architecture template, a development process, governance best practices, and many more.
A Letter From Paulo Trezentos on the Goals for AppCoins for 2019 The beginning of the year is always the best time to assess all of the progress that has been made so far by the AppCoins team. It's always useful to start by identifying where we performed well, and where we didn't. This is truly the only way we can learn and improve our work during 2019. That said, I would like to highlight our top three most significant achievements, and also mention the biggest challenges where we could have done better: On a positive note, we as a team were able to deliver on the following three points. 1| Traction On User Side The successful integration of the protocol with the Aptoide app store and its user base of over 200 million has resulted in a very interesting bootstrap of the technology when it comes to in-app purchases and rewards for users. As shown in the APPC explorer, user adoption started to take off in September, reaching a significant peak in December. During the month of December, almost 10,000 in-app purchase transactions took place, and more than 700 users were rewarded. The fact that the team was committed to achieving all of the milestones that had been set previously, and completing them by the established dates, certainly contributed to these excellent results. 2| Meeting Deadlines And Keeping Crypto Holders Updated The AppCoins and Aptoide teams were always able to meet the deadlines that were established in the white paper, and were even able to anticipate some of them (e.g., the Ada release). In parallel, we made it mandatory to keep the APPC token holders updated at all times. Take ANU, the biweekly update: it has never been delayed, not even for one day, in the 25 editions of 2018. The community team and Telegram administrators have done a fantastic job. We listened to you and we acted on it.
3| Developing The Best Blockchain Technology Finally, I am very pleased to say that we worked hard and did a great job when it comes to developing the best blockchain technology to support the AppCoins flows. We know the APPC holders had very high expectations for what could be delivered, and the engineering team has excelled in what has been done. Due to the difficulty of finding a balance between the state of the art of blockchain and the trust needed in the app economy, we proposed a foundational Trust model, and later executed it in the AppCoins and AppCoins Credits deploys. Finally, the team is also glad to have achieved what can be described as a maturity level for the AppCoins Wallet. The wallet has been widely downloaded from the main Android app stores (Aptoide and Google Play) and has been praised for its elegant aesthetics and user experience, which allow all major operations to be performed with AppCoins with minimum friction. Not everything has been positive. We think we could have done better in three main areas. Read more about them below. 1| The Adoption By Android Developers Was Lower Than Expected: 57 unique apps integrated IAP in December (source: APPC explorer). Even with a dedicated platform that could be used by the developers, the migration path was too complicated, and of the 3,000 developers that signed up to the platform, the number that released their apps with IAP was low. We identified the friction point (IAP SDK migration) and started to develop tools to enable an automatic migration. In a few weeks, we will release them together with a new way of making payments with AppCoins (One-Step Payment), as well as a rebranding of the Developers platform. 2| The Developer's Reputation System Is Still Under Development Creating a Developer's Reputation System was one of the goals of the AppCoins white paper.
After speaking with the developer community, we felt that this would have to be a medium-term implementation, since it needs the other two flows (In-App Purchases and User Attention Rewards) to take off first. The reason for this dependency is that without a critical mass of IAP and Rewards, there isn't enough information to build the Developer's Reputation. This will be easily resolved once we reach more than one million monthly transactions. 3| Brand Recognition And The Crypto Community Our reach into the crypto community was quite underwhelming given the potential of AppCoins. AppCoins is currently ranked #400 on CoinMarketCap (Jan 8, 2019). This position is too low when we think about what can be achieved, with the #1 crypto project in apps billing, in a market that is estimated to be worth around $122 billion USD per year in 2019. We have to do better when it comes to brand recognition and outreach. This is something that our team is working on as we speak. Ultimately, the demand created by token purchases (the result of the IAP curve growth) will drive our position up on CMC. This is a good goal for us in 2019. Goals for 2019 If 2018 was the year where we developed the foundations of the technology and tested the network incentives (users and developers), 2019 will be the year of the bootstrap. Volume and scale will be guiding our efforts. As previously stated, we'll devote extra attention to the Developers and AppCoins SDK integration. In addition, in 2019 the partnership between Unity and AppCoins will kick off with an easier way for every Unity developer to integrate AppCoins. At the same time, migration tools from the Google Play SDK to the AppCoins SDK will be available. When it comes to the user experience, one of the major features that will be available in 2019 is Top-Ups of AppCoins Credits and Peer-to-Peer Transfers.
We envision that the possibility of buying AppCoins Credits and then transferring them to another wallet (a wallet belonging to a family member or a friend who doesn't yet have a credit card) will bring unbanked people into the apps economy. Today, 95% of users still don't buy in-app products, and we want to change that. Finally, we'll reduce the friction in the purchase and reward flows to make sure that every Android user will also be an AppCoins user. Thank you all for believing in our vision and supporting us! We're already proud blockchain pioneers, but our ambitions won't stop here. Together, let's make 2019 a remarkable year for AppCoins! After all, this will be the "bootstrap year." Here's to an incredible year together. Paulo Trezentos, AppCoins CEO As always, you're invited to follow our work regarding all of the products we're working on:
Higher resolutions cause applications with the chromium and firefox rule flag to have a border gap. It seems like the higher the monitor resolution, the bigger the gap between the appbar and the window becomes. This is monitor 1 at 3440 * 1440, monitor 2 at 1920 * 1080, and monitor 3 at 2560 * 1440. If I take monitor 3 and lower the resolution to 1920 * 1080, the gap goes away completely. When using Firefox I only get a gap on the topmost edge, below the appbar, when the resolution is above 1920 * 1080. For the left, right, and bottom edges I see no gap at all at any resolution. When testing out the Chromium version of Microsoft Edge, with the chromium rule set in my config, I see no gap at all on either monitor 1 (3440 * 1440) or monitor 2 (1920 * 1080), but a slight gap at the top on monitor 3 (2560 * 1440). Left is Firefox and right is Edge. Adjusting monitor 3 to 1920 * 1080 fixes it for both. Regardless of the browser and resolution, I only see a gap at the top, never on the sides or bottom. Also, looking at the first set of screenshots, I think monitor 1 doesn't have a gap at all with Edge because for some reason the gap on Edge is smaller than on Firefox, so Edge is able to fill the smaller gap in comparison to monitor 3. I've been checking this out a bit and I think a fix will likely come from src/renderer/win.rs. Here are a couple of things I've been looking at regarding this issue. The first thing I noticed, on the machine I develop on, is that when I go into Windows and run nog I'm getting 0 back from the calls to GetSystemMetricsForDpi, which according to the docs is an error. I changed some of the code locally to attempt GetSystemMetricsForDpi and then fall back on GetSystemMetrics, which does give me results. A little verbose, but something like this...
```rust
let border_width = match GetSystemMetricsForDpi(SM_CXFRAME, display.dpi) {
    0 => {
        use_dpi = false;
        GetSystemMetrics(SM_CXFRAME)
    }
    x => {
        use_dpi = true;
        x
    }
};

let border_height = if use_dpi {
    GetSystemMetricsForDpi(SM_CYFRAME, display.dpi)
} else {
    GetSystemMetrics(SM_CYFRAME)
};
```

And also calling AdjustWindowRectEx instead of AdjustWindowRectExForDpi if the DPI functions return 0. Doing that makes it behave a bit more consistently across my monitors, but there are still size issues that I think are caused by the Windows rect functions and Chrome/Firefox using this shadow border thing. The weird offset rendering that @keepitsane posted above, and that I've noticed on my machines, matches the sizes of these shadow borders. I tried this to find the shadow offset in order to size the Chrome window better:

```rust
let mut client_rect = RECT { bottom: 0, left: 0, right: 0, top: 0 };
GetClientRect(window.id.into(), &mut client_rect);

let mut window_rect = RECT { bottom: 0, left: 0, right: 0, top: 0 };
GetWindowRect(window.id.into(), &mut window_rect);

// Push the top edge down by the difference between the full window rect
// and the client area (title bar + shadow).
top += (window_rect.bottom - window_rect.top) - client_rect.bottom;
```

And it looks a bit more aligned. The following shows the Chrome window surrounded by Notepad tiles. The left side is unfocused and the right side is focused. But Firefox placed in the same spot behaves very differently. Additionally, at some point when resizing tiles, if I make a Firefox tile too small it just starts ignoring the sizing, like this (notice the left/right alignment of the tile above the Firefox window and the tile below it, but Firefox just overlaps everything). I'll keep messing with it, but honestly, if there's a way to just remove the shadow border on them, that might be easier. I might experiment with manually setting styles.
Initially I tried to automatically calculate the border size to then size the window correctly, but I couldn't manage to do it.
How to write a character losing time. Stuff happens - blackout - more stuff happens. This is both how the character is supposed to experience time, and how it's written so far. But it has a number of problems. The biggest ones are: it looks bad (I just know my readers will get annoyed with it), and it makes it extremely obvious that the time was lost. The character isn't supposed to remember that she forgot. The idea of the story is psychological horror played for laughs. A powerful and irresponsible wizard places a spell on a girl that temporarily causes her to periodically forget periods of time. Combined with not remembering that she forgot, this results in lost time, and eventually changes/ruins her life. But if you write things like "Annie called the next customer forward; it was going to be a long day" and then "Annie put her keys down as she entered her apartment", it just doesn't feel right. And not in a disturbing, mysterious way, but in a boring "this writer is incompetent" way. By the end of the first act, the main character will realize that she's losing time and start to try to figure out what's going on. But in the first act, she doesn't realize anything is wrong at all. She just goes about her daily life. This of course only makes the problem worse, and by the time she catches on, it's already been 2 weeks. I think I have a really strong premise and good structure, but I'm struggling with how to get an idea like this down on paper. Are there any authors that would be good to read? How do I even begin to approach this problem? Watching Memento might help you come up with ideas... There are some variants I can think of: Expand the spell to also make the character forget she's lost time, and write the lost scenes as non-important scenes you'd usually skip anyway. Make the character really bored with life, so that when she loses time, like a bus ride or time at work, she just shrugs and moves on. Maybe even congratulating herself for having perfected zoning out at work.
Mess with the character's mind using drugs, sleep deprivation, medication, psychological disabilities, etc. to make her not question that time is lost. Make the character reconstruct the lost time, using (incorrect) guesses and assumptions, until she convinces herself that she was probably just going on autopilot. Make the character go: whoa! What just happened? WTF? I need help! 911!! Wait! Shit, they'll lock me up for sure... just like uncle Bob! No way! Hey! Maybe I teleported? No, I lost 2 hours! Whaaaat? I think you'd get the best effect if you cut scenes like you'd cut out unimportant scenes. Let's say your character leaves her job, rides the bus, and then arrives home, but she forgets the bus ride. If, in a normal novel, nothing happened on the bus, we'd just not write that scene. It's perfectly OK for a character to leave work and arrive home in the next scene. Then, depending on how you've decided your character should react to the lost time, she will either have some reaction or not. Once you've figured out how your character should deal with the lost-time "jump", you'd let the effects of what the character did during that lost time start trickling back into her life: things moving around in her apartment, people coming to meetings she hasn't arranged, waking up in bed with strange people, the police wanting to interrogate her as a witness or suspect, a loan shark wanting his loan repaid, etc. will, if not before, give her an idea she's lost time... or it's just that usual payday-weekend Saturday-morning wake-up call... The closest example of your story I can think of is Fight Club. It uses sleep deprivation to explain why the character doesn't figure out what's happening until far into the story. It exists as a novel and a movie. I've read the novel, but I've forgotten any differences between it and the movie... Another story that is similar is Shining Girls, where the main character jumps between realities and has to figure out the changes.
(I only saw the TV show, in the novel things are apparently done differently...) When I first read your question it got me thinking of Memento, but I don't think the solution they used would work for this... or perhaps it could... perhaps you could do a hybrid solution. "Let the effects of what the character did during that lost time start trickling back into her life" I really like this analogy. Thanks!
by Michael S. Kaplan, published on 2007/11/06 10:16 -05:00, original URI: http://blogs.msdn.com/b/michkap/archive/2007/11/06/5928970.aspx Allan provided me with my daily scare back at the end of September with a note to me via the Contact link: I would so appreciate a little advice and wisdom on the font situation in Vista, if you are ready, willing and able. I wonder if all those foreign language fonts are necessary, as they add what seems to me to be a huge amount of clutter when getting down to business. I also notice that I am unable to cut and paste unneeded fonts from the font folder, to back them up. Thanks for any assistance you can offer. I am ready, willing, and able now1. Now, somewhere between the big font list Simon Daniels gave me back in April 2006 (from The big font list in Windows) and the info he gave me in February 2007 (from What are the fonts in Vista?), there are clearly a great many fonts in Vista. With all that I mentioned in that About the Fonts Folder series (parts 1, 2, and 3), I probably managed to convey the fact that not a whole lot of work has been done to make this folder cooler looking and easier to use, even as the number of fonts has gotten bigger and bigger with each successive version. With all of that going on, I knew that after the effort to fix all of the problems I talked about in What isn't in the default install for NLS by putting everything in there, for every person who was happy with this change and all that it enables, there would be somebody else such as Allan who feels like all of the clutter makes it hard to get down to business (and who would have been happy with over 60 fonts being taken off the big list by default!). Although I admit I am more sympathetic to the former than the latter, I do think that the user interface surrounding fonts really could use some work, something that has been true almost since the beginning. But there is no ready-made solution to change any of this, sorry about that!
But the folks who didn't have enough installed to get their work done have suffered for several versions; perhaps with the no-good-nicks out, now God's chillun get their innings? :-) For the cut/paste issue, what about copy/paste? Or using the built-in "Previous Versions" functionality or the built-in backup functionality? It seems like there are a lot of ways to do the backup that don't involve removing fonts.... 1 - DISCLAIMER: After I got the note I initially fell over and broke something I'd rather not talk about, other than to say that the cast just came off2. 2 - DISCLAIMER: The previous disclaimer was a dramatization, though I honestly did bite my tongue accidentally when I was reading the note. No cast, and it is better now. This post brought to you by ᘺ (U+163a, a.k.a. CANADIAN SYLLABICS CARRIER TLU) # Gabe on 6 Nov 2007 10:52 AM: I suppose the font enumeration code should somehow be able to pay attention to the Hidden attribute. If you hide a font, you would still be able to ask for it by name, but it won't show up in font dialogs. That way you can still see text that happens to be written in some foreign script, but all those fonts being available wouldn't necessarily have to enlarge the font list in every app. # Johannes Roessel on 7 Nov 2007 2:30 AM: Well, he said that cutting/pasting fonts wasn't for backups but rather to remove clutter from the folder.
Get to grips with Logic's handling of external MIDI synths. Software synths have taken over many of the roles that were once fulfilled by keyboard or rackmounting instruments, but many of us still have favourite hardware synths we'd like to integrate into our systems. And although Logic Pro has very capable MIDI features, its handling of external MIDI synths is not quite as intuitive as it might be. It often turns out that there's more than one way to do a job, with no one way being clearly the 'right' way. For this article I'm going to go through the way I set up my own external Roland JV2080 (using only its stereo output). Dive into the Logic manual and you'll see a description of something called the External Instrument plug-in, which can be inserted on a Software Instrument track and can be found amongst the other Logic software instruments you have available. If you normally select patches from the front panel of your synth module, rather than via software, then there's no reason not to work using only this External Instrument plug-in. The main limitation is that you can't get direct access to an on-screen listing of your synth patches. Indeed, you can't see the patch names on screen at all unless you have created a Multi-Instrument in the Environment, and even then, you have to visit the Environment window to select them. Once you set the destination MIDI port and channel for your external synth on the plug-in, you can record MIDI data directly into the Instrument track and never have to worry about visiting the Environment. This is a very simple way to work, and if you need to set up a multitimbral instrument, just set up several External Instrument plug-ins on consecutive Instrument tracks with the appropriate MIDI channel set for each. Assuming you're happy to have all of the synth sounds coming into Logic on the same stereo channel, you need only set the audio output on the first one. 
Obviously, this should match up with a couple of free line inputs on your audio interface. If you want to be able to access your synth's patch names from the Arrange page, here's how I set up my own system to do it. Stage one is to create a new Multi-Instrument in the 'MIDI Instr' layer of Logic's Environment, logically enough by choosing 'Multi-Instrument' from the New menu. This shows up as a box of 16 numbered rectangles. If you haven't visited this page before you'll probably already find a GM instrument has been created for you, but ignore that (or even delete it if you're feeling bold) and create your new one, as this will be the 'avatar' for your physical hardware synth module. Your newly created instrument will have diagonal lines across all 16 sections, which means that they are currently deactivated. Click on each of the channels you intend to use to clear the diagonal line; to replace a diagonal line removed by mistake, select the part and then tick the box next to 'Icon'. As this name suggests, you can also give your synth an appropriate visual representation, and set the actual MIDI channels to correspond to the numbered rectangles if they don't do so already: click on rectangle 1, set the MIDI channel to 1, click on rectangle 2, set its MIDI channel to channel 2, and so on. If your synth gives you drums on MIDI channel 10, set 'No MIDI Transpose' in the parameter box for this part. When you click the top of the Multi-Instrument object to show its global settings, the icon box should be ticked and the MIDI channels should read 'All'. If you have a multi-port MIDI interface you also need to set the port here. Also ensure you click the Program, Volume and Pan boxes for each part so your synth will respond to these commands. By working this way, you can enter patch and bank names into the Multi-Instrument object along with the type of bank change command your synth needs. The default is a set of GM patch names, but you probably won't want those. 
Typing in hundreds of patch names is less than fun; I did all 1000 patches for an Oberheim Matrix 1000 once and my fingers still hurt! Fortunately you'll find many common instruments online already done for you. Bless those friendly Logic users! All you need to do is open up a song containing the desired Multi-Instruments, double-click on one of the parts and then, when the patch list appears, select Copy All Patch Names from the Options menu. You now paste these into your own Multi-Instrument, this time by selecting Paste All Names from the Options menu. Alternatively, you can open the 'patch name' song at the same time as a new one and then copy and paste or drag the entire Multi-Instrument object from its Environment to yours. Logic can also import patch names from a text file, as long as each line is separated by a carriage return. So, if you can find a patch listing for your synth online, it shouldn't be too hard to create a text document that will import directly, again via the Options menu. If you have multiple hardware MIDI instruments that support patch changes, set up a Multi-Instrument for each — even those that aren't multitimbral — as only the Multi-Instrument objects can hold patch names. For synths that are monotimbral, only turn on the one MIDI part you wish to use. Note that Logic doesn't enable you to name banks; these always show up named after the first patch in the bank. There's also a 14-bank limit, which I find very frustrating, as my JV2080 is full of expansion cards and offers 16 banks of sounds. Some users set up a separate Multi-Instrument to access these extra banks, or they simply type in the required bank and patch data in the Event List. I've whinged about the 14-bank limit for around 10 years now but still it remains to taunt me! OK, that's the MIDI side of the instrument more or less set up, but how do you get its audio into Logic's mixer? 
You'll need an audio interface with at least two spare inputs (for a stereo synth); then, in the Arrange page, you can either use an Aux input channel or, more elegantly, Logic's own External Instrument plug-in. As already described, this plug-in enables you to route MIDI information to an external MIDI hardware module. However, in my example we've already looked after the MIDI routing in the Environment page, so all we need to set is the physical audio inputs to which the synth is connected. Leave the MIDI part set to blank. You get a choice of mono or stereo, and all your physical interface inputs are shown in a drop-down menu. This plug-in also has a gain slider so you can tweak the overall level if the synth output is a bit on the low side. The next stage is to create a series of external MIDI tracks in your Arrange window. In the screenshot I've created eight, as I want to use my JV2080 to give me eight instrument parts on eight MIDI channels. I also activated channel 10, as that provides the drum sounds. What you should end up with is one master Software Instrument track where the audio from the synth comes into the Logic mixer, plus a separate MIDI track for each 'part' of the instrument. The safest way to assign tracks once created is to use the 'Reassign Track' right-click option. This ensures they're linked to the correct MIDI instrument and correct part. The icon you set up in the Environment for your Multi-Instrument should also appear in the Arrange track header. Each of these parts will have its own volume and pan controls, plus two controller knobs for sending the GM controller data for chorus and reverb. Note that if you create a MIDI Instrument track and it happens not to default to the instrument you want, you shouldn't try to change it by adjusting the port or channel settings in the Arrange page parameter box, as you may inadvertently change what you set up in the Environment. Stick to 'Reassign Track'!
Now, if you double-click on the track name in the parameter box to the left (or on the name at the bottom of the Arrange page track fader at the bottom left of the screen), the patch list you so painstakingly set up in the Environment will open and you can select both the bank and patch you wish to use. Ensure that in Project Settings (in the main File menu) you visit the MIDI section and tick Used Instrument MIDI Settings so that the correct MIDI information is sent to the external device when you open a song. In theory, ticking this last box should be all you need to do to ensure that the right sound is called up when you load a new song, but on occasions it hasn't seemed to work as reliably as it used to with Logic 8. So, now I adopt a belt-and-braces approach. Once you have all the sounds set up and you've recorded some data, go to the MIDI menu and select 'Insert Instrument MIDI Settings As Events'. This places bank and patch data right at the start of your tracks just before the first note played. You'll see the patch numbers at the start of the first region in each MIDI track on the Arrange page. Additional mid-song patch changes can be inserted as MIDI events in the Event List, should you need them. Once you've got the audio from your synth coming into Logic's mixer, you can insert processing plug-ins to treat the overall output from the synth, and the same goes for sends, bus routing and so on. Just remember that when using hardware MIDI instruments you must perform bounces in real time, not offline. And, finally, though setting up your MIDI instruments isn't difficult, it does take time, so make sure you save all your hard work in a template song and also make backups. You wouldn't want to have to do it again, would you?
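What the Multi-Instrument actually transmits when you pick a patch is a bank-select pair followed by a program change. As an illustration outside Logic, the sketch below builds those raw MIDI bytes in plain Python; the channel, bank, and patch numbers are arbitrary examples, not anything specific to the JV2080.

```python
# Build the raw MIDI bytes for a bank select followed by a program change.
# On the wire, channels are 0-based (MIDI channel 1 = 0) and a 14-bit bank
# number is split into CC0 (MSB) and CC32 (LSB).

def bank_and_program(channel: int, bank: int, program: int) -> bytes:
    """Return the byte sequence: CC0 (bank MSB), CC32 (bank LSB), Program Change."""
    if not (0 <= channel <= 15 and 0 <= bank <= 16383 and 0 <= program <= 127):
        raise ValueError("value out of MIDI range")
    msb, lsb = divmod(bank, 128)
    return bytes([
        0xB0 | channel, 0x00, msb,   # Control Change 0: Bank Select MSB
        0xB0 | channel, 0x20, lsb,   # Control Change 32: Bank Select LSB
        0xC0 | channel, program,     # Program Change
    ])

# Example: patch 5 in bank 1 on MIDI channel 10 (the drum channel used above).
msg = bank_and_program(channel=9, bank=1, program=5)
print(msg.hex())  # b90000b92001c905
```

These are exactly the kinds of events you would see in the Event List after using 'Insert Instrument MIDI Settings As Events'.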
With the increase in the use of computers, smartphones, and the internet in daily life, the need for security in information and systems has grown. Hacking is the activity of identifying weaknesses in a computer system or a network and exploiting them to gain access to personal or business data; a hacker is a person who finds and exploits those weaknesses. Hackers are usually skilled computer programmers with knowledge of computer security. There are three main types of hackers:
- Cracker (Black hat): A hacker who gains unauthorized access to computer systems for personal gain. The intent is usually to steal corporate data, violate privacy rights, transfer funds from bank accounts, etc.
- Grey hat: A hacker who sits between ethical and black-hat hackers. He/she breaks into computer systems without authority with a view to identifying weaknesses and revealing them to the system owner.
- Ethical Hacker (White hat): A security hacker who gains access to systems with a view to fixing the identified weaknesses. They may also perform penetration testing and vulnerability assessments.
So ethical hacking, also known as penetration testing or pen testing, is legally breaking into computers and devices to test an organization's defenses.
Key concepts of ethical hacking
- Stay legal. Obtain proper approval before accessing and performing a security assessment.
- Define the scope. Determine the scope of the assessment so that the ethical hacker's work remains legal and within the organization's approved boundaries.
- Report vulnerabilities. Notify the organization of all vulnerabilities discovered during the assessment. Provide remediation advice for resolving these vulnerabilities.
- Respect data sensitivity. Depending on the data sensitivity, ethical hackers may have to agree to a non-disclosure agreement, in addition to other terms and conditions required by the assessed organization.
Phases of hacking
There are five phases of hacking:
- Reconnaissance
- Scanning
- Gaining Access
- Maintaining Access
- Clearing Tracks
Reconnaissance, also called the footprinting and information-gathering phase, is the first step of hacking. This is the preparatory phase, in which we collect as much information as possible about the target. Information is normally collected about three groups:
- Network
- Host
- People involved
Three types of scanning are involved:
Port scanning: scanning the target for information such as open ports, live systems, and the various services running on each host.
Vulnerability scanning: checking the target for weaknesses or vulnerabilities which can be exploited, usually done with the help of automated tools.
Network mapping: finding the topology of the network (routers, firewalls, servers if any, and host information) and drawing a network diagram with the available information. This map may serve as a valuable piece of information throughout the hacking process.
In the Gaining Access phase, the attacker breaks into the system or network using various tools or methods. After entering a system, he has to escalate his privileges to administrator level so that he can install any applications he needs, or modify or hide data. In the Maintaining Access phase, the hacker keeps his hold on the system by using Trojans, rootkits, or other malicious files. The aim is to maintain access to the target until he finishes the tasks he planned to accomplish there. In the Clearing Tracks phase, the hacker removes all traces leading back to him. This involves modifying, corrupting, or deleting log entries, modifying registry values, uninstalling all applications he used, and deleting all the folders he created.
What problems does ethical hacking identify? Some of the most common vulnerabilities discovered by ethical hackers are:
- Injection attacks
- Broken authentication
- Security misconfigurations
- Use of components with known vulnerabilities
- Sensitive data exposure
Limitations of ethical hacking
- Limited scope. Ethical hackers cannot progress beyond a defined scope to make an attack successful. However, it's not unreasonable to discuss out-of-scope attack potential with the organization.
- Resource constraints. Malicious hackers don't have the time constraints that ethical hackers often face. Computing power and budget are additional constraints on ethical hackers.
- Restricted methods. Some organizations ask experts to avoid test cases that cause the servers to crash (e.g., Denial of Service (DoS) attacks).
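The port-scanning step described above can be sketched in a few lines of Python using the standard socket module. This is only an illustration of the technique; run it solely against hosts you are explicitly authorized to test.

```python
import socket

def scan_ports(host: str, ports, timeout: float = 0.2):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Scan a small range on the local machine as a harmless demonstration.
    print(scan_ports("127.0.0.1", range(20, 90)))
```

Real scanners such as Nmap add service and version detection on top of this basic connect scan, which is why the reconnaissance and scanning phases usually rely on dedicated tools.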
Solaris Zones in the Real World At one of the clients I'm assigned to at the moment, we're moving our development environment to Solaris 10 on Sun x4100 servers. We have two physical machines, one for our CruiseControl environments, and one for all our testing. To make good use of the resources we have (dual-core CPUs, lots of RAM) I've been carving them into zones. I've tinkered with zones in Solaris 10 ever since the first beta build that featured them, but it was always for little things and never anything serious. Consequently I thought they were quick and painless. Note the use of the word "thought". Don't get me wrong, they are the (almost) perfect solution for what we need; it's just that if you're planning on doing anything serious with them, here's a list of gotchas you need to take into consideration. The first problem I bumped into was stability. Once I had configured and booted the zones I wanted, I discovered that ssh-keygen would segfault and dump core when run from within a zone. Normal ssh and scp commands would also occasionally segfault. After some very light scratching around, I decided to patch the systems to see if that made the problem go away. It did not, so I replaced the bundled SSH with the Sun Freeware OpenSSH package. This is when I found the next problem: patching. For understandable reasons, Sun have restricted access to patches for Solaris 10. The days of just pulling down the latest recommended patch cluster to sort out your machines have gone. Sun now recommend you use Update Manager, a Java GUI app that registers your machine with Sun and lists what patches you can download. All sounds reasonable in theory, but the first few times I tried it, it kept on blowing out with a com.sun.cns.authentication.CMDExecutionException. Turns out it's broken on machines with zones, and you need to manually download and apply a patch to fix it.
Another thing to remember when patching is to make sure that all zones you have configured have been properly initialized, and that you've been through the system identification at first boot. And then we have the niggles category. There is no lsof package for amd64 Solaris 10 yet, but you can script most of what you need using pfiles and fuser. DTrace only works in the global zone. While the global zone has all the rights it needs to trace what's going on, if you're running more than two or three zones, trying to find the specific process you're trying to debug can be a pain. With all this pain, is it worth it for development and testing? Most definitely! Zones allow you to have all the production-like environments you need for testing, or even just for developers to spike ideas in. Clients are happy because they don't need to buy so much hardware. Testers are happy because if they need a new environment they can have it in a matter of hours instead of days or weeks. Sys admins are happy because they don't have to keep finding rack space for more machines.
Annu. Rev. Astron. Astrophys. 1994. 32. Copyright © 1994 by Annual Reviews. All rights reserved.
4.2. Window Functions
It has become conventional to describe the details of the instrument and the observing strategy in terms of a window function Wℓ, which describes the sensitivity of the experiment to the modes of the spherical harmonic decomposition of the CMB temperature fluctuations. The signal seen by any experiment can then be considered as the convolution of the sky power and the window function. If one takes an ensemble average (over universes) of this expression, then aℓ² → ⟨aℓ²⟩ = (2ℓ + 1)Cℓ. Often this ensemble average is assumed when the window function is computed. The simplest and most common window function is that due to finite beam resolution. As expected, finite resolution introduces a high-ℓ cutoff. If the beam has a Gaussian response with a Gaussian width of σ, the window function is Wℓ = exp[−ℓ(ℓ + 1)σ²] (see e.g. Silk & Wilson 1980, Bond & Efstathiou 1984, White 1992). For an experiment that measures temperatures by differencing 2- or 3-beam setups, the window functions, in addition to the beam smoothing factor, involve the Legendre polynomials Pℓ(cos θ) (see e.g. Bond & Efstathiou 1987), where θ is the angle between the beams. Note that these types of experiments are not sensitive to the low-ℓ modes of the multipole expansion because of the differencing. Since the high-ℓ cutoff is controlled by the beam width while the separation (or chop) controls the low-ℓ behavior, one can increase both the width and height of the window function by separating these scales as much as possible. Such a double- or triple-beam differencing strategy is often called a square wave chop. There are, however, other scan strategies that have been used. Several experiments (in particular South Pole, Saskatoon, and MAX) use a sine wave chop, moving the beam continuously back and forth across the sky, sinusoidally in time. Additionally, the temperature is weighted by ±1 or by a harmonic of the chop frequency.
The resulting time-integrated, weighted temperature is then the "difference" assigned to that point on the sky. Window functions for these experiments can be found in Bond et al (1991b), Dodelson & Jubas (1993), White et al (1993), and Bunn et al (1994b). [The window function for MAX, given in White et al (1993), should be multiplied by 1.13 to account for the finite size of the beam on the calibration: see Srednicki et al (1993).] There are also several interferometer experiments which make maps of the intensity of the radiation on small patches of the sky [e.g. ATCA (Subrahmanyan et al 1993), VLA (Fomalont et al 1993), and Timbie & Wilkinson (1990)]. The window function for these experiments can be measured as the Fourier transform of the beam pattern and for accuracy needs to be supplied by the experimenters. We show the window functions vs ℓ for several experiments in Figure 5. Some numbers describing the functions shown here are given in Table 2. Note that the relative heights can have as much to do with the treatment of the data as with the sensitivity, i.e. the window function that is convolved with theory should be consistent with the observers' ΔT/T. It is worth giving an example to illustrate this. Consider a triple-beam set-up, which consists of the difference of a difference of two temperatures. The experimenters could choose to assign a measurement of T1 - (T2 + T3)/2 to a point in direction "1," or they could have chosen to take 2T1 - (T2 + T3). In the latter case, the window function would be four times larger and the "measured" (ΔT/T)rms would be two times bigger.
Figure 5. The window functions for large- and medium-scale experiments as a function of multipole ℓ. From left to right the experiments are COBE (with 10° smoothing), FIRS, Tenerife, SP91, Saskatoon (dashed), Python (dot-dashed), ARGO, MAX, MSAM (3-beam, dashed), White Dish (Method II, neglecting binning), OVRO, and ATCA. Some parameters of the window functions are displayed in Table 2.
The difference in height for the window function would be artificial. While in this case the difference is quite obvious, in some instances the effects can be more subtle. Experimentalists must therefore be explicit about their sampling, weighting, and calibrations before the correct window functions can be computed. [Table 2 note: ℓ represents the multipole at the maximum; ℓ1 and ℓ2 are the "half-peak" points. The maximum value of the window function is also given. For MSAM we present results in 2-beam and 3-beam modes.] Common approximate formulae for the window functions or analysis procedures assume a square wave chop (e.g. Górski 1993, Gundersen et al 1993). This approximation usually does not reproduce the beam pattern on the sky all that well, although it works better for the window function. Even so, such approximations differ from the exact results, e.g. for MAX the difference between the exact result and (29) is ~10% near the peak, and larger off-peak. Given both a theory and the window function, it is straightforward to compute the expected rms temperature fluctuation. In Table 3, we show the predicted (ΔT/T)rms for various experiments, normalized to A = 1. The predictions assume full sky coverage and an "average universe," though actual experiments may measure different values due to incomplete sky coverage or cosmic variance (to be discussed later). It is sometimes possible to define window functions that correspond to off-diagonal elements of the correlation matrix or averages of the form which are required when fitting data. (Note that this is different from the sky-averaged correlation function of the COBE group. It is not an average over our observed sky, but the covariance matrix required when computing likelihood functions, assuming Gaussian statistics for the temperature fluctuations.)
In general, the window function approach works well for computing (ΔT/T)rms or for experiments in which the data span only one dimension (such as the individual linear scans of the ACME South Pole experiment). In other cases, however, the data are two-dimensional on the sky and there can be strong anisotropies in the theoretical covariance matrix which are difficult to include in this manner. Alternative approaches are then preferable (see e.g. Srednicki et al 1993). Also, if the scanning strategy or data analysis procedure is sufficiently tortuous, the window function approach is extremely complicated and simulations of the scanning, binning, and analysis become necessary. Coarse binning of data in an experiment which scans smoothly (rather than "stepping") across the sky is one example of this, where correlations introduced by the binning will be important. [Table 3 note: We show the predicted (ΔT/T)rms for various experiments, normalized to A = 1. The predictions are for an all-sky average and an "average universe"; individual experiments may measure different values due to incomplete sky coverage or cosmic variance. For MSAM the predictions are shown for 2-beam and 3-beam modes. The column B = 0 refers to an n = 1 power spectrum. All values assume CDM with Ω0 = 1 and h = 0.5.]
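The rms computation described above can be sketched numerically. The short Python fragment below evaluates the Gaussian-beam window Wℓ = exp[−ℓ(ℓ+1)σ²] and the resulting rms fluctuation (ΔT/T)rms = [Σℓ (2ℓ+1) Cℓ Wℓ / 4π]^(1/2); the toy spectrum Cℓ ∝ 1/[ℓ(ℓ+1)] is an assumption for illustration only, not a fit to any experiment.

```python
import math

def gaussian_beam_window(ell: int, sigma: float) -> float:
    """W_l = exp(-l(l+1) sigma^2) for a Gaussian beam of width sigma (radians)."""
    return math.exp(-ell * (ell + 1) * sigma ** 2)

def rms_fluctuation(cls, sigma: float) -> float:
    """(dT/T)_rms = sqrt( sum_l (2l+1) C_l W_l / 4pi ); `cls` maps l -> C_l."""
    total = sum((2 * ell + 1) * c * gaussian_beam_window(ell, sigma) / (4 * math.pi)
                for ell, c in cls.items())
    return math.sqrt(total)

# Toy scale-invariant-like spectrum C_l ~ 1/(l(l+1)) for l = 2..500.
cls = {ell: 1.0 / (ell * (ell + 1)) for ell in range(2, 501)}
print(rms_fluctuation(cls, sigma=0.0))                # no beam smoothing
print(rms_fluctuation(cls, sigma=math.radians(0.5)))  # 0.5 degree beam, smaller rms
```

As expected from the discussion of the high-ℓ cutoff, widening the beam suppresses the contribution of large multipoles and lowers the predicted rms.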
crypto key zeroize pubkey-chain
You can use this command:
SWITCH(config)# crypto key zeroize ?
  ec            Remove EC keys
  pubkey-chain  Remove peer's cached public key
To delete a peer's cached SSH public key, use the crypto key zeroize pubkey-chain command in global configuration mode. As with any VPN configuration, management of RSA keys is not a difficult task: NewYork(config)# crypto key pubkey-chain rsa
The larger the modulus, the more secure the RSA key. However, keys with large modulus values take longer to generate, and encryption and decryption operations take longer with larger keys. When you generate RSA key pairs via the crypto key generate rsa command, you will be prompted to select either usage keys or general-purpose keys. With usage keys, each key is not unnecessarily exposed. Without usage keys, one key is used for both authentication methods, increasing the exposure of that key. General-purpose key pairs are used more frequently than usage key pairs.
How RSA Key Pairs are Associated with a Trustpoint
A trustpoint, also known as the certificate authority (CA), manages certificate requests and issues certificates to participating network devices. These services provide centralized key management for the participating devices and are explicitly trusted by the receiver to validate identities and to create digital certificates.
Caution: Do not manually generate an RSA key pair under a trustpoint. If you want to manually generate the keys, generate the key pairs as usage keys and not as general-purpose keys.
Caution: Certificate renewal with the regenerate option does not work with a key label starting with zero ('0'), for example, '0test'. The CLI allows configuring such a name under a trustpoint, and allows a hostname starting with zero. When configuring the rsakeypair name under a trustpoint, do not configure a name starting with zero.
When the keypair name is not configured and the default keypair is used, make sure the router hostname does not start with zero. Each of these steps is discussed in detail in the following sections. It is best to have every portion of the configuration defined before you begin the implementation.
Configure the Router Host Name and Domain Name
An important part of authentication is that the system must be able to correctly identify itself. For this reason, you must configure the host name and domain name of the router. By configuring the host name and domain name on the router prior to generating the RSA keys, you can be sure that the router keys properly identify the router. To configure the host name of the router, use the hostname command while in global configuration mode. To configure the domain name of the router, use the ip domain-name command with the correct domain name for the router.
The gateways may be specified using IP addresses or host names. If the giaddr keyword is not configured, the Easy VPN server must be configured with a loopback interface to communicate with the DHCP server, and the IP address on the loopback interface determines the scope for the client IP address assignment. The username field allows you to enter your extended authentication (Xauth) username. The group delimiter is compared against the group identifier sent during IKE aggressive mode. Because the client device does not have a user interface option to enable or disable PFS negotiation, the server will notify the client device of the central-site policy via this parameter. Output for the crypto isakmp client configuration group command using the key subcommand will show whether the preshared key is encrypted or unencrypted. To limit the number of connections to a specific server group, use the max-users subcommand. To limit the number of simultaneous logins for users in the server group, use the max-logins subcommand.
If the router hostname does start with zero, configure the rsakeypair name explicitly under the trustpoint with a different name. As a result, the Cisco IOS software can match policy requirements for each CA without compromising the requirements specified by the other CAs, such as key length, key lifetime, and general-purpose versus usage keys. Named key pairs, which are specified via the label key-label option, allow you to have multiple RSA key pairs, enabling the Cisco IOS software to maintain a different key pair for each identity certificate. Any existing RSA keys are not exportable. New keys are generated as nonexportable by default. It is not possible to convert an existing nonexportable key to an exportable key. The key pair that is shared between two routers will allow one router to immediately and transparently take over the functionality of the other router. If the main router were to fail, the standby router could be dropped into the network to replace the failed router without the need to regenerate keys, reenroll with the CA, or manually redistribute keys. Encrypting the PKCS12 or PEM file when it is being exported, deleted, or imported protects the file from unauthorized access and use while it is being transported or stored on an external device. The passphrase can be any phrase that is at least eight characters in length; it can include spaces and punctuation, excluding the question mark (?).
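To tie the steps above together, here is a hypothetical IOS session sketch: set the host name and domain name first so the generated keys identify the router correctly, then generate the key pair, and clear cached peer public keys when needed. The hostname, domain, and modulus are placeholder values, and exact prompts and output vary by platform and IOS release.

```
Router(config)# hostname NewYork
NewYork(config)# ip domain-name example.com
NewYork(config)# crypto key generate rsa general-keys modulus 2048
The name for the keys will be: NewYork.example.com
NewYork(config)# crypto key zeroize pubkey-chain
```

Note that zeroizing keys is destructive; any SSH or PKI session depending on the removed keys will need the keys regenerated or re-learned.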
New release? It has been 2 years since the last official release, and there have been over 200 commits to master since 0.10. There has been some drift between the documentation and the functionality a user gets from downloading the 0.10 version. What would it take to publish 0.10.2 (a milestone exists, just waiting for a fixed bug to be closed) or 0.11 just to bring us into the new decade? We usually try to cut a release so that it includes a specific new feature, and (incidentally) around a hacking event/gathering of some kind. And since we've slowed down on both, that kind of explains why we haven't cut any new one. That said, I don't see any obstacle that would prevent us from baking one, so I suppose we could. @bradfitz , WDYT? friendly ping. On the brew side this formula remains broken; it would be nice to get it building, or maybe potentially remove it from brew if it can't be fixed. cc @bradfitz for thoughts I'll add my unrequested two cents. I think it would be important for the project to release a new official release because there are potential users that assume the development is stalled because they didn't see a new version in two years, e.g. here: https://www.reddit.com/r/DataHoarder/comments/hijitn/perkeep_permanently_keep_your_stuff_for_life/fwgjvh4?utm_source=share&utm_medium=web2x The new release doesn't need to have any special new feature, just set it as a commit that works and announce the new release on the website. Then stop implementing new functionalities for a while (doesn't need to be a long time, maybe just a couple of months is enough) and move on to fix some of the usability issues mentioned in #1244: to me, it makes little sense to implement new things while most users struggle to use what's already there. Most of them are UI issues that would take little time to fix but would greatly improve the user's perception of this project. Having some (little) side projects myself I've often (always?)
followed the approach "implement the hard stuff first, polish things later", only to find out that, since usually the hard stuff is also the most interesting part of the project, once I've implemented it I lose interest and never go on to the polish part, because it's boring. This is ok for side projects, but if you wish Perkeep to be something more than your own project you need to do the boring part too. In this regard, having a new release could attract more users and maybe some new developers that could help to improve the interface, and this, in turn, would attract more users. I would contribute myself if I knew Go, but sooner or later I think I'll learn it and I might get in touch to contribute. I think that for this project the potential is there, but a lot of users that see it for the first time may have the feeling that it's not at the point where it should be after years of development. This perception can be greatly changed by UI and docs improvements as suggested in #1244, so in my humble opinion investing the next two/three months in UI and docs will be worth it in the long run. Thank you, @GTP95! Very well put! I am one of those interested users/developers. I keep coming back to check in on the Perkeep project and I keep not diving in because of exactly what you just wrote. Just wanted to chime in to say I'm a potential user who is in the same boat -- I saw the last release was in 2018 and I immediately went to ignore this, but then found the GitHub project and realized it had a merge in the last 2 months so it's not completely dead. Most users, however, won't do that, and even if they are minor bug fixes getting released as sub-versions (which is what most projects are doing these days), it would project a much stronger/stable image to the community that development is ongoing, potentially even interesting more developers to help contribute. @mpl, if you want to roll a release, SGTM. ACK, will try/do.
I'd also be happy to learn how, so that it doesn't always fall on you. Great to see that finally something is moving! I've since moved on to use Nextcloud, but I will for sure come back in the future to check how things are going, especially regarding the ability to share files with other people and to reserve some space to let others have an encrypted backup of their instance.
GITHUB_ARCHIVE
The current build system seems a little outdated to me. Any ideas about switching to Gradle instead of Ant, and to git instead of svn (maybe using GitHub)? I have some solid experience with this, maybe I could help with it. :)

I don't know how to use Gradle, but I wouldn't be opposed to learning it. I am getting a little better at using git, but pretty much all I can do right now is make commits and pulls. Sourceforge manages our website and ticket tracker for us, while I don't think GitHub does that, so I probably wouldn't want to switch to GitHub. All in all, I'd be fine with changing if someone wanted to do this, so long as they could also do some explaining of what is going on and how to use it, etc.

@bernat - I suggested something similar to this about a year ago when I first started working on the AI. It seems Veqryn is a little more open to it now, as he wasn't thrilled about it then :) I'll warn you though that the code base is rather large, and when I tried to convert it over to GitHub I wasn't able to get all the history to convert. I ended up just having to drop the current code base in here: https://github.com/ron-murhammer/TripleA For a little while I developed there and then generated a patch to commit back to Sourceforge, but it became too painful, so I just decided to use SVN/Sourceforge. Essentially, you'd need to have a website to host the forum and the pages that are up on Sourceforge now. You could move some stuff over to the GitHub wiki but would still need some solution for the rest.

Can you provide e-mails for your GitHub accounts? :) Just to link users together :) If you don't have one, you may register. No need to post it here, you may send it to me as a message, or post your GitHub account name here (that's public either way).

So, like I said above, GitHub can't do everything we need, so it doesn't make sense to switch to it right now. I think it makes more sense to switch from Ant to Gradle first. If, when I step further out of the picture, redrum and others want to switch over to GitHub, that is fine, but I would ask that the GitHub account be set up under a non-personal account. We currently have an "official" email which I, sgb, bung, and redrum have access to: "tripleadevelopers <at> gmail.com", and I think it would make the most sense to set up triplea under that email, rather than your personal email.

Yes, I know :) I thought we could keep the stuff GitHub cannot do here, and the commits there. As for the personal account, that's fine. I can transfer ownership at any time; I used the personal account just for the proof-of-concept part. For doing the actual conversion (source code from SF to GitHub) I'll need the committers' GitHub accounts, as I do not think we can change the commit ownership later. So we'll need to redo this later (then perhaps under a non-personal GitHub user, of course). This just shows it's easily doable.

One thing: during the switch from Ant to Gradle, do we absolutely intend to keep the current folder structure, or can we adhere to the Java conventions? That is, source code in src/main/java and resources in src/main/resources. Commit history will of course remain, as the files will only be renamed, not moved. This is doable in git, so I suppose in SVN too.

At the moment it creates the jar. It does not work yet because the data files are currently treated as Java files rather than as resources. I will need to change the resource loader slightly to fix this. Once added, you just need to enter ./gradlew jar (on Linux) or gradlew jar (on Windows) to build. No need to install anything; Gradle will download and install itself. All project dependencies are downloaded from Maven Central (I upgraded a few of them to the latest versions), and as such they are not needed in the svn repo (we may delete them later on to decrease the repo size).

Please note that, as far as I can see, the binary files were not included in the diff, so you cannot actually try it out without Gradle on your system. If you have any questions or need my support, message me; maybe we can set up a chat at some time :)
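To make the proposal above concrete, here is a minimal build.gradle sketch for the conventional layout being discussed. This is only an illustration: the source directories and the main class name are assumptions, not the actual TripleA configuration.

```groovy
// Hypothetical minimal build script, assuming sources are moved to the
// conventional src/main/java and src/main/resources layout.
apply plugin: 'java'

repositories {
    mavenCentral()   // dependencies come from Maven Central, not the repo
}

sourceSets {
    main {
        java      { srcDirs = ['src/main/java'] }      // assumed source dir
        resources { srcDirs = ['src/main/resources'] } // assumed resource dir
    }
}

jar {
    manifest {
        // Assumed entry point; replace with the project's real main class.
        attributes 'Main-Class': 'games.strategy.engine.framework.GameRunner'
    }
}
```

With the Gradle wrapper checked in, ./gradlew jar (or gradlew jar on Windows) builds the jar without any local Gradle installation.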
OPCFW_CODE
/**
 * Created by wuhao on 2017-02-11.
 */
(function () {
    angular
        .module("WebAppMaker")
        .controller("WebsiteListController", WebsiteListController)
        .controller("NewWebsiteController", NewWebsiteController)
        .controller("EditWebsiteController", EditWebsiteController);

    function WebsiteListController($routeParams, WebsiteService) {
        var vm = this;

        // /user/:uid/website
        vm.userId = $routeParams["uid"];

        function init() {
            var promise = WebsiteService.findWebsitesByUser(vm.userId);
            promise.success(function (websites) {
                vm.websites = websites;
            });
        }
        init();
    }

    function NewWebsiteController($routeParams, $location, WebsiteService) {
        var vm = this;
        vm.userId = $routeParams["uid"];

        // event handler
        vm.createWebsite = createWebsite;

        function init() {
            var promise = WebsiteService.findWebsitesByUser(vm.userId);
            promise.success(function (websites) {
                vm.websites = websites;
            });
        }
        init();

        function createWebsite(website) {
            var promise = WebsiteService.createWebsite(vm.userId, website);
            promise.success(function (newSite) {
                $location.url("/user/" + vm.userId + "/website");
            });
            promise.error(function (res, statusCode) {
                vm.error = "Cannot create site.";
            });
        }
    }

    function EditWebsiteController($routeParams, $location, WebsiteService) {
        var vm = this;

        // /user/:uid/website/:wid
        vm.userId = $routeParams["uid"];
        vm.websiteId = $routeParams["wid"];

        // event handlers
        vm.deleteWebsite = deleteWebsite;
        vm.updateWebsite = updateWebsite;

        function init() {
            var sitesPromise = WebsiteService.findWebsitesByUser(vm.userId);
            sitesPromise.success(function (websites) {
                vm.websites = websites;
            });
            var sitePromise = WebsiteService.findWebsiteById(vm.websiteId);
            sitePromise.success(function (website) {
                vm.website = website;
            });
        }
        init();

        function deleteWebsite() {
            var deletePromise = WebsiteService.deleteWebsite(vm.websiteId);
            deletePromise.success(function () {
                $location.url("/user/" + vm.userId + "/website");
            });
            deletePromise.error(function (errorBody, errorCode) {
                vm.error = errorCode + " Failed Deleting the website. " + errorBody;
            });
        }

        function updateWebsite() {
            var updatePromise = WebsiteService.updateWebsite(vm.websiteId, vm.website);
            updatePromise.success(function () {
                $location.url("/user/" + vm.userId + "/website");
            });
            updatePromise.error(function (errorBody, errorCode) {
                vm.error = errorCode + " Failed Updating the website. " + errorBody;
            });
        }
    }
})();
STACK_EDU
Team Foundation Server reports using Analysis Services

We ran into a problem building reports using the TFS Analysis Services cube. We connect to Analysis Services from Excel and try to apply some filters, e.g. Changed Date and Changed By. The Changed Date filter has all empty values in its list. That is the first, and biggest, problem. The second problem is in the users list: one part of the users in the list is presented by name and another part by SID. I found that the table dbo.DimPerson in the Tfs_Warehouse database has the values that I see in the report, and the table dbo.DimDate has all values = NULL. Has anybody solved a problem like this? Or where could I find a solution?

Was the TFS server ever moved (from a workgroup to a domain or from a domain to a workgroup, or across domains in Active Directory, or did the domain change)?

No, the server was not moved. But recently we had a global renaming of AD accounts.

What do you mean by AD accounts renaming? Did you just change the AD users' display names? Or did you also make other changes in AD?

As our system administrators said, they changed only account attributes.

If only the account attributes changed, TFS is able to sync the changes. Can you give me a screenshot showing how you define the pivot table report? And also the result of the dbo.DimPerson table?

@Vicky-MSFT Added a comment with screenshots to the answer below.

It looks like the Tfs_Warehouse database isn't refreshed with the latest data. You can refresh it manually or create a totally new one. Please check my reply in this link for the details: How can one build the TFS cube from scratch?

Thank you! I'll try it asap! I rebuilt the database, and now the tables are filling with data. So far the data are the same as they were... maybe it will be better when it finishes.

@AlexeySoloviev, so the dbo.DimDate table still has all data showing 'null'? And the dbo.DimPerson table has SIDs shown in the 'Name' column? Let's continue in this thread.

Here are screenshots of the pivot table in Excel, and of the DimDate and DimPerson tables: Defining pivot report table, Data in excel, Filter in excel, DimPerson, DimDate. After deleting and recreating the Warehouse and Analysis databases, they were filled with the same data as before. DimDate is still filled with NULL. DimPerson - as in the screenshots above.

@AlexeySoloviev, 1) From the 'DimPerson' screenshot you provided, it looks like the TFS server was migrated (from one domain to another). Please run the "TFSConfig Identities" command to check whether there are different SIDs: (https://msdn.microsoft.com/en-us/library/ms253054(v=vs.120).aspx) 2) For the 'DimDate' table, do all rows contain NULL values? Or do only old dates (DateTime is 2007 or 2012) have NULL values?

Yes, all rows contain NULLs. More than that: not one row contains anything without a NULL. OK, I will run TFSConfig Identities and write the result here. Our TFS was never migrated.

I ran TFSConfig Identities. 2590 security identifier(s) (SIDs) were found stored in Team Foundation Server. Of these, 1456 were found in Windows. 2 had differing SIDs. Only 2 different SIDs. We also updated SQL Server to 2014. But none of the actions above helped.
STACK_EXCHANGE
PCA on word2vec embeddings using a pre-existing model

I have a word2vec model trained on tweets. I also have a list of words, and I need to get the embeddings of those words, compute the first two principal components, and plot each word in a 2-dimensional space. I'm trying to follow tutorials such as this one: https://machinelearningmastery.com/develop-word-embeddings-python-gensim/ However, in all such tutorials they create a model from a toy sentence and then compute PCA on all the words in the model. I don't want to do that; I only want to compute and plot specific words. How can I use the model that I already have, which has thousands of words, and compute the first two principal components for a fixed list of around 20 words? In the link above, they have "model" with only the words from the sentence they wrote, and then they do "X = model[model.wv.vocab]" followed by "pca.fit_transform(X)". If I were to copy this code, I would run PCA on the huge model, which I don't want to do. I just want to extract the embeddings of some words from that model and then compute PCA on those few words. Hopefully this makes sense; thanks in advance. Please let me know if I need to clarify anything.

Create a collection with the same structure (a dictionary) as model.wv.vocab, fill it with your target words, and compute PCA. You can do this using the following code:

my_vocab = {}
for w in my_words:
    my_vocab[w] = model.wv.vocab[w]
X = model[my_vocab]
pca.fit_transform(X)

Thank you! I realize this is a super basic question, sorry, I'm new to word2vec and just learning as I go. So my model is called twitter_model; if I just do X = twitter_model['word1', 'word2', etc] and then pca.fit_transform(X), will that work? Is that what you mean by creating a collection, or does that need to be a separate step?

@curlypie99 No, unfortunately the collection I mentioned should specifically be a dictionary mapping words to Vocab entries (a special gensim datatype). Using the provided code should produce the expected results.

@curlypie99 You are very welcome, good luck with your research (y)

@CaptainTrojan: Don't we need to do a normalization step before feeding the embeddings to PCA?

@aspiring1 If you normalize the word vectors, they will lose part of their meaning; scale matters here. If you were to normalize them, some words which were distant may appear close to each other. You only want to normalize your data before PCA when you're sure the individual variables have different scales, such as house prices and house sizes for example, but word vector dimensions are not like that: all dimensions matter equally.
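For readers who want something runnable, here is a minimal sketch of the idea using plain NumPy. The embedding dictionary below is random stand-in data; with a real gensim model you would look each vector up instead (e.g. something like twitter_model.wv[word]).

```python
import numpy as np

# Stand-in for a trained model: a dict mapping each target word to its
# embedding vector (random 50-d vectors here; a real model supplies these).
rng = np.random.default_rng(0)
my_words = ["cat", "dog", "car", "bus"]
embeddings = {w: rng.normal(size=50) for w in my_words}

# Stack just the vectors for the words of interest.
X = np.stack([embeddings[w] for w in my_words])   # shape (n_words, dim)

# PCA via SVD: center the rows, then project onto the top-2 components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
coords = Xc @ Vt[:2].T                            # shape (n_words, 2)

for w, (x, y) in zip(my_words, coords):
    print(f"{w}: ({x:.3f}, {y:.3f})")
```

Each row of coords is one word's position in the 2-D PCA space; scatter-plotting coords[:, 0] against coords[:, 1] and labeling each point with its word gives the plot described in the question.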
STACK_EXCHANGE
To follow up on my Commentary "Collaboration 101," October 8, 2002, http://exchangeadmin.com , InstantDoc ID 26919, I want to dig a little deeper into the process of delegating tasks with task requests. To delegate a task, create a new task in your Tasks folder, then click the Assign Task button on the open task window. When the To box appears, enter the name of the person to whom you want to delegate the task. Two check boxes on the task form appear so that you can specify what kind of updates you'll get as the delegate works on the task. If you select "Keep an updated copy of this task on my task list," whenever the delegate makes any change to the task, the delegate's copy of Outlook generates a special task update message and sends it to you. By default, Outlook automatically updates your copy of the task with the information in the task update when Outlook is idle for a few minutes. You can also open the task update message in your Inbox to force it to update the task immediately. The other check box on the task request form is "Send me a status report when this task is complete." If you select this box, the delegate's copy of Outlook sends you an email message with details of the task after the delegate marks it as complete. Although Outlook permits you to send a task request to more than one person, it suppresses the normal updating process if the task has multiple delegates. However, the completion report still works, and a delegate can manually generate a status report email message at any time by using the Actions, Send Status Report command. Although this report doesn't update the item in your Tasks folder, it at least tells you how work on the task is progressing. If you have a task that several people need to work on, an alternative approach is to subdivide it--create multiple tasks and assign each to a different person. Use the same category for each related task so that you can use the By Category view in your Tasks folder to group the tasks. 
When you delegate a task, Outlook automatically turns off the reminder on the original copy of the task in your Tasks folder and includes no reminder on the task request. The person accepting the task must set his or her reminder. When you open the delegated task from your Tasks folder, you won't see any way to set a local reminder for yourself (maybe to remind you to remind the delegate to get the task done) and, in fact, all the controls on the task form will be disabled. However, you actually can set a reminder for a delegated task. The trick is to bypass the form and go through the Tasks folder view. The default Simple List table view has in-cell editing turned on, which means that you can change the values of different Outlook properties by typing in the view. To add the necessary fields for setting a reminder, right-click the column headings, and choose Field Chooser. From the "All Task fields" list, drag the Reminder and Reminder Time fields to the column headings. You can then click in the Reminder column to turn the reminder on and type the time and date for the reminder in the Reminder Time field. Outlook will warn that you no longer own the task and the current owner might overwrite your changes. Thus, your local changes will persist only until the delegate sends the next update. Fortunately, the update process seems to affect only the properties related to the task progress and seems to leave intact any reminder that you set on your local copy. If you need to keep a fully editable local copy of the task, you can make an unassigned copy in your Tasks folder. Open the task, switch to the Details page, and click Create Unassigned Copy. By the way, task requests work not just within an Exchange organization but also over the Internet, as long as you haven't marked the recipient address in Outlook to receive only plain text format messages.
OPCFW_CODE
If a zombie were to walk into a computer store looking for a meal, he'd go home hungry. You'd think that when you walk into a computer store, you'd find mobs of people sporting the latest pocket protectors and taped glasses, all with Kleenex up their nostrils to stop the nosebleeds due to allergies. These shower-phobic whiz kids are definitely there, but they don't make up the whole customer base. I would rather deal with a roaming pack of zombies than with the typical computer store shopper. We're talkin' about the general public here, people. And the general public don't know jack about computers.

I don't want to come off like a high and mighty jerk (which I am), but with the overwhelming use of computers in our society today, one would think that most people would have a good handle on them. And I'm not implying that you have to be a computer guru to own a computer, because I am far from that state of being as well. All I'm asking is that people know how to run an anti-virus program, keep spyware off the computer, and maybe have some understanding of their hardware and drivers. Computer troubleshooting is hard, and most people will have to take their machine into a shop to get certain things fixed, but when your screen size is off, learn the basics and go into display settings.

But even beyond this point of basic computer knowledge, the general public is scary when it comes to comprehension. Most of my friends and loved ones aren't computer geeks like me, but when they have a question and I explain what they should do about it, there is usually a nodding of the head and something clicking inside. I do enjoy a customer that does know his stuff (as long as he isn't a know-it-all-that-actually-doesn't), because it's nice to use the jargon that comes with computers. With all the acronyms and terms, from USB to AGP to Socket 478 to DDR and so on, there is a lot to remember. Don't freak out about it; you really don't have to know all this stuff. Your local computer guy can help you sort this mess out. But generally, when you see one computer labelled a PIII 866 MHz and one labelled a P4 1.8 GHz, you'd be safe to say that the P4 is better, because 4 is greater than 3 and the P4 costs twice as much.

I'll get to my main beef here. It's not my problem that there is a lack of understanding about computers (which our school systems might be helping out with now); the problem lies with dumb people. I knew that stupid people were out there, but I didn't realize how many there are. I'm not calling them dumb because they don't know the difference between PCI and ISA; I'm calling them dumb because they refuse to listen when someone is talking to them. Customers will ask me a question and I'll provide an answer. Like I mentioned before, most people will give that universal nod and look of comprehension, but then there is the other side. Some people will just look at me with a vacant stare. So I restate my answer using simpler words and less tech jargon. Their vacant eyes seem to slowly develop a milky glaze as their brain returns to its natural comatose state. No matter how many times I explain myself, draw pictures, resort to pantomime, nothing gets through. By this time I'm so frustrated I have to tell them that they'll have to figure it out or bring in their machine. It may be that they really don't know shit about computers, but they know enough to get on the Internet and download God-knows-what to botch it in the first place, but I don't think so. They may be afraid of this technological revolution in our society and resist learning about computers because of their longing for simpler days, but I don't think that's it either. I'm a nice guy and I do try to work with these people, but man, it's hard. Especially when I'm hungover.

(Note: Anyone that found this page and was able to read it, you're probably smart enough to have this not apply to you. These are extreme cases, but there are more extreme cases than I would have thought before I got into this business.)
OPCFW_CODE
Upgrade from before the refactoring (rev114)

Upgrade from rev114 does not work and the charm goes into an error state.

Steps to reproduce

1. Deploy the bundle:

    series: jammy
    applications:
      tls-certificates-operator:
        charm: tls-certificates-operator
        channel: latest/edge
        revision: 27
        num_units: 1
        to:
        - "3"
        options:
          ca-common-name: Canonical
          generate-self-signed-certificates: true
        constraints: arch=amd64
      zookeeper:
        charm: zookeeper
        channel: 3/edge
        revision: 114
        num_units: 3
        to:
        - "0"
        - "1"
        - "2"
        constraints: arch=amd64
    machines:
      "0":
        constraints: arch=amd64
      "1":
        constraints: arch=amd64
      "2":
        constraints: arch=amd64
      "3":
        constraints: arch=amd64
    relations:
    - - zookeeper:certificates
      - tls-certificates-operator:certificates

   (make sure all units come up fine)

2. Run the pre-upgrade action: juju run zookeeper/leader pre-upgrade-check --format yaml, and make sure it completes successfully.

3. Run juju refresh zookeeper (that should update to revision 121).

Expected behavior

The upgrade should go through cleanly, with no error.
Actual behavior

There are three problems here:

1. The first refresh puts the units into a failed state because of the following error:

    juju.worker.dependency "uniter" manifold worker returned unexpected error: preparing operation "upgrade to ch:amd64/jammy/zookeeper-121" for zookeeper/0: failed to download charm "ch:amd64/jammy/zookeeper-121" from API server: Get https://<IP_ADDRESS>:17070/model/de79be2c-6cc3-4401-8d85-2a27c9c80c7e/charms?file=%2A&url=ch%3Aamd64%2Fjammy%2Fzookeeper-121: cannot retrieve charm: ch:amd64/jammy/zookeeper-121

2. The upgrade actually fails (not sure why), and the charm says:

    unit-zookeeper-1: 13:09:27 ERROR unit.zookeeper/1.juju-log upgrade:2: Not all application units are connected and broadcasting in the quorum

3. When update-status runs, the charm goes into active/idle state, but a service is down and the upgrade has failed (the user should see this in a blocked status).

I'm reporting point 1 in the juju channels, but 2 and 3 I believe pertain to the charm here.
Versions

Operating system: Ubuntu 22.04
Juju CLI: 3.1.7
Juju agent: 3.1.6
Charm revision: rev114 to rev121
LXD: 5.20

Log output

Juju status:

    Model  Controller  Cloud/Region         Version  SLA          Timestamp
    tests  overlord    localhost/localhost  3.1.6    unsupported  13:23:14Z

    App                        Version  Status  Scale  Charm                      Channel      Rev  Exposed  Message
    tls-certificates-operator           active  1      tls-certificates-operator  latest/edge  27   no
    zookeeper                           active  3      zookeeper                  3/edge       121  no

    Unit                          Workload  Agent  Machine  Public address  Ports  Message
    tls-certificates-operator/0*  active    idle   3        <IP_ADDRESS>
    zookeeper/0*                  active    idle   0        <IP_ADDRESS>
    zookeeper/1                   active    idle   1        <IP_ADDRESS>
    zookeeper/2                   active    idle   2        <IP_ADDRESS>

    Machine  State    Address       Inst id        Base             AZ  Message
    0        started  <IP_ADDRESS>  juju-c80c7e-0  <EMAIL_ADDRESS>      Running
    1        started  <IP_ADDRESS>  juju-c80c7e-1  <EMAIL_ADDRESS>      Running
    2        started  <IP_ADDRESS>  juju-c80c7e-2  <EMAIL_ADDRESS>      Running
    3        started  <IP_ADDRESS>  juju-c80c7e-3  <EMAIL_ADDRESS>      Running

logs.txt

Additional context

PR #116 fixes points 2 and 3. Point 1 is more general and applies to any charm downloaded from the store. Interestingly, the failed units and logs did not show up when using a local charm from this PR. I will raise the discussion/bug on a juju channel.
GITHUB_ARCHIVE
//!#####################################################################
//! \file Triangulated_Surface.h
//!#####################################################################
// Class Triangulated_Surface
//######################################################################
#ifndef __Triangulated_Surface__
#define __Triangulated_Surface__

#include <nova/Geometry/Topology/Point_Cloud.h>
#include <nova/Geometry/Topology/Simplex_Mesh.h>
#include <nova/Tools/Utilities/File_Utilities.h>
#include <cassert>

namespace Nova{
template<class T,int d> class Box;
template<class T> class Cylinder;
template<class T,int d> class Sphere;

template<class T>
class Triangulated_Surface: public Point_Cloud<T,3>,public Simplex_Mesh<3>
{
    enum {d=3};
    using TV            = Vector<T,d>;
    using INDEX         = Vector<int,d>;
    using Mesh_Base     = Simplex_Mesh<d>;
    using Point_Base    = Point_Cloud<T,d>;

  public:
    using Point_Base::points;
    using Mesh_Base::elements;using Mesh_Base::number_of_nodes;
    using Point_Base::Add_Vertex;using Mesh_Base::Add_Element;

    Triangulated_Surface() {}

    Triangulated_Surface(const Array<TV>& points_input,const Array<INDEX>& elements_input)
        :Point_Base(points_input),Mesh_Base(points_input.size(),elements_input)
    {}

    void Write_OBJ(const std::string& filename)
    {
        std::ostream *output=File_Utilities::Safe_Open_Output(filename,false);
        if(output!=nullptr)
        {
            // OBJ indices are 1-based, hence the +1 on each element index below
            for(size_t i=0;i<points.size();++i)
                *output<<"v "<<points[i][0]<<" "<<points[i][1]<<" "<<points[i][2]<<std::endl;
            for(size_t i=0;i<elements.size();++i)
                *output<<"f "<<elements[i][0]+1<<" "<<elements[i][1]+1<<" "<<elements[i][2]+1<<std::endl;
        }
        delete output;
    }

//######################################################################
    void Initialize_Box_Tessellation(const Box<T,d>& box);
    void Initialize_Sphere_Tessellation(const Sphere<T,d>& sphere,const int levels=4);
    void Initialize_Ground_Plane(const T height=(T)0.);
    void Initialize_Cylinder_Tessellation(const Cylinder<T>& cylinder,const int resolution_height=4,
        const int resolution_radius=16,const bool create_caps=false);
//######################################################################
};
}
#include <nova/Geometry/Read_Write/Topology_Based_Geometry/Read_Write_Triangulated_Surface.h>
#endif
STACK_EDU
It is not often that Apple moves from one architecture to another. For the Mac, it has happened a total of three times: Motorola to PowerPC in the early 90s, PowerPC to Intel in 2005, and now Intel to Apple Silicon; the last transition began last year with the release of the MacBook Air, Mac mini, and MacBook Pro with the M1 chip. In order to make sure that the hardware and software were ready for users, Apple unveiled a program called the Universal App Quick Start Program. This program was announced on June 25th, 2020 at Apple's Worldwide Developers Conference. If a developer applied, and was subsequently approved, they would receive a Developer Transition Kit, or DTK. The 2020 Developer Transition Kit has the form factor of a Mac mini and the following specifications:

- An 8-core A12Z (the same processor as the 4th-generation iPad Pro)
- 512GB SSD
- 16GB of RAM
- Two USB-A ports
- Two USB-C ports
- One HDMI port
- One Gigabit Ethernet port
- 802.11ac wireless networking
- Bluetooth 5.0 connectivity

If you were an Apple developer and you wanted to obtain one of these machines, you had to apply. If you were approved, you could order one for $500. The program came with some stipulations, including:

- The DTK is Apple's property
- The DTK would need to be returned to Apple

The program was to last up to one year, and Apple would let you know when you needed to return the machine. Apple has begun doing so. Many developers have begun receiving an email that states the following:

Thank you for participating in the Universal App Quick Start Program and your continued commitment to building great apps for Mac. Response to the new Macs has been incredible, and we love the fantastic experiences developers like you have already created for Mac users. Now that the new MacBook Air, Mac mini, and MacBook Pro powered by M1 are available, it'll soon be time to return the Developer Transition Kit (DTK) that was sent to you as part of the program. Please locate the original packaging for use in returning the DTK. We'll email you in a few weeks with instructions for returning the DTK. In appreciation of your participation in the program and to help with your continued development of Universal apps, you'll receive a one-time use code for 200 USD* to use toward the purchase of a Mac with M1, upon confirmed return of the DTK. Until your program membership expires one year after your membership start date, you'll have continued access to other program benefits, such as Technical Support Incidents and private discussion forums.

This $200 code has some stipulations. The first is that it can only be used towards an M1-based Mac. The second is that the code is only valid until May 31st, 2021. With the previous transition from PowerPC to Intel, Apple also provided Developer Transition Kits to developers. At that time, when developers returned their machines, they received an iMac. I would not expect Apple to do the same thing this time, for a couple of reasons. The chief among these is that Apple is in a different situation this time: they have significantly more developers than before, and likely have more 2020 Developer Transition Kits out in the world than in 2005. Secondly, Apple is not going to provide even a base model Mac mini for that price. At no point did Apple indicate that there would be any sort of compensation for partaking in the program. However, precedent did show that they had done it before. As we are all aware, that does not mean that what happened in the past will happen again. Even so, there are a couple of issues as I see them.

First, Apple made over $114 billion in revenue last quarter, and over $28 billion in profit. The $200 seems like a jab in the eye of developers, given how much profit they made just last quarter. This is particularly true given that those who paid to rent the DTK helped Apple test and validate that the software and development tools were ready for production. Second, many developers have already purchased M1-based Macs. A one-time use code for $200 off an M1-based Mac is not going to help them with their previous purchases. This means that developers will either end up purchasing another M1 Mac, which starts at $700 for the base model Mac mini, or let the code expire. Neither of these generates any goodwill with developers.

I think it would be great if Apple offered either $200 towards the purchase of an M1-based Mac or an extension to the Apple Developer Program. This would make the most sense, because developers who already purchased an M1-based Mac would be able to get some benefit without having to purchase another machine that they do not necessarily need. For individual developers that extension might be two years, and for enterprise developers it might just be a year. Even though I like this idea, I do not see it happening. My reasoning is that the $200 towards the purchase of a new M1-based Mac only reduces Apple's profits, while the cost of the machine is still covered. If Apple opted to give an extension, then that is services revenue that they would forgo entirely, instead of just having a bit less profit overall. An alternative to providing an extension would be to extend the length of time that the code can be used: maybe make it expire at the end of the year instead of the end of May. I am sure that some developers would still not end up using their code, but it would give more developers time to purchase a machine.

What would be a real kick in the pants is if the higher-end MacBook Pros are not released until June or July, because then it would look really bad for Apple not to have the true developer machines available for developers to purchase. Overall, given how profit-motivated Apple is, as well as how they put developers at the bottom of the list, I do not see them changing anything at all. Some developers are perfectly fine with the $200, because they were not expecting anything. Honestly, that is the best approach to anything when it comes to Apple: expect nothing, because then when you do get something, it is a big surprise. Ultimately, no matter what Apple does, it will just irk developers and may not generate all that much goodwill with them. It might have been better for Apple to not offer anything, but they would get pushback for that approach as well. No matter what Apple decided to do, it would be a catch-22, but that is the burden you take on when you are the richest company in the world.
Git LFS is a Git extension for versioning large files; development happens at github.com/git-lfs/git-lfs. Git Large File Storage (LFS) replaces large files such as audio samples, videos, datasets, and graphics with small text pointers inside Git, while storing the file contents on a remote server. Because only the pointers travel with the normal Git history, operations like cloning and fetching stay fast.

A `git lfs clone` runs in two steps. Once the plain clone is complete, it moves on to the second step, which performs the same tasks as `git lfs pull` to download and populate your working copy with the real LFS content. That second step is itself a compound operation which, among other things, runs the equivalent of `git lfs fetch` to download the objects and then checks them out.

Getting LFS working with various clients and hosts:

- SourceTree on Windows: when you open an LFS repo and SourceTree can't see the `git lfs` command, it will usually prompt you to download the embedded git-lfs. With SourceTree 1.8.3 this worked and installed the LFS version that 1.8.3 knows about.
- macOS: `brew install git-lfs` followed by `git lfs install`. You often also need to enable LFS on the service hosting the remote; this differs per host, but is usually just a checkbox saying you want to use Git LFS.
- GitHub: the downloadable archives do not include the LFS files. Some projects publish their own tarballs that do contain them, and if so, you can use one of those.
- Bitbucket Server: ships with Git LFS enabled at the instance level, but disabled for each repository.
- Windows: an installer is provided; on Linux and Mac you can use a package manager, or download an archive that unpacks to an install shell script.

Git LFS was developed at GitHub and is especially useful in game development, where art assets such as PNGs take up a large share of the repository. Because Git is a distributed design, every local clone is a full copy of the repository; Git is mainly built to host text files, but projects inevitably contain binaries such as dependency libraries and resource files, and LFS keeps those out of the object database.

Some practical gotchas:

- On Ubuntu, after installing Git and Git LFS and cloning a repo with LFS-managed files, the large files may not download: only marker (pointer) files appear, with the right names in the right places, and you may not realize they aren't the whole files until you check the file sizes.
- fetch/checkout/pull can download content one file at a time and very slowly; this has been reported to behave identically across Linux, Windows, and Mac with the latest versions of git and git-lfs.
- Git LFS requires Git 1.8.2 or later (check with `git --version`), and it is a separate piece of software from Git itself; on CentOS 6 with Git 1.7.1, Git has to be upgraded first.
- On a Jenkins Mac slave, Git LFS could not authenticate because the Jenkins process was not run in a login shell, so the gitcredentials that Git LFS uses to access the server were unavailable; the same setup just worked on a Windows slave.
- On TFS 2017, pushes using the LFS extension have been reported to fail with "TF400733: The request has been canceled: Client disconnected" and 401 errors.

Another advantage of LFS is that, once configured, it does not disturb the normal Git workflow. Each person who wants to use Git LFS installs the client once on their local machine; after that, you start managing large files in a repository by running `git lfs track`. Git LFS v2.5.0 added three new migration modes to the `git lfs migrate` command, a handful of bug fixes, and more; the new modes help when repositories get into a broken state because large files that should have been committed with Git LFS weren't. GUI clients with LFS support include GitKraken (free for open source, early-stage startups, and non-commercial use, on Windows, Mac, and Linux) and Fork, a fast and friendly Git client for Mac and Windows.
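The scattered install and setup steps above boil down to a short command sequence. This is a sketch; the `*.psd` pattern and file names are only examples.

```shell
# One-time client setup (after installing the git-lfs package for your OS)
git lfs install               # registers the LFS filter drivers in your git config

# In a repository: choose which file types LFS should manage
git lfs track "*.psd"         # records the pattern in .gitattributes
git add .gitattributes

# Commit large files exactly as usual; git stores small pointers instead
git add artwork.psd
git commit -m "Add artwork via Git LFS"

# In a clone where only pointer files arrived
git lfs pull                  # fetches and checks out the real content
```

Under the hood, each tracked file is committed as a small text pointer (a spec version line, an oid, and a size), which is why a clone made without the LFS client shows files with the right names but tiny sizes.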
"""General functionality for crossover that doesn't apply to any specific type.

This collects Crossover stuff that doesn't deal with any specific
type of crossover.
"""
# standard library
import random

# local stuff
from Bio.GA.Organism import Organism


class SafeFitnessCrossover(object):
    """Perform crossovers, but do not allow decreases in organism fitness.

    This doesn't actually do any crossover work, but instead relies on
    another class to do the crossover and just checks that newly created
    organisms do not have less fitness. This is useful for cases where
    crossovers can produce organisms with lower fitness.
    """
    def __init__(self, actual_crossover, accept_less=0.0):
        """Initialize to do safe crossovers.

        Arguments:

        o actual_crossover - A Crossover class which actually implements
        crossover functionality.

        o accept_less - A probability to accept crossovers which generate
        less fitness. This allows you to accept some crossovers which
        reduce fitness, but not all of them.
        """
        self._crossover = actual_crossover
        self._accept_less_percent = accept_less
        self._accept_less_rand = random.Random()

    def do_crossover(self, org_1, org_2):
        """Perform a safe crossover between the two organisms."""
        new_org_1, new_org_2 = self._crossover.do_crossover(org_1, org_2)

        return_orgs = []
        for start_org, new_org in ((org_1, new_org_1), (org_2, new_org_2)):
            new_org.recalculate_fitness()

            # If the starting organism has a better fitness, keep it,
            # minding the acceptance-of-less-favorable-change policy.
            if start_org.fitness > new_org.fitness:
                accept_change = self._accept_less_rand.random()
                if accept_change <= self._accept_less_percent:
                    return_orgs.append(new_org)
                else:
                    return_orgs.append(start_org)
            else:
                return_orgs.append(new_org)

        assert len(return_orgs) == 2, "Should have two organisms to return."
        return return_orgs
||None- someone turned off the W&R I had on

Well, this is (---)ing perfect, isn't it? And yes, the sarcasm is enough to choke a hippogriff. Deal with it, little bound bits of parchment, because I feel like ranting a bit, and you will serve the purpose well enough. Seeing as I can't really rant to anyone else at the moment (especially some people), you will have to do. Look at me, it's like I'm a (---)ing third year, using this thing for ranting my feelings about things. I'm sure I'll wake up and look at this and tear it out, so it's really just a waste of ink and time. All the same...

(---) Lucius Malfoy, and (---) Narcissa Malfoy, and (----) Draco for- well, just because. It was my fault for being in there that night, I'll give him that, but I wouldn't have been at his (---)ing manor in the first place if it wasn't for him! I told him over and over that me spending Christmas over there was completely ridiculous and could only end horribly, and he just persisted! Then, when he hears that my (---)ing slacker parents decided to pack it off to Germany for the holiday and leave me at school, he has to go all bloody white knight and insist that I should spend the holiday at his house! And lo, all hell began to break loose upon the masses. Or at least, all hell began to break loose on the two of us. (---)! And (----), (---), (----), and every other word that disappears the second I write it. Effing censoring charms.

This would be so much easier if I didn't have classes with him. I could just ignore that he ever existed, ignore that I was ever stupid enough to have had anything more to do with him than your average Slytherin "friendship", and it would all be nice and cold again. But no, he has to be there in the common room, so I can't sit by the fire; he has to be in most of my classes, so I can't relax in the least and just nod off like I usually do; he has to be every-(---)ing-where. It's like he does it on purpose, and I wouldn't put it past him at this point.
I know him, more than most people would know him, and he would do something like that. I could just be acting as paranoid as a blinkin' Hufflepuff, though. I need to see M. I need to... I need to talk to him, to do something, and get my mind off of Draco. I need to fly- it's late enough now that I could, if I wasn't completely freaked that I might not be able to change back- I just haven't been sleeping as well lately. It's cold, but I don't care so much. It's much (---)ing colder in here.
SAS Token error: Manage claim is required for this operation - Notification Hubs

I'm getting <Error><Code>401</Code><Detail>Manage claim is required for this operation..TrackingId:1745cb27-0fda-4c6c-8f7b-ee2d6868ac96_G6,TimeStamp:6/27/2017 11:32:19 AM</Detail></Error> whenever I try to connect to a notification hub using a SAS key. I've tried regenerating the key, tried a SAS key from a different Notification Hub, tried creating a separate key, tried a key from a different Azure subscription, and tried running the explorer from a different PC. Same result... am I missing something obvious? I wasn't able to connect with a connection string for the Basic tier. Multiple Microsoft.ServiceBus.Messaging.MessagingException exceptions, but that's it. @paolosalvatori any ideas?

It seems to work on an old version of Service Bus Explorer, 3.4.2. But in that version the hubs don't load at first; I have to manually refresh entities on the notification hubs entity to get them to show... but it does work. Not sure why the latest version throws the 401?

Hi @gorillapower, could you please send me the connection string offline? You can find my email address in the About form. This way I can debug the code and find the culprit. Thanks!

@paolosalvatori thank you! I have sent a mail.

Ok, I solved the issue, but I need more time to make other changes to the code. :) So expect a new release soon that will fix the issue. :)

In a branch, right @paolosalvatori? 😃

@paolosalvatori Thanks!!!
Yes @SeanFeldman :) I never thought that someone else could make the rules on my project :) Ah ah ah... but yes, it's a good practice :)

Not rules. Recommendations 😉

@SeanFeldman I'm joking... it's all fun :) :+1:

@gorillapower try to get the latest version. Also, I suggest you select only Notification Hubs from the drop-down list when connecting to a Notification Hub namespace. In fact, the tool cannot determine the type of a namespace, so it tries to retrieve all the entities (queues, topics, relays, event hubs, notification hubs) when connecting to one. To understand the namespace type, the tool would have to use an AAD user to retrieve this information from the namespace resource. But that would require creating an Application on a tenant of the user's AAD, and... too complicated :)

In case anyone ends up here when searching for this error: the solution for me was to use the connection string from the Notification Hub Namespace, and not from the Notification Hub itself, when copying it from the Azure Portal. As @ErikMogensen said, it works with the RootManageSharedAccessKey :-)

Thanks for this amazing tool 🥇

Thanks for the positive feedback @cirino-gomez :)
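To illustrate the accepted fix: a namespace-level connection string using the root policy has the shape below. The placeholders are illustrative; the real values come from the namespace's access policies in the Azure portal.

```text
Endpoint=sb://<your-namespace>.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<key>
```

A shared access policy that lacks the Manage claim (for example one granting only Listen or Send) produces exactly the 401 "Manage claim is required" error above when a tool tries to enumerate entities.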
The previous posts in this series have stepped through how to enable NSX and get some logical switches configured. Workloads now have L2 adjacency across IP subnets thanks to VXLAN logical switch overlays. It is time for routing. This post builds a three-tier application with logical isolation provided by network segments, routing, and firewall rules. Later we will build a micro-segment. Within the Networking and Security plugin, select NSX Edge. Click the green plus. As discussed in the NSX compendium, installing and configuring a Logical Distributed Router deploys what is known as a control virtual appliance. The control virtual appliance builds the control plane and manages, for example, OSPF adjacency and events. It is not in the data path. Select the Logical (Distributed) Router from the radial menu and fill in a name. Click Next. Set an administrative username and password. Choose Enable SSH access if you desire. Choose which datastore you are going to store this on. Now it is time to create the interfaces of our Logical Router. Click the green plus on the Add Interface page, then select what the logical router will connect to. Next specify the gateway address of the subnet. This interface is analogous to a Switched Virtual Interface (SVI) or a Routed Virtual Interface (RVI): it is the default gateway for the subnet. Populate the address range and subnet prefix. With that you can accept the changes. Note that southbound interfaces that connect to logical switches with workloads on them are generally internal. Northbound interfaces, where connectivity to an upstream subnet is made, are uplinks. With the interface created, repeat the process for the interfaces of the Application Tier and Database Tier. When completed, select Finish and deploy. Note that the logical router control VM is deploying. It will leverage the VIB installed at host-preparation time for in-kernel logical routing.
The control VM is dedicated to managing Logical Interface (SVI/RVI) interactions and the distribution of current routing information. It is possible to have multiple logical routers per host. This allows tenant isolation or application isolation. Combine this with a controlled transport zone for control planes and you have distinct segregation. Once this has deployed you will have in-kernel routing. Now test Web-tier to App-tier connectivity. The logical router is super easy to deploy and delivers optimised application traffic flows. Not having to route out to a core switch or aggregation gateway makes administration and troubleshooting easier than ever.

3 thoughts on “Installing VMware NSX – Part 5”

Hi Anthony, very good blog series for NSX. I have a quick question: is NSX only using VXLAN as the overlay protocol? How about Nicira STT? Did that protocol get abandoned after the merge (NVP -> NSX)? I see VMware co-authored the Geneve draft; when will NSX start to support Geneve?

Thanks for the comment. NSX uses VXLAN for the overlay protocol. Nicira/VMware STT is still used in NSX for Multi-Hypervisor edition, where its place is host-to-host encapsulation/communication. I cannot comment on futures regarding Geneve and when NSX will/will not support Geneve! Sorry 🙂
Is there a reason this is labeled as 'Finish to Finish' even though 'Finish to Start' makes more sense?

Got this question in my course:

You are managing a software project. Your QA manager tells you that you need to plan to have her team start their test planning activity so that it finishes just before testing begins. But other than that, she says it can start as late in the project as necessary. What's the relationship between the test planning activity and the testing activity?

A. Start-to-Start (SS)
B. Start-to-Finish (SF)
C. Finish-to-Start (FS)
D. Finish-to-Finish (FF) (Correct)

It labels FF as correct, but I'm not seeing why, given that it says to finish the planning activity before testing begins.

Hi Rev, welcome. What leads you to say FS makes more sense?

@TiagoMartinsPeres because it says the test planning activity should finish before the testing begins.

What's the relationship between the test planning activity and the testing activity? The answer should be: C. Finish-to-Start (FS). The test planning activity needs to finish in order to start the testing activity. The highlighted test response might be wrong. It happens.

[Warning: Possible bike-shedding ahead!] I like your answer because it seems intuitive, but in the real world test planning doesn't actually have to be completed before testing starts. JIT or just-enough planning is an agile staple, and while FS seems like the right answer to the test question, the underlying question is about why the "correct" answer might actually be correct. I definitely prefer your answer to the test designer's, but want to highlight the differences between package scheduling and dependency mapping, which I think your answer conflates (albeit for good reason).

@ToddA.Jacobs: I agree with what you are saying, but... :) If you carefully read the details in the question, you can ignore everything except this: start their test planning activity so that it finishes just before testing begins.
To me this is the definition of Finish-to-Start. As for why the other answer might be correct... I don't think it is. I think it's in fact an error in the test. And now I actually like your answer of "Ask your instructor" :) Ask Your Instructor The problem with tests, especially academic ones, is that only the test developer knows why they think a given answer is correct. In a school context, the only way to know for sure is to ask your instructor. Analysis All the terms you can choose relate to how activities are measured, rather than defining how the two tasks interrelate from a dependency/successor standpoint. That may be part of the problem with the test question. At first glance, finish-to-start seems right to me too. My assumption is that the completion of planning is an essential prerequisite to starting the actual testing. That also seems to align with what the QA manager is suggesting. With that said, a possible reason for selecting finish-to-finish is if you are measuring for “as late as possible” in a project, where one work package needs to be completed before another package can finish, but where the start of each package is neither tightly-coupled nor intrinsic to the relationship. In such cases, a prerequisite package must be finished before its dependent package can finish too. That doesn’t sound like the use case you described, but there may be additional context in the exam or class lectures that wasn’t included in your question. As one example of how context matters, if test planning is allocated exactly one week, and the testing itself has a fixed cycle time or due date, then you might want to work backwards to define the schedule. Scheduling the finish of a fixed quantity of test planning as late as possible based on the required finish date of a fixed quantity of testing might then make sense from a scheduling perspective, even if it seems counterintuitive from a dependency-mapping viewpoint. 
Scheduling and dependency mapping are related, but aren't interchangeable. While finish-to-start makes a lot of sense for immediate follow-on activities, finish-to-finish measurements are often useful in scheduling deliberate delays or postponing dependencies. That may be what your instructor has in mind here.

Addressing Exam Errors

Of course, the question text could contain errors or omissions. The answer key could also simply be wrong, either through error or faulty test design. In an academic setting, ask your instructor to clarify the reason for the selected answer. For a normed exam, poor-quality or beta questions will hopefully be factored out of the scoring, or at least have minimal impact, even if the exam format doesn't allow you to dispute incorrect or flawed items.
In Programming Land, there are several pathways called Philosopher's Walks for philosophers to have a rest. A Philosopher's Walk is a pathway in a square-shaped region with plenty of woods. The woods are helpful for philosophers to think, but they are planted so densely, like a maze, that philosophers lose their way in the maze of woods of a Philosopher's Walk. Fortunately, the structures of all Philosopher's Walks are similar; the structure of a Philosopher's Walk is designed and constructed according to the same rule in a 2^k-meter square. The rule for designing the pathway is to take a 90-degree right turn after every 1-meter step when k is 1, and the bigger one, for integer k greater than 1, is built up from four sub-pathways of size k - 1 in a fractal style. Figure F.1 shows three Philosopher's Walks for which k is 1, 2, and 3. The Philosopher's Walk W2 consists of four W1 structures, with the lower-left and lower-right ones rotated 90 degrees clockwise and counter-clockwise, respectively; the upper ones have the same structure as W1. The same is true for any Wk with integer k greater than 1. This rule was devised by the mathematician David Hilbert (1862 - 1943), and the resulting pathway is usually called a HILBERT CURVE, named after him. He once talked about a space-filling method using this kind of curve to fill up a square with sides of length 2^k, and every Philosopher's Walk is designed according to this method.

Figure F.1. Three Philosopher's Walks with size (a) 2^1 = 2, (b) 2^2 = 4, and (c) 2^3 = 8, respectively.

Tae-Cheon is in charge of rescuing the philosophers lost in Philosopher's Walks using a hot air balloon. Fortunately, every lost philosopher can report to Tae-Cheon the number of meter-steps he has taken, and Tae-Cheon knows the length of a side of the square of the Philosopher's Walk. He has to identify the location of the lost philosopher, the (x,y) coordinates, assuming that the Philosopher's Walk is placed in the 1st quadrant of a Cartesian plane with one-meter unit length. Assume that the coordinate of the lower-left corner block is (1,1). The entrance of a Philosopher's Walk is always at (1,1) and the exit is always at (n,1), where n is the length of a side. Also assume that the philosopher has walked one meter-step when he is at the entrance, and that he always goes forward toward the exit without back-steps. For example, if the number of meter-steps taken by a lost philosopher in the Philosopher's Walk W2 in Figure F.1(b) is 10, your program should report (3,4). Your mission is to write a program to help Tae-Cheon by reporting the location of the lost philosopher, given the number of meter-steps he has taken and the length of a side of the square of the Philosopher's Walk. Hurry! A philosopher urgently needs your help.

Your program is to read from standard input. The input consists of a single line containing two positive integers, n and m, representing the length of a side of the square of the Philosopher's Walk and the number of meter-steps taken by the lost philosopher, respectively, where n = 2^k and m ≤ 2^(2k) for an integer k satisfying 0 < k ≤ 15. Your program is to write to standard output. The single output line should contain two integers, x and y, separated by a space, where (x,y) is the location of the lost philosopher in the given Philosopher's Walk.

Given the visit order along a Hilbert curve, you are asked for the corresponding vertex, which can be a little bewildering at first sight. It comes down to finding the pattern in the figure.

<Upper-left, upper-right quadrants> Recurse with the same visit order as the base case (n = 2).

<Lower-left quadrant> This is the base case reflected across the line y = x, so (x', y') = (y, x).

<Lower-right quadrant> First reflect the base case across the line y = -x: (x', y') = (-y, -x). Relative to the base case, the transformed x-coordinate is measured from the right edge, so it becomes the width of the enclosing square minus y plus 1 (coordinates start at (1,1)). The transformed y-coordinate starts one cell below the middle line, so it is half the enclosing square's side minus x plus 1. In summary, letting w be half the side of the enclosing square, (x, y) maps to (2w - y + 1, w - x + 1).

Solving this recursively yields the answer in O(log n).
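The recursive mapping described above can also be written as a short iterative loop. The sketch below is an illustrative implementation (not the contest's reference solution) that converts a 0-based step index d into 0-based coordinates (x, y) on an n×n Hilbert curve; its orientation happens to match the problem, with step 0 at the lower-left corner and the last step at the lower-right corner.

```python
def hilbert_d2xy(n, d):
    """Map 0-based step d to 0-based (x, y) on an n x n Hilbert curve.

    Step 0 lands at (0, 0) (the entrance) and step n*n - 1 at
    (n - 1, 0) (the exit), matching the problem's orientation.
    """
    x = y = 0
    s = 1          # side length of the sub-square handled so far
    t = d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                     # lower quadrants need a flip
            if rx == 1:                 # lower-right: rotate 180 degrees
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x                 # reflect across y = x
        x += s * rx                     # shift into the correct quadrant
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Worked example from the statement: n = 4, m = 10 steps -> (3, 4).
n, m = 4, 10
x, y = hilbert_d2xy(n, m - 1)   # m is 1-based; coordinates are 1-based
print(x + 1, y + 1)             # -> 3 4
```

Each loop iteration undoes one level of the fractal construction, so the whole conversion runs in O(log n), as the editorial notes.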
import sys,csv,time,calendar
from math import log,sqrt
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def datetoday(x):
  t=time.strptime(x+'UTC','%Y-%m-%d%Z')
  return calendar.timegm(t)//86400

def daytodate(r):
  t=time.gmtime(r*86400)
  return time.strftime('%Y-%m-%d',t)

# wget 'https://api.coronavirus.data.gov.uk/v2/data?areaType=nation&areaCode=E92000001&metric=newCasesLFDConfirmedPCRBySpecimenDate&metric=newCasesLFDOnlyBySpecimenDate&metric=newLFDTests&metric=newCasesBySpecimenDate&format=csv' -O engcasesbyspecimen.csv
# wget 'https://api.coronavirus.data.gov.uk/v2/data?areaType=overview&metric=newCasesByPublishDate&format=csv' -O casesbypublication.csv
# wget 'https://api.coronavirus.data.gov.uk/v2/data?areaType=nation&areaName=England&metric=maleCases&metric=femaleCases&format=csv' -O casesbyage.csv

def loadcsv(fn,keephalfterm=True):
  dd={}
  with open(fn,"r") as fp:
    reader=csv.reader(fp)
    headings=[x.strip() for x in next(reader)]
    for row in reader:
      if keephalfterm or row[0]!='2021-02-17':
        for (name,x) in zip(headings,row):
          x=x.strip()
          if x.isdigit(): x=int(x)
          dd.setdefault(name,[]).append(x)
  return dd

print("Using LFD school numbers from table 6 of https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/968462/tests_conducted_2021_03_11.ods")
dd=loadcsv("LFDschooltests.csv",keephalfterm=True)
#cases=dd["Cases"]

# cc=loadcsv("engcasesbyspecimen.csv")
# datetocases=dict(zip(cc['date'],cc['newCasesBySpecimenDate']))
# print("Using England cases by specimen date")

# cc=loadcsv("casesbypublication.csv")
# datetocases=dict(zip(cc['date'],cc['newCasesByPublishDate']))
# print("Using UK cases by publication date from https://api.coronavirus.data.gov.uk/v2/data?areaType=overview&metric=newCasesByPublishDate&format=csv")
# print()

cc=loadcsv("casesbyage.csv")
cumdatetocases={}
for (date,metric,age,value) in zip(cc['date'],cc['metric'],cc['age'],cc['value']):
  if age=="10_to_14" or age=="15_to_19":
    cumdatetocases[date]=cumdatetocases.get(date,0)+value
print("Using England cases by age from https://api.coronavirus.data.gov.uk/v2/data?areaType=nation&areaName=England&metric=maleCases&metric=femaleCases&format=csv")
print()

# Work out constant part of negative binomial denominator. It doesn't affect the optimisation, but it makes the log likelihood output more meaningful
LL0=sum(gammaln(a+1) for a in dd['LFDpos'])

# SLSQP seems to be happier if the variables we are optimising over are of order 1, so use scale factors
scale0=1e4
population=6.3e6# Estimated population of age 10-19 in England
scale2=1e3

# Return -log(likelihood) assuming negative binomial distribution with common dispersion parameter r=xx[2]*scale2
def err(xx):
  r=xx[2]*scale2
  n=len(cases)
  LL=-n*gammaln(r)
  for (a,b,c) in zip(dd['LFDpos'],dd['LFDnum'],cases):
    lam=(xx[0]/scale0+xx[1]*c/population)*b
    p=lam/(lam+r)
    LL+=gammaln(r+a)+r*log(1-p)+a*log(p)
  return LL0-LL

# Return negative second derivative of log(likelihood) wrt xx[0]/scale0 (aka observed Fisher information)
def LL2(xx):
  eps=1e-3
  e1=err(xx)
  while 1:
    e2=err([xx[0]+eps*scale0]+list(xx[1:]))
    if abs(e2-e1)<1e-3*(e1+e2): break
    eps/=2
  e0=err([xx[0]-eps*scale0]+list(xx[1:]))
  return (e0-2*e1+e2)/eps**2

best=(-1e9,)
for offset in range(-15,7):
  cases=[]
  for date in dd["WeekEnding"]:
    day=datetoday(date)
    #cases.append(sum(datetocases[daytodate(d)] for d in range(day+offset-6,day+offset+1)))
    cases.append(cumdatetocases[daytodate(day+offset)]-cumdatetocases[daytodate(day+offset-7)])
  if 0:# Sensitivity check, to check that the correlation between LFDpos and cases is not due to LFDpos feeding directly into cases
    for i in range(len(cases)):
      cases[i]-=dd['LFDpos'][i]
  res=minimize(err,[1,1,1],method="SLSQP",bounds=[(1e-9,scale0),(1e-9,100),(0.01,100)],options={"maxiter":1000})
  if not res.success: raise RuntimeError(res.message)
  LL=-res.fun
  xx=res.x
  fisher=LL2(xx)
  print("Offset %3d. Log likelihood = %8.3f. al = %6.4f%% . be = %8.3f . r = %8.3g"%(offset,LL,xx[0]/scale0*100,xx[1],xx[2]*scale2))
  if LL>best[0]: best=(LL,offset,res,fisher,cases)
print()
(LL,offset,res,fisher,cases)=best
print("Best offset %d. Log likelihood = %g"%(offset,LL))
date=dd["WeekEnding"][-1]
day=datetoday(date)-6
print("Offset %d means LFD numbers for week %s - %s are related to national case numbers in the week %s - %s"%(offset,daytodate(day),date,daytodate(day+offset),daytodate(day+offset+6)))
xx=res.x
fpr=xx[0]/scale0*100
error=1.96/sqrt(fisher)*100
print("Best estimate: LFDpos/LFDnum = %.3g + %.3g*(weekly case rate)"%(xx[0]/scale0,xx[1]))
print(" where (weekly case rate) = (number of confirmed cases in a week) / %g"%population)
print("False positive rate estimate: %.2g%% (%.2g%% - %.2g%%)"%(fpr,fpr-error,fpr+error))

with open("graph","w") as fp:
  for (pos,num,ncases) in zip(dd['LFDpos'],dd['LFDnum'],cases):
    print(pos,num,ncases,1.96*sqrt(pos),file=fp)
Sold and Shipped by, haoztec, purchases from these Sellers are generally covered under our.The layout of the keys is mxkey setup 3.5 rev 2.5 presented in a classic American (US ansi) version, but with a two-level Enter in accordance with ISO.In particular this applies to creating personal "hot keys" andRead more 1, by the end of windows media player 12 for windows 8.1 64 bit 2015, India had the fourth largest installed wind power capacity in the world.Wind Energy Industry in India looks to have a bleak future and will mainly become a technology buyer with small production bases established byRead more "Like pro consoles, Program4Pc DJ Music Maker has dual control suites, one for each deck, with starcraft 2 wings of liberty map editor both offering variable pitch, looping, cueing, and multiple crossfade options, plus effects and recording.".Download now, buy now, features overview, edit video files without reconversion.Download, we Guarantee, latestRead more Vmware server 2 convert esxi 2) Using Converter.3. Local installation logon using " root " using conversion feature error message. I guess i'll just shut the vm down, copy the image folders from the virtual machine directory to another folder, run the converter on the live vmdk's. . I tried 2 things: 1) Using vShere Client.0 to add the VM as an inventory item through the datastore browser.I didn't want to do that originally because i didn't want to lose the original vmware server vmdk's incase something went wrong in the conversion. .Any solution to this, or alternative to convert between these two types are welcome.However, you can login afterwards when you get through the conversion wizard.I have recently built an ESXi 5 server.Server installation logon using " root " using conversion feature error message.So I dota 2 beta key generator october 2012 tried that. 
I tried using the VMware Converter Standalone, but when I get to the end of the conversion settings (right before it begins the conversion, after clicking Finish) I get an error: "A general system error occurred: invalid fault". I was told one day it's due to me using the free ESXi. It allows me back into the settings menu, and when I go back one page (to where all the network settings and data-to-copy/disk options are) I get a yellow "!" caution triangle on the advanced options, and up top it tells me "Warning". I found a workaround: the VMware Workstation export feature. I did File - Import or Export - (follow the wizard). Server installation, logging on as a domain admin, using the conversion feature: error message.

We have been running VMware Server 2.0 for many years and recently got new hardware to build a server for ESXi 5.0 (free). The hardware and software installation process went very smoothly; however, I still haven't found documentation on how to migrate VMs to ESXi 5.0. How do you migrate a VM from VMware Server 2.0 to ESXi 5.0? Does anyone know a better way to convert from VMware Server to ESXi? I have some old VMware Server 2.0.1 (I believe) virtual machines which I would now like to see converted, and run off my new ESXi server. Thanks everyone for your input.
"Unconscious" versus "nonconscious" in everyday dialogue These words have subtle distinctions in related research fields, but even there are often considered interchangeable or just an matter of tradition/trendiness in a particular field. Since I am a bit entrenched in that environment, I don't have perspective on what is common outside of academia. In normal, everyday conversation, which is more common? If they are both used, do they have commonly understood distinctions in meaning? From what I know, unconscious stands for something, or better yet someone who is usually conscious, but currently temporarily unconscious. For example when you hit your head and fall down unconscious. After the accident I was unconscious for hours. On the other hand, nonconscious, which isn't used very often, stands for something what's never conscious. Like a plant, a tree, or for the sake of the argument, a rock. In these horrible winter temperatures the trees are lucky to be nonconscious. ... and that leaves "subconscious" ... :) I don't think you can mix up that one with any of those in my answer, can you? ha, no. Just pointing it out as filling in the next relevant definition (advanced cognitive processing that occurs automatically without awareness). However, I do think that "unconscious" is often used for all three definitions, and "nonconscious" for both of the last two. ... which was the meaning I originally had in mind when I posted this question. I am laughing, because apparently, it applies to neither. I'm not sure if you can meaningfully talk about trees being "lucky" in the same sentence as pointing out that they're not even sentient enough to feel their own pain. @FumbleFingers: I think I can call them lucky from my own perspective without their feeling the same, but that's a rather philosophical question. Hmm. Could rocks not also be "lucky"? Or only things that are alive? What about a comet that "luckily" doesn't collide with anything? 
@FumbleFingers I don't mean to argue, I mean c'mon, you're 3 times my age and you have a degree in linguistics. I don't think I could tell you what's right and what's wrong, but I really believe this works. If luck is defined as "an event or a series of events that happened by chance", then yeah, either believe in God and say that trees were created on purpose by a higher being, thereby destroying the word "luck" completely, or be an atheist and state that "what happens in the world is a series of events defined by chance," or whatever. And here comes the philosophy again. Ah, I'm just yanking your chain! :) Of course you're right that you can speak of nonsentient or inanimate things being "lucky". But the less self-awareness/personal identity that thing has, the more I think it's metaphorical usage. Whereas, as you say, "luck" is from the speaker's perspective, not the thing being spoken of. I wasn't meaning to be contentious either - just trying to flag up that the meaning of "lucky" shifts according to the referent. And - as you rightly point out - it may also reflect the speaker's philosophical stance anyway.

Nonconscious pretty much flatlines against unconscious in Google Ngrams, as one might expect. I'm not going to present any particular semantic distinctions. I'm sure there will be conflicting distinctions, and personally I don't think the word nonconscious has sufficient currency to be worth defining anyway, outside of specialised contexts where it's effectively "trade jargon" used to avoid the connotations of unconscious/subconscious. Nonconscious is the preferred term in many (most?) modern academic contexts. You are right, I did not use Google Ngrams, I just did a basic Google search and read the Wikipedia page. Thank you for pulling this up. @Angada: I'm not an academic in any of the modern fields that would use it, but I think in general nonconscious simply means lacking any element of consciousness in those contexts.
A thing that never lived or thought could be nonconscious, just as it could be nonliving (though we're more likely to hyphenate that one). That is helpful insight, thank you. I would never have thought of that, as, in my field, it means what I now think most would call "subconscious." But it makes sense. I knew I was out of touch with common usage. @Angada: I know subconscious isn't normally used in fields like cognitive psychology these days, partly because it's a vague term with unwanted connotations from a largely-discredited past. A layman might think of the subconscious as some kind of "inner persona" that generates dreams, for example, but I don't think any serious researcher would want those connotations. Yes, exactly. So, what word do you think would best communicate to a layperson the automatic environmental processing that occurs in the brain outside of conscious awareness? In academia, I would say nonconscious. But apparently that means "never conscious" in the real world. This is exactly the point of my question.

I think nonconscious, unconscious, and subconscious all mean the same thing: mental processes that people are not consciously aware of. According to Freud's topographical model of the mind, there are three levels of mind: The Conscious Mind (reality), The Preconscious Mind (Morality, our beliefs, values, and ideals), and The Subconscious Mind (Pleasure, what people want: food, sex, possessions, power, etc.). The preconscious can be brought to the conscious mind easily, but the subconscious cannot. We are completely unaware of the subconscious. For instance, habits are completely subconscious automatic processes that are guided by beliefs, expectations, and rewards put in place by our society. Nonconscious, unconscious, or subconscious would be correct to use — but personally, I think subconscious is best because it suggests we are still affected at this level of mind, whereas unconscious or nonconscious suggest we cannot be influenced by our surroundings.
Nonconscious is a cognitive term, and means unawareness. Unconscious is a psychoanalytic (Freudian) term, mainly emotional. Casualty departments deal with unconscious patients all the time, and don't believe Freud invented the term.
//
//  cardManager.cpp
//  Card
//
//  Created by Harsha Srikara on 12/24/18.
//  Copyright © 2018 Harsha Srikara. All rights reserved.
//

#include "cardManager.h"
#include <algorithm> // std::shuffle
#include <chrono>    // std::chrono::system_clock (shuffle seed)
#include <iostream>
#include <random>    // std::default_random_engine

// constructor
cardManager::cardManager() {
    insert(sortedDeck, "spades");
    insert(sortedDeck, "clubs");
    insert(sortedDeck, "diamonds");
    insert(sortedDeck, "hearts");
    // randomization process
    currentDeck = sortedDeck;
    currentDeck = randomize(currentDeck);
}

void cardManager::insert(std::vector<card *> &deck, std::string suitType) {
    for (int i = 1; i < 14; i++) {
        card *newCard = new card(suitType, i);
        deck.push_back(newCard);
    }
}

// destructor
cardManager::~cardManager() {
    destroy();
}

// deletes all the card pointers
// this code will need to eventually be brought back once the pointer issues have been resolved
void cardManager::destroy() {
    //std::cout<<"deleting sorted deck"<<std::endl;
    deleteDeck(sortedDeck);
}

void cardManager::deleteDeck(std::vector<card *> &deck) {
    for (int i = (int)deck.size() - 1; i > -1; i--) {
        if (deck[i] != nullptr) {
            //std::cout<<i<<" "<<deck[i]<<" "<<*deck[i]<<" ";
            delete deck[i];
            deck[i] = nullptr;
            //std::cout<<deck[i]<<std::endl;
            deck.pop_back();
        }
    }
}

// deck modifiers
void cardManager::generateDrawPile() {
    drawPile = randomize();
}

void cardManager::addDeck() {
    insert(sortedDeck, "spades");
    insert(sortedDeck, "clubs");
    insert(sortedDeck, "diamonds");
    insert(sortedDeck, "hearts");
    drawPile = sortedDeck;
    drawPile = randomize(drawPile);
}

// getters
const std::vector<card *> &cardManager::getSortedDeck() const { return sortedDeck; }
const std::vector<card *> &cardManager::getCurrentDeck() const { return currentDeck; }
const std::vector<card *> &cardManager::getDrawPile() const { return drawPile; }
const std::vector<card *> &cardManager::getDiscardPile() const { return discardPile; }

// getCard functions
card *cardManager::drawCard() {
    card *temp = drawPile[drawPile.size() - 1];
    drawPile.pop_back();
    return temp;
}

card *cardManager::drawDiscardPile() {
    card *temp = discardPile[discardPile.size() - 1];
    // discardPile[discardPile.size()-1] = nullptr;
    discardPile.pop_back();
    return temp;
}

void cardManager::addToDiscardPile(card *&newCard) {
    discardPile.push_back(newCard);
}

card *&cardManager::getTopDiscard() {
    return discardPile[discardPile.size() - 1];
}

// randomize functions
std::vector<card *> &cardManager::randomize() {
    unsigned seed = (unsigned)std::chrono::system_clock::now().time_since_epoch().count();
    std::shuffle(currentDeck.begin(), currentDeck.end(), std::default_random_engine(seed));
    return currentDeck;
}

std::vector<card *> &cardManager::randomize(std::vector<card *> &deck) {
    unsigned seed = (unsigned)std::chrono::system_clock::now().time_since_epoch().count();
    std::shuffle(deck.begin(), deck.end(), std::default_random_engine(seed));
    return deck;
}

// calls print with cout
void cardManager::print() {
    print(sortedDeck, std::cout);
}

// calls print on the deck
std::ostream &cardManager::print(const std::vector<card *> deck, std::ostream &out) {
    for (size_t i = 0; i < deck.size(); i++) {
        out << i << " - " << *deck[i];
    }
    return out;
}

// calls print on the deck - const version
std::ostream &cardManager::print(const std::vector<card *> deck, std::ostream &out) const {
    for (size_t i = 0; i < deck.size(); i++) {
        out << i << " - " << *deck[i];
    }
    return out;
}

// operator overload
std::ostream &operator<<(std::ostream &out, const cardManager &entry) {
    entry.print(entry.getDrawPile(), out);
    return out;
}
Display of Refundable Taxes at Time of Shopping

The Enhanced and Simplified Distribution messages support the transmission of specific Tax and Tax Summary information. As this is required for some jurisdictions, the Airline may advise the Seller of the total amount of the refundable taxes/charges. The tax name, code and country, along with the refundable indicator, can be disclosed to the traveler before purchase.

Signifying a Tax is Refundable

To signify that a specific tax is refundable, the Airline may identify the unique tax using the Country Code, Tax Code and Tax Name elements and by setting the refundability indicator, RefundInd, to true.

<TaxSummary>
  <Tax>
    <Amount CurCode="CHF">1.00</Amount>
    <ApproximateInd>false</ApproximateInd>
    <Country>
      <CountryCode>CH</CountryCode>
      <CountryName>Country Name</CountryName>
    </Country>
    <RefundInd>true</RefundInd>
    <TaxCode>Country Code</TaxCode>
    <TaxName>Name of Tax</TaxName>
    <TaxTypeCode>Applied</TaxTypeCode>
  </Tax>
</TaxSummary>

Summarizing the Refundability of Taxes

The Tax Summary element contains the Refund Method Text and Total Refund Tax Amount, which summarize the taxes that are refundable. In the example below there are two taxes (both assumed to be 1.00 CHF, one refundable and one not). The summary shows a Total Refund Tax Amount of 1.00 CHF and a Total Tax Amount of 2.00 CHF.

<TaxSummary>
  <AllRefundableInd>false</AllRefundableInd>
  <ApproximateInd>false</ApproximateInd>
  <Tax>...</Tax>
  <Tax>...</Tax>
  <RefundMethodText>(for e.g.) Taxes are automatically refunded to the original form of payment.</RefundMethodText>
  <TotalRefundTaxAmount CurCode="CHF">1.00</TotalRefundTaxAmount>
  <TotalTaxAmount CurCode="CHF">2.00</TotalTaxAmount>
</TaxSummary>

The Refund Method Text element may be used to advise the Seller how taxes are refunded. For example, a message returned advising that taxes are automatically refunded to the original form of payment, or that more information is available on the airline's website:

<TaxSummary>
  <RefundMethodText>(for e.g.) Taxes are automatically refunded to the original form of payment.</RefundMethodText>
</TaxSummary>

<TaxSummary>
  <RefundMethodText>(for e.g.) For more information on how to request a refund for taxes, please see http://.../taxinformation</RefundMethodText>
</TaxSummary>
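The totals in the summary above follow mechanically from the per-tax amounts and refundability indicators. A minimal sketch in Python (the dictionary layout is illustrative only, not part of the actual message schema):

```python
def summarize_taxes(taxes):
    """Compute the TaxSummary totals from per-tax amounts and RefundInd flags."""
    total = sum(t["amount"] for t in taxes)
    refundable = sum(t["amount"] for t in taxes if t["refund_ind"])
    return {
        "AllRefundableInd": total > 0 and refundable == total,
        "TotalRefundTaxAmount": refundable,
        "TotalTaxAmount": total,
    }

# The example above: two 1.00 CHF taxes, one refundable and one not
taxes = [
    {"amount": 1.00, "refund_ind": True},   # refundable tax
    {"amount": 1.00, "refund_ind": False},  # non-refundable tax
]
summary = summarize_taxes(taxes)
# TotalRefundTaxAmount is 1.00, TotalTaxAmount is 2.00, AllRefundableInd is false
```

This reproduces the worked example: the refundable total (1.00 CHF) is strictly less than the overall tax total (2.00 CHF), so AllRefundableInd is false.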
Are you looking for a thrilling career that promises endless possibilities for growth and development? Have you ever considered exploring the exciting world of software development? With the ever-growing demand for technology, software development is rapidly becoming one of the most sought-after career paths. From creating complex programs to developing innovative applications, software development offers a rewarding and dynamic career for those who are passionate about technology and love problem-solving. In this article, we will take a closer look at the world of software development, its potential for growth, and how you can unlock your full potential in this exciting industry. Join us as we dive into the fascinating world of software development and unlock your true potential. Unlock Your Potential: The Thrilling World of Software Development Software development can be a daunting field to pursue, but with the right mindset and tools, it can be the most rewarding and fulfilling career choice for anyone with a passion for technology. Unleash your potential as a software developer and join the thrilling world of building, testing, and deploying cutting-edge applications that change people’s lives. As a software developer, you get to work on a wide range of projects, from web and mobile apps to artificial intelligence and machine learning applications. You get to solve complex problems using code and programming languages, designing user interfaces that are intuitive and efficient, and collaborating with other developers on the team to bring your ideas to life. At the heart of software development is the ability to learn, adapt, and constantly improve. With every project comes new challenges, and with every challenge comes an opportunity to grow your skills. Whether you’re building your own open-source projects or contributing to a team effort, the act of coding is a continuous process of experimentation, iteration, and innovation. 
So if you’re looking to unlock your potential and explore the exciting world of software development, there’s never been a better time to get started. With a wealth of online resources, coding bootcamps, and community-driven meetups, you can join a network of passionate developers who are making a difference in the world, one line of code at a time. Discover the Exciting World of Programming The world of programming is a vast and exciting one, full of endless possibilities. It is a world that continues to expand rapidly, with new technologies and ideas being developed every day. As a programmer, you have the opportunity to create solutions to problems, build new applications, and innovate in ways that can change the world. One of the most exciting things about programming is the ability to work on projects that interest you. Whether it’s creating a mobile app, designing a website, or developing a game, programming allows you to turn your ideas into reality. The possibilities are endless, and the only limit is your imagination. Programming can also be incredibly rewarding. The feeling of satisfaction that comes from solving a tricky problem or building a working application from scratch is hard to beat. Plus, the demand for skilled programmers is always high, meaning that there are plenty of job opportunities out there for those with the right skills and experience. If you’re interested in exploring the exciting world of programming, there are plenty of resources available to help you get started. From online tutorials and forums to coding bootcamps and college courses, there are plenty of ways to learn the skills you need to succeed in this field. So why not start exploring today and see where the world of programming can take you? Explore the Vast Array of Career Opportunities in Software Development Career opportunities in software development span a vast array of different fields and specializations, from web development to machine learning and beyond. 
Whether you’re interested in honing your coding skills, learning new frameworks, or exploring the latest in AI and data analytics, you can find a career path that matches your interests and goals. Other popular fields in software development include mobile app development, gaming, data analytics, and machine learning. In mobile app development, you might create mobile applications for iOS or Android devices, while in gaming, you could work on everything from graphics and animations to game mechanics and story development. In data analytics, you would analyze large datasets to identify patterns and trends, while in machine learning, you would develop algorithms that enable machines and computer systems to learn and make decisions on their own. No matter what your specific interests and skill sets may be, there are plenty of pathways and opportunities to explore within software development. By expanding your horizons, learning new skills, and staying up-to-date with the latest trends and technologies, you can build a rewarding and fulfilling career in this exciting and ever-evolving field. Learn the Essential Tools and Techniques of Software Programming The world of software programming can be overwhelming for beginners, but learning the essential tools and techniques can help jumpstart your journey in the field. One important tool for software programmers is a programming language like Python, Java, or C++. Each language has its own syntax and structure, which needs to be mastered to write effective code. Understanding programming logic and algorithms also plays a crucial role in developing software applications. Another important tool for software programming is an integrated development environment (IDE), a software application that provides comprehensive facilities to computer programmers for software development. IDEs like Eclipse and Visual Studio help with code editing, debugging, and testing, making the coding process more efficient.
Learning how to use IDEs can speed up software development and improve workflow. Understanding version control systems is also essential in software programming. Git, for instance, is one of the most popular version control systems out there. It allows developers to manage the codebase of a software project and collaborate with others. By using version control, developers can keep track of every change made to the codebase and work collaboratively without conflicting with each other’s work. Lastly, learning code optimization techniques can make your software application run smoother, use fewer resources, and provide a seamless user experience. Improving code structure, optimizing algorithms, and reducing computational complexity can lead to better performance and functionality. In conclusion, mastering the essential tools and techniques of software programming can be daunting, but it is essential for anyone looking to excel in the field. The knowledge of programming languages, IDEs, version control systems, and code optimization techniques can help you develop efficient, innovative, and high-performing software applications. Embrace the Promising Future of Software Development and Excel in Your Career Growing advancements in technology suggest a very promising future for software development. The software industry is constantly evolving, and it provides excellent opportunities for individuals to excel in their careers. Being a developer can be one of the most rewarding and fulfilling career choices. However, it is vital to embrace and adapt to new technological advancements in the industry in order to remain competitive. The field of software engineering is vast and includes several sectors such as web development, mobile app development, software architecture, and cloud computing. Whatever field you choose, it is crucial to learn new programming languages and tools and stay updated on industry trends to remain relevant.
Also, being a software developer requires excellent problem-solving skills, as you will often be responsible for designing, coding, and testing software applications to meet clients’ requirements. In today’s digital age, there is an increasing demand for software developers in almost every sector that requires software solutions. With diverse opportunities available, it is essential to begin by selecting a niche and specializing in a particular programming language. Whether you decide to be a front-end developer or a back-end developer, there are many resources available to enable you to excel in your specialized niche. With the right attitude and mindset, an individual can seize opportunities and build a successful career as a software developer. In conclusion, it is an excellent time to embrace the promising future of software development. Keep exploring new programming languages and tools and stay updated on industry trends to remain competitive. Don’t be afraid to specialize and be the best in your niche. With the right skillset and mindset, the opportunities are limitless for a bright future in the industry. It is a journey that can be challenging but rewarding in the long run. We hope that this article has given you a glimpse into the exciting world of software development! Whether you’re a tech enthusiast or someone with a passion for problem-solving, the potential for growth and creativity is truly limitless in this field. With the right mindset and a willingness to learn, anyone can unlock their potential and embark on a thrilling journey in software development. So why not take the plunge and see where it takes you? Who knows, you might just surprise yourself with what you can create and achieve! - About the Author - Latest Posts Hi there! I’m Cindy Cain, a writer for Digital Louisiana News. I’m a native of the Bayou State, and I’m passionate about sharing the stories of my home state with the world. I’ve always loved writing, and I’m lucky enough to have turned my passion into a career.
I’ve worked as a journalist for over 10 years, and I’ve had the opportunity to cover a wide range of stories, from politics and crime to food and culture. I’m especially interested in telling the stories of people who might not otherwise be heard. I believe that everyone has a story to tell, and I’m committed to using my writing to give a voice to those who might not otherwise have one.
Online bus ticket booking system: project reports and source code

- Download the online ticket booking system project source code in ASP.NET and the project report with PPT; online ticket booking system UML diagrams and data flow diagrams.
- Development of an online bus ticket reservation system: the online bus ticket reservation system is a web-based application that allows report generation.
- A project presentation on an online bus booking system: in this module the user can post a comment on a particular bus; report generation.
- Bus reservation system project report in ASP.NET; online bus reservation system with full source code and database; online ticket booking system in ASP.NET.
- In this project the bus reservation system and all the other tasks on the bus system application can be done through a menu-based program.
- BCA/MCA/B.Tech project topics list and project reports; C++ project: bus reservation system in Code::Blocks.
- Hire a whole Greyhound bus: group bookings, group travel of 5 or more, commercial sales; your confirmation number was emailed to you after you made an online booking.
- Bus reservation software: TravelCarma's online bus booking system allows you to automate your bus bookings and payments for different bus routes and intercity transfers.
- Bus portal development: Axis Softech provides a bus reservation system, bus booking engine, and online bus booking system integration for bus operators.
- A project report on a bus reservation system submitted in partial fulfillment for the award of the degree of bachelor computer.
- Need to report the video? Bus reservation system & online bus ticket booking system - duration: 8:35; 1 Crore Projects, 14,590 views.
- Bus ticketing management system is a ticketing project for bus management; get all documents like abstract, report and project code.
- Bus reservation system is an online bus booking system designed to automate online ticket purchasing; an easy to install and use bus ticket reservation system.
- This website provides an online bus ticket booking system project report in ASP.NET, Java and PHP; you can download this project report for the bus ticket booking system free.
- A project report on a bus reservation system submitted in partial fulfillment for the award of the degree of Post Graduate Diploma in Information Technology.
- Online ticket booking system project explanation; "pls sir sent the online bus ticket reservation system in system mini project source code and report with".
- An entity relationship diagram showing a bus reservation system; you can edit this entity relationship diagram using the Creately diagramming tool and include it in your report.
- tsrtconlinein is a newly launched website for TSRTC advance online booking/reservation; book your tickets online at tsrtconlinein - Telangana State Road.
- About the bus reservation system C++ project: basically four features are available in this project, but you can write your own code to add more features.
- This project, online bus ticket booking, leverages its capabilities in online ticket booking; bus reservation system for a main MCA project; report format.
- This website provides an online ticket booking system project report; you can download the project report free; you can also use this project report for a bus ticket booking system.
- Bus reservation system project documentation; online bus reservation; a project report; Windows; project on bus reservation system; see the WPF SDK for developers.
- Login page of the MSRTC online reservation system website, asking for username and password.
- Final year projects | online bus reservation; more details: need to report the video? Bus reservation system & online bus ticket booking system.
- Free download: PHP project bus reservation system with source code, database, presentation, and project report for final-year IT engineering students.
- Hi everyone, I want to create a bus online ticket reservation system in an ASP.NET web application using C#. I have partially developed my bus database but I had a doubt.
- Project on an airline reservation system; acknowledgement; reservation: ticket report, PNR, flight code, destination place, source place, departure time.
- Online Bus Ticket Reservation System (OBTRS); student ID: student504427; student: Tuvshinbayar Davaa; Aptech Banaswadi, Kalyan Nagar.
NetLogo User Community Models

## WHAT IS IT?

A nuclear reactor is a machine that produces heat using nuclear fission. A nuclear reactor is the part responsible for generating steam in a nuclear power plant, analogous to a power boiler that burns oil, gas or other fuel to produce heat and steam. The steam produced by the reactor is used in one or more steam turbines that convert the mechanical energy into electrical energy. One of the benefits of this method over other forms of power generation is that there are no emissions, compared with fossil-burning boilers, and there is no need to flood large areas as with hydroelectric plants. Clear disadvantages are the high impact in case of accident, as in the famous catastrophes of Chernobyl and Fukushima. Another challenge of this technology is that part of the nuclear waste needs a long time for its radioactivity to decay to a level that is not dangerous to life and the environment. In this project I want to show how this technology works, using an Agent Based Model to explain the process of a chain reaction, what elements are used to control it, and how we can improve the controllability of this process using a Proportional Integral Derivative (PID) controller.

## HOW IT WORKS

The whole process is based on a chain reaction. The process starts when a neutron is released; the neutron travels at a constant speed, and when it hits a fuel atom it releases other neutrons and also releases energy. The control rods are placed inside the reactor to absorb the free neutrons and keep the reaction under control, preventing the number of neutrons and the energy from increasing beyond what the reactor was designed for; in real life this would lead to an emergency shutdown.

## HOW TO USE IT

In order to use the model you can define the amount of fuel; it has a range from 5 to 50.

## THINGS TO NOTICE

See how the energy generation variability changes in manual mode and in automatic mode.
## THINGS TO TRY

You can try to change the control-gain, the control-integral and the control-derivative, and observe the impact each parameter has on the process stability and on the average error between setpoint and real value.

## EXTENDING THE MODEL

There are other components that have an impact on the reaction that could be implemented in the future, e.g. better modelling of the cooling water, which could improve the fidelity of the model. The implementation of a turbine could also be interesting because of the addition of a dead time to the process, which would make the process control more challenging for a PID controller.

## NETLOGO FEATURES

## RELATED MODELS

There are two models in NetLogo that show how a reactor works: one is Reactor Top Down and the second is Reactor X-Section.

## CREDITS AND REFERENCES

(back to the NetLogo User Community Models)
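The control-gain, control-integral and control-derivative parameters correspond to the three terms of a textbook PID controller. As a rough illustration outside NetLogo, here is a minimal discrete-time PID loop in Python (the plant model and all gain values are made up for the demo, not taken from the actual NetLogo model):

```python
def pid_step(setpoint, measured, state, kp, ki, kd, dt):
    """One update of a discrete PID controller.
    state is (integral, previous_error); returns (output, new_state)."""
    integral, prev_error = state
    error = setpoint - measured
    integral += error * dt                  # I term: accumulated error
    derivative = (error - prev_error) / dt  # D term: rate of change of error
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Drive a crude first-order "reactor power" process toward a setpoint of 100
power, state = 0.0, (0.0, 0.0)
for _ in range(200):
    u, state = pid_step(100.0, power, state, kp=0.5, ki=0.1, kd=0.05, dt=1.0)
    power += 0.2 * u                        # toy plant response to control output
```

With these gains the power overshoots and then settles close to the setpoint; changing the three gains shifts the balance between response speed, overshoot and steady-state error, which is the kind of trade-off the model's sliders expose.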
Module 3 : Enterprise Architecture and The Cloud

With the advent of the cloud, there was an immediate worry: What happens to the existing enterprise software and hardware? What happens to my existing apps? Do I need to re-architect everything for the cloud? The truth is that most enterprises are taking only baby steps towards Cloud Computing, e.g. email, Salesforce, web apps/sites, office apps etc. The main reason for this is the initial skepticism about a new buzzword, as well as a lack of enterprise readiness to adopt a Cloud strategy head on. The main reason for the slow acceptance is that enterprises are often simply not aware of the business challenge they want to tackle with the Cloud. The most appropriate first step is to understand your problem and seek the question to which the answer is "Cloud". On the other hand, enterprises have already worked on modernization and rationalization of most of their legacy apps. They have also already implemented a reusable, modular architecture in their enterprise through the use of SOA (Service Oriented Architecture) principles. SOA governance, whose need is now being understood by many enterprises, is an added winning advantage: because it is already in place, the SOA governance framework can be extended to govern cloud services. So is the Cloud the end of enterprise software? Do enterprises really not need to buy hardware any more? The answer to this question largely depends on the business problem of the enterprise. For a large telecom company or a bank, doing away with the entire enterprise software or hardware stack makes little sense. However, for college graduates looking to establish their own startups, the need to spend a lot of money on hardware is virtually eliminated. They can rent an IaaS solution (the most common being Amazon EC2, Elastic Compute Cloud, instances) and get to work. In cases like startups or pilot projects, Cloud solutions are actually a boon.
What are the big companies doing about cloud? Oracle, IBM and Microsoft?

Disclaimer: The following thoughts are my own analysis, and most of it is from the content of Jason Bloomberg's conference. They do not reflect the views or opinions of my company whatsoever.

Oracle has launched a suite of products around IaaS, PaaS and SaaS. Oracle offers Sun hardware for its IaaS solutions and is essentially hosting Oracle middleware on the cloud. It calls its SaaS solutions Oracle Fusion Apps.

IBM's cloud strategy is not aimed at end-consumers. Its targets are the big enterprises and large telecom providers who actually provide enterprises with the infrastructure to host their cloud solutions, much like ISPs (Internet Service Providers). IBM's solutions are mainly around PaaS.

Microsoft brands its cloud solutions under the Windows Azure name. Windows Azure Platform and Windows Azure Platform Appliance are its PaaS offerings, whereas its major SaaS offering is the Office 365 apps.

So how do I get out of this mess? To use or not to use the Cloud?

The answer, amid the enormous range of Cloud solutions, comes down to two words: Architecture and Governance. It is essential to identify the business problems the Cloud best addresses, and to see where the Cloud fits into the overall IT strategy of the enterprise: what are the pros and cons of the Cloud versus any other alternatives, and how does the Cloud fit into the overall governance framework? The important thing to remember is not to grab a solution just because your favourite vendor has launched it, but to analyse objectively whether your enterprise really fits into that readymade suit.

What can be a sample Cloud Computing roadmap?

Disclaimer: Directly from the docs.

A simple Cloud Computing roadmap can be enumerated as follows:
- Culture/support assessment - Are you an early adopter?
- Define Goals - Financial, Operational, Competitive, Service levels
- Quantify Benefits - OpEx vs CapEx, Performance targets, Top-line benefits
- Define the role of Cloud for business & IT
- Mitigate risk - Governance, Security
- Choose Cloud models - public, private, hybrid
- Create migration plans and milestones.

What can be a phased strategy for migrating to the cloud?

This article elaborates the six main phases of Cloud migration for Amazon Web Services (AWS):

Phase 1: Cloud Assessment Phase
Phase 2: Proof of Concept Phase
Phase 3: Data Migration Phase
Phase 4: Application Migration Phase
Phase 5: Leverage the Cloud Phase
Phase 6: Optimization Phase

I am a product developer. Do I have to redesign my products for the cloud?

The essential thing to understand is that unless applications are re-architected to take advantage of Cloud benefits like elasticity and fault tolerance, there is little sense in using a Cloud solution at all. As the phased migration strategy suggests, it is very important to take incremental steps to architect your solutions for the Cloud. When you design with the aim of leveraging the elasticity and fault tolerance benefits of the cloud, you will end up with a better-architected app. You don't know ahead of time how many Cloud instances your app will be running on, so it makes perfect sense to spend a little time initially and design your app FOR the Cloud.

Can I ensure my ACID transactions in the Cloud?

We have grown up reading about databases and the magic word ACID - Atomic, Consistent, Isolated, Durable - and so we believe that all database transactions should necessarily be ACID, for several reasons. However, with the advent of transactions in the Cloud, it is no longer possible to have immediate consistency of data at all instances. What the Cloud assures is Eventual Consistency - i.e. data will be consistent after a set amount of time passes since an update. ACID is gradually giving way to BASE in the Cloud context.
Basic Availability - the Cloud supports partial failures without leading to a total system failure (Cloud environments are inherently partition tolerant).

Soft State - any change in state must be maintained through periodic refreshment.

Eventual Consistency - it is okay to serve stale data some of the time.

The BASE requirement for transactions in the Cloud also suggests that for companies where real-time data and accuracy are of prime importance, the Cloud might not be such a good solution - a clear example of where the Cloud cannot be a medicine for all ills. Examples include real-time inventory management for product availability, and banks. (Banks may not want to adopt the cloud for reasons other than BASE - security and government regulations may be major challenges.)

The main takeaway for this module is that the adoption of cloud depends on what your unique problem is. For the Cloud, one size does not fit all. An enterprise needs to carefully weigh its app's requirements for scalability and elasticity and then decide which Cloud deployment option is right for it.
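The ACID-versus-BASE distinction can be made concrete with a toy sketch. The class and names below are purely illustrative (not any real cloud API): writes land on a primary immediately but reach a replica only after a replication lag, so a read from the replica right after a write can return stale data - exactly the "eventual consistency" behaviour described above.

```python
# Toy illustration of eventual consistency (illustrative names, not a
# real cloud API): writes go to a primary and replicate to a replica
# after a delay, so a read right after a write may return stale data.

class EventuallyConsistentStore:
    def __init__(self, replication_lag=3):
        self.primary = {}
        self.replica = {}
        self.lag = replication_lag
        self.pending = []     # (apply_at_tick, key, value)
        self.tick = 0

    def write(self, key, value):
        self.primary[key] = value
        self.pending.append((self.tick + self.lag, key, value))

    def read_replica(self, key):
        return self.replica.get(key)

    def advance(self):        # one unit of time passes
        self.tick += 1
        still_pending = []
        for apply_at, k, v in self.pending:
            if apply_at <= self.tick:
                self.replica[k] = v      # replication catches up
            else:
                still_pending.append((apply_at, k, v))
        self.pending = still_pending

store = EventuallyConsistentStore()
store.write("balance", 100)
print(store.read_replica("balance"))   # None: replica is still stale
for _ in range(3):
    store.advance()
print(store.read_replica("balance"))   # 100: eventually consistent
```

A bank reading `balance` from the replica during that window would see the old value, which is why BASE semantics are a poor fit where real-time accuracy is paramount.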
OPCFW_CODE
Twilio creates developer platform for T-Mobile NB-IoT network October 25, 2018 Twilio has created a developer platform for the T-Mobile NB-IoT network, the California-based cloud communications company announced last week at Signal, its customer and developer conference in San Francisco. Twilio Narrowband is said to be the first developer platform for NB-IoT in the USA. The cellular low-power wide-area network technology should reduce prices and increase battery life for intermittent low-bandwidth connections. Twilio also announced the Twilio Breakout software development kit (SDK) to help developers start taking advantage of the optimisations offered by NB-IoT networks. NB-IoT was designed for the majority of IoT devices that don’t need a lot of bandwidth. With NB-IoT, devices can consume a fraction of the battery power they do with previous cellular M2M devices, enabling connectivity at a fraction of the cost. NB-IoT is built for smaller data packets, such as timestamps, GPS coordinates and status updates for a variety of industries, from smart metering to health device monitoring. With the power and cost efficiencies generated by NB-IoT, the market is ripe to open for new categories of lower cost, battery efficient internet-connected devices that don’t exist today. T-Mobile is the first to deploy a NB-IoT network in the USA, and it launched nationwide in July. “Together with Twilio, the un-carrier is unleashing developers and building an entirely new ecosystem for IoT solutions,” said John Legere, chief executive officer of T-Mobile. “I can’t wait to see what awesomeness they create. Once again, T-Mobile is at the forefront of innovation, enabling a world where anything and everything can be connected. 
The possibilities are endless.” The developer platform is comprised of three components: - Narrowband SIMs: Since the introduction of Twilio Programmable Wireless two years ago, Twilio has been focused on getting developers started quickly with cellular connectivity, instant self-service on-boarding, no contracts required and two-day shipping in the USA. - Narrowband IoT developer kit: A limited supply developer kit including an Arduino-based development board and Grove sensors specifically chosen for low-powered wide-area products. It also features the U-Blox LTE Cat NB1 Sara-N410 hardware module, certified for the T-Mobile narrowband network. - Breakout SDK: Twilio Breakout SDK reduces the complexity of hardware and heterogeneity of different networks, allowing developers to focus on creating their NB-IoT device deployment. The SDK handles tasks such as network registration and intelligently optimises communications between devices and cloud services based on the network capability requirements across IP, non-IP and SMS. The two companies first teamed up in 2016 with the launch of developer tools for cellular IoT, opening up the opportunity for IoT developers to build in cellular connectivity for the first time. “The introduction of T-Mobile’s narrowband IoT network provides a tremendous opportunity for developers who are innovating and building new categories of devices that don’t exist today,” said Chetan Chaudhary, general manager and vice president of IoT at Twilio. “By applying Twilio’s proven approach for cellular IoT connectivity to narrowband, it will remove barriers so developers can focus on building devices and dreaming up new use cases that don’t yet exist. We can’t wait to see what you build.” Also at Signal, Twilio introduced the Super SIM, built on its mobile core infrastructure, and an expanded set of tier-one carrier relationships with Singtel, Telefonica and Three Group, which power Twilio Programmable Wireless. 
With the Super SIM, developers can use a single API to deploy IoT devices globally with the confidence that Twilio can optimise network performance on tier-one carriers based on the location in which the device is deployed.
OPCFW_CODE
A few weeks ago, I noticed Chrome Canary 64-bit would either refuse to launch (a window) or would launch as Not Responding in Windows. When I opened Task Manager to kill Canary, though, I saw this: Canary had 32-bit processes, despite the 64-bit installation. At first I figured I'd either absentmindedly installed the 32-bit build or forgotten to switch over to the 64-bit build when it first became available, so I reinstalled the 64-bit build in place. That fixed the problem for a few days, until it returned. At this point I knew for sure the error wasn't my doing, so I filed a bug. Turns out Google deliberately pushes 32-bit builds to 64-bit installs on high-RAM (apparently ≥ 8 GB, from my experience) machines to find memory-related bugs. From two developers in that bug thread:

On occasion, we send a 32-bit build to 64-bit installs. The most frequent case is an ASAN build to help us find memory-related bugs in Chrome. It's most likely that you received one of these builds (chrome://version will tell you), and that you'd be back to 64-bit canary the next day.

Some background: We've recently started shipping ASAN instrumented builds of Chrome to canary users. These builds contain instrumentation that tracks down heap memory errors, and provide extremely useful bug reports to Chrome developers. As grt@ mentions, if you're on one of these builds you'll see a 'SyzyASan' label if you navigate to chrome://version.
– To limit user pain we filter for machines with sufficient memory, as there is a significant memory overhead to the instrumentation.
– To further limit user pain we randomly select 1 in every 20 users every day. That is, for any day's update you have a 1 in 20 chance of receiving an ASAN instrumented build. The next day's update still has the same 1 in 20 chance, so most often you should end up back on a non-instrumented build.
– Unfortunately, the technology is 32-bit only right now.
In order to increase the audience of potential users we also ship to 64-bit users, intentionally downgrading them to 32-bit builds for a day. We are working on making the instrumentation work natively in 64-bit mode, but that is at least 6 months away. Finally, if the instrumentation is rendering your browser completely unusable we do have an opt-out mechanism in place. I’ll let you decide the morality of pushing broken 32-bit builds to 64-bit users, but at least we have an answer. Currently the only fix I know of is an in-place reinstall.
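The developers' "1 in 20 chance every day" figure compounds over consecutive daily updates, which explains why regular Canary users keep running into these builds. A quick back-of-the-envelope calculation (assuming independent daily draws, as the quote implies):

```python
# Given the stated 1-in-20 daily chance of being pushed an ASAN build
# (assuming independent daily draws), the probability of receiving at
# least one instrumented build over a stretch of daily updates:

def p_at_least_one_asan(days, daily_p=1 / 20):
    return 1 - (1 - daily_p) ** days

print(round(p_at_least_one_asan(1), 3))    # 0.05
print(round(p_at_least_one_asan(7), 3))    # 0.302 - nearly 1 in 3 per week
print(round(p_at_least_one_asan(30), 3))   # 0.785 over a month
```

So over a month of daily updates, most high-RAM Canary users would hit at least one ASAN day, matching my repeated sightings.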
OPCFW_CODE
Time is running out to enjoy special Early Bird pricing and attend this year's TechDays. With a fantastic speaker line-up, this is a not-to-miss event. Early Bird pricing ends on February 28th, so don't miss out on this unique opportunity and save up to €125!

The TechDays main conference takes place on April 26th and 27th. During the main conference, 6 Developer and IT-Pro tracks bring you a mix of new technology and in-depth content on current technology, with over 60 sessions planned. Have a look at the session listing and you will discover which sessions have been confirmed so far. You will also notice that Microsoft has invited some top-notch speakers who are eager to share their expertise.

On the third day of TechDays, April 28th, we host a Deep Dive day with 4 different tracks for developers and IT-Pros. As a developer you can choose between two tracks that go in-depth and focus on best practices. Take a look at the constantly updated session listing to filter and browse through confirmed speakers and sessions.

Next to learning, there is also the networking aspect of this conference. You'll have the opportunity to connect with Product Managers from Redmond, meet your peers, talk to our user groups and much more. Hope to see you at TechDays!

One of the goals of the FIM product teams is to enable the FIM community to contribute to our troubleshooting documentation set. To do so, Microsoft has started to migrate the core troubleshooting content to the TechNet Wiki platform. They will still track the available articles in the Troubleshooting FIM 2010 Roadmap on the FIM TechCenter. To simplify the process of writing and posting FIM troubleshooting articles, Markus has posted a TechNet Wiki article that includes some guidelines. Please feel free to add to this article if you can think of additional tips and tricks. Feedback is always welcome! Please feel free to spread the word as appropriate.
Microsoft Belgium has developed a .NET Encryption Library for the Belgian eHealth platform. This .NET Encryption Library allows you to develop software solutions that use the eHealth platform services, in particular the end-to-end encryption. In this way you can send sensitive messages over a non-protected carrier. It not only protects against unauthorized access, but also identifies the sender. The .NET Encryption Library is currently available as open source on CodePlex. To provide insight into this .NET Encryption Library, Microsoft is organizing a number of working sessions in February 2011. eHealth is the Belgian Federal Government platform with the goal of bringing all healthcare platforms in Belgium together. For more information see https://www.ehealth.fgov.be.

After the huge success last year, and with the new year freshly started, there is some early news on Community Day. Community Day 2011 will take place on June 23rd 2011. The location will be Utopolis, Mechelen again. This edition will be a bit special, since we celebrate the 5th edition! A few sponsors have already confirmed, but to make it a success we still need additional support. If your company is interested in sponsoring, please let me know. More news to come, but block it in your agenda already!
OPCFW_CODE
MARZHX Computer Software Timer

- Great for coin-operated computer stations.
- Works with any brand of coin slot/selector.
- Uses a serial port interface for the coin slot/selector.
- Still works with motherboards that have no serial port. Just use a USB-to-RS232 converter or an RS232 PCIe card.
- Performs a proper Windows shut-down once the shut-down time is up. This not only shuts down the system unit but also the computer peripherals, which greatly saves on electrical cost, contrary to a conventional digital timer.

PRODUCT FEATURES:
1. Auto-shutdown mechanism when time is up, thus saving on electricity cost.
2. Generates a sound once time is close to zero.
3. Works perfectly with standard electronic coin slots.
4. Time rate can be easily adjusted by the owner with a security password.
5. Built-in Windows locking mechanism. The secure locking mechanism includes:
• Access lock to all drives & folders.
• Hiding drives.
• Access lock to Control Panel and Add/Remove Programs.
• Disables right click on the desktop and taskbar.
• Removes all items from the Start menu except for the Programs menu.
• Access lock to display settings.
• Access lock to Task Manager.
• Task Manager or other utility programs cannot end-task or end-process the software timer.
• Once locked, unauthorized persons can't bypass the timer, even via the options shown when F8 is pressed during Windows boot, such as safe mode, normal mode or directory restore service.
• Disables CTRL+ALT+DEL.
6. Time is saved in a database. In case of a power interruption, the remaining time will be saved and resumed once power is back. No need to do a refund; just wait until power is back.
Automatic inventory count - the software includes a coin counter feature, so you can track all income reports from time to time. This is also saved in its own database.

Auto-send income report to owner's email address - this is the latest feature. The timer now has the capability to automatically send an income report to the owner's email address. To use it, simply register the owner's email address in the timer's control panel and set the scheduled date and time when the timer will automatically send the income report by email.

For a demo and inquiries, please contact Engr. Jonathan Oracoy at 09. You can download the latest installer of our software timer here.
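The power-interruption feature described in item 6 - remaining time persisted to a database and resumed after a power cut - can be sketched like this. This is a hypothetical illustration, not the product's actual code; the table and function names are my own.

```python
import sqlite3

# Hypothetical sketch of the "time survives a power cut" feature:
# the remaining seconds are written to a small database on every
# change, and reloaded on startup so the session resumes instead
# of requiring a refund.

def open_db(path):
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS session "
        "(id INTEGER PRIMARY KEY, seconds_left INTEGER)"
    )
    return db

def save_remaining(db, seconds):
    # single-row upsert: there is only one active session
    db.execute(
        "INSERT OR REPLACE INTO session (id, seconds_left) VALUES (1, ?)",
        (seconds,),
    )
    db.commit()

def load_remaining(db):
    row = db.execute("SELECT seconds_left FROM session WHERE id = 1").fetchone()
    return row[0] if row else 0

db = open_db(":memory:")    # a real timer would use a file on disk
save_remaining(db, 120)     # coin inserted: 120 seconds credited
# ...power is cut here; on reboot the timer reloads the credit...
print(load_remaining(db))   # 120
```

With the database on disk (rather than `:memory:` as in this demo), the stored credit survives the reboot and the timer simply resumes counting down.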
OPCFW_CODE
The relational database approach, where all data is normalized and business logic runs in stored procedures, creates maintainability problems with unbounded risk and cost over time. Every year of continued database operation puts the product and organization further behind. These days we have open source frameworks and libraries, object-relational mapping libraries, and microservices. These approaches enable much better data management with newer patterns.

What needs do databases fulfill?

Zooming out for a second, what are these huge relational databases doing that's so important?

| Need | Old Way | New Way |
| --- | --- | --- |
| Data used in the interface | Web applications use SQL queries to insert data that users need | Web application uses a library that maps objects to persistent storage |
| Temporary data storage for sessions | Create a table, insert new records, and update or remove old ones | Use a memory cache like Redis |
| Events and streams of data | Create a table, insert and delete rows | Message queue with high performance and delivery options |
| Business logic, permissions | Stored procedures that perform various actions | Keep that out of the DB; just store and retrieve data |
| Database servers | Few, multi-tenant, centrally managed | As many as are needed; keep data local to the apps |
| Analytics and Data Science | Reports, temp tables, and stored procedures | ETL to a data lake along with all the other DBs |

Approaches to move from old to new

- Start to do an ETL out of the big DB for reporting
- Applications with business logic in the database should be replaced with a thin REST API
- Applications that use a DB table for event queues should have that functionality moved to a message queue
- Applications that make use of ephemeral data tables or contents should move that content to a memory cache
- Applications that persist application data into the database should have the data for that app moved to a local database, either Postgres or Mongo depending on the type
- After these steps have been taken, reliance on the central database will be reduced to the point that the last few hold-outs can be addressed individually

Keep in mind that an old RDBMS cripples all related software, leaving it unable to adapt to the changing needs out in the world. The coordination cost that comes from having a centralized database with a schema under strict change control is massive. Teams will notice their velocity and quality increase as they reduce their reliance on the database. These teams can suddenly take advantage of CI pipelines that are entirely ephemeral. Security testing can be much more aggressive, because the blast radius of a SQL injection is limited to the environment linked to a single review application; nobody else will be impacted even if the whole database is brought down.

The user experience will improve for the ultimate end-users as well, since each type of persistent data is stored in the most fitting format and operated on as such. A message queue can scale massively when queue depth increases suddenly and shrink back down. A memory cache can load-balance across more nodes as usage increases. The database won't have expensive queries running synchronously unless it's actually important.

In each case, the way the database has evolved and which applications depend on which capabilities will determine what steps to take. The best step is to decommission as much of the old software as possible, but I've personally spent years trying to get 90s software turned off and somehow never had the budget or resources to fill those last few gaps. All else being equal, starting with message queues and a memory cache is likely the biggest architectural improvement that can be made. Alternatively, starting with the ETL process and setting up a proper data lake would shift a lot of the reporting load without impacting the inputs and applications directly.
The data lake also creates a common destination for the message queue, event stream, and other data once it starts flowing and being processed. If a dataset has a place in the data lake, ask what resolution and scope are necessary to support existing reports. Clear input, output, and usage targets help with the shift from DB to queue.
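The "memory cache like Redis" row in the table above boils down to TTL semantics: set a session with an expiry and let the cache forget it, instead of INSERT/UPDATE/DELETE churn against a relational table. A minimal in-process sketch of those SETEX/GET-with-expiry semantics (illustrative only; a real deployment would use Redis itself):

```python
import time

# The "session data in a memory cache" pattern, sketched in-process:
# the same set-with-TTL / get semantics Redis provides, replacing a
# relational table that needs constant insert/update/delete churn.

class TTLCache:
    def __init__(self):
        self._data = {}                # key -> (expires_at, value)

    def setex(self, key, ttl_seconds, value):
        self._data[key] = (time.monotonic() + ttl_seconds, value)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._data[key]        # lazy expiry on read
            return None
        return value

sessions = TTLCache()
sessions.setex("session:abc123", 1800, '{"user_id": 42}')
print(sessions.get("session:abc123"))   # '{"user_id": 42}'
print(sessions.get("session:gone"))     # None: expired or never set
```

The point of the design: expiry is the cache's job, so no cleanup job, no DELETE sweeps, and no schema change control standing between the app team and a session-format tweak.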
OPCFW_CODE
Overview of CENM error codes Corda Enterprise Network Manager can report a number of error codes. The tables below provide details about such error codes for errors related to configuration parsing and validation. - The CENM error codes are listed in the CENM error codes table below. - The RPC Admin API error codes are listed in the RPC Admin API error codes table below. For each error code in the tables, there is additional information about its aliases, the reason why that error occurred, and (for the main CENM errors) instructions on what actions you can take to address the problem reported by the error. - Error code: the error code as reported by the Corda node. - Description: a description of what has gone wrong. - Actions to fix: what actions to take in order to address the problem (only available for the main CENM error codes in the first table below). To make use of this table, search the console or node logs for lines indicating an error has occurred. Errors that have corresponding codes will contain a message with the error code and a link pointing to this page. 
CENM error codes table

| Error code | Description | Actions to fix |
| --- | --- | --- |
| | This error indicates that there were both parsing and validation issues with the config. | Check the additional details and double-check that your configs line up with the most recent CENM documentation. |
| | This error indicates that an error has occurred during configuration parsing. | Check the additional details and double-check that your configs line up with the most recent CENM documentation. |
| | This error indicates that the configuration file that was provided does not exist. | Check that the provided path is correct and that the file is actually there. |
| | This error indicates that the configuration file could not be read. | Make sure you have the rights to read the configuration file. |
| | This error indicates that there were validation issues with the configuration. | Check the additional details and double-check that your configurations line up with the most recent CENM documentation. |
| | This error indicates that a substitution did not resolve to anything. | Check that the entries in your configuration file are specified correctly. |

RPC Admin API error codes table

These error codes are slightly different from the CENM error codes in the table above. They are attached to the RPC errors thrown when using the CENM services' RPC Admin API, and can be accessed via the code property when encountering the error in the RPC client.

| Error code | Description |
| --- | --- |
| | Used when the server throws a more general exception - for example, |
| | Used when the client's request is being marshalled into JSON. If there is a processing exception during that phase, this error type is used. |
| | Used when the server's response (body) cannot be un-marshalled into the given object. This can happen for various reasons - see the attached cause in the |
| | Used when the server's response (body) is missing, and the call requires a response. |
| | Decoding uploaded node info |
| | Parsing uploaded node info or Network Parameters to |
| | Writing node info to disk file failed. |
| | Provided Network Parameters file (argument provided by Angel Service when starting Network Map) doesn't exist. |
| | Provided Network Parameters file (argument provided by Angel Service when starting Network Map) failed parsing or validation. |
| | New advertised Network Parameters update is the same as the existing Network Parameters. |
| | No parameters update found for the given Network Parameters hash, or there are no scheduled updates at all. |
| | Update deadline specified in the advertised parameters update hasn't passed yet. |
| | Advertised parameters update hasn't been signed yet. |
| | Provided legal name |
| | Provided revocation reason is not in the list of supported reasons ( |
| | Certificate to be revoked doesn't exist. |
| | Signing process used for fetching or signing provided data type ( |
| | There was an error during actual material signing. |
| | The function used in the request cannot be recognised. |
| | The message type used in the request cannot be recognised. |
| | The content used in the request cannot be recognised. |
| | Used when the user token is invalid. |
| | Used when a call is made that the user does not have authorisation for. |
| | Sub-zone with provided |
| | Provided configuration to be uploaded failed parsing. |
| | Provided configuration to be uploaded failed user validation rules (non-runtime-dependent ones). |
| | Provided label data to be uploaded failed validation (text is empty or colour is not formatted as an RGB hex code). |

Corda Enterprise error codes

For a list of node error codes in Corda Enterprise, see Node error codes.
OPCFW_CODE
## Changelog for version 5.4.0

Released 29 September 2023

### Major New Features

- Added acceleration support for `eval` when using `ax`.
- Added filter bar to the CBAC Tag Access Summary table.
- Added the ability to edit permissions on a Kit after it has been installed.
- Added the ability to share a Kit with multiple groups after it has been installed.
- Added Alert Context debugging to Flows to aid development of consumer flows.
- Added an optional Timeframe Offset for Scheduled Searches.
- Added wordcloud settings to allow the user to adjust font size when rendering words with very different magnitudes.
- Added Systems & Health pages as options for Home Page preferences.
- Added the ability for JS and Go flow nodes to halt execution.
- Added support for setting the output name in the JSON Encode flow node.
- Added hinting for macros and
- Added other types of C-style comments:
- Added ingest-time EV attach in the global config.
- Improved support for overlapping wildcard tags on well definitions.
- Improved performance in the `ip` module; it now supports multiple filters when using the same EV.

### Web UI Changes

- Allowed gif uploads for Playbook covers.
- Disabled Kit Download when the kit-building form is incomplete.
- Fixed an issue where tooltips were unreadable in the light theme.
- Fixed an issue where Query Studio tab data could leak across user accounts.
- Fixed an issue where a Playbook was still saved even if autosave was disabled.
- Fixed an issue where an admin user was unable to see other users in the filters for Flows.
- Fixed an issue where an admin user was unable to see flows belonging to other users.
- Fixed an issue where a Scheduled Search sometimes could not be saved if only the timeframe was edited.
- Fixed an issue where the Actionables run query action was expanded before executing the query.
- Fixed an issue where Last Run time was not updated for disabled automations (Flows, Scheduled Searches, Scripts).
- Fixed an issue where deleting an uploaded Extractor would not reactively remove it from the list - a refresh was required.
- Fixed an issue where the ingester list page would not show calendar data for all wells.
- Fixed an issue where the topology view would send many preferences requests.
- Improved consistency of time format across charts.
- Improved performance in Query Studio when there are multiple tabs open and extensive search history.
- Improved performance loading Flows when there are many in the system.
- Improved performance loading Extractors when there are many in the system.
- Required Scheduled Search timeframe to be greater than 0s.
- Required the Search capability to be a dependency for the ScheduleWrite capability.
- Added support for multifiltering hints on `ax` derived extractions.
- Added multifiltering acceleration support to the
- Added strict flag to CEF and fixed a malformed-data edge case.
- Changed the `ipexist` module to drop all unsupported IPv6 matches.
- Changed the JSON Encode flow node to output a string instead of a byte array.
- Fixed an edge case in the webserver that could cause a crash.
- Fixed behavior with duplicate search library entries from Kits.
- Fixed an issue with handling the BOM character in fulltext and syslog.
- Fixed an issue with the EV timestamp helper and `eval` duration arithmetic that caused problems casting `time()` on a slice.
- Fixed an issue with error diagnostics on a reserved word in
- Fixed an issue with completions and diagnostics when comments are present.
- Fixed an issue where Get Table Results and Get Text Results would allow negative counts.
- Fixed an issue where the `ipfix` module was not handling MAC addresses correctly with data exploration.
- Fixed an issue where the `time` module wouldn't work when using the `-oformat` flag combined with a string input.
- Fixed various edge cases with
- Improved overview chart tracking of certain queries.
- Improved behavior when dealing with large attachments in the Email flow node.
- Improved handling of webserver startup when the datastore is unreachable.
- Improved handling of loadbalancer startup when the datastore is unreachable.
- Improved performance for the Ingest flow node.
- Fixed an issue that prevented an ingester from exiting if it got into a bad state with both disk and cache being full.
OPCFW_CODE
How do you protect ultrawide-angle lenses with a bulbous front element? I'm talking about lenses like Canon's 11-24mm f/4L and 14mm f/2.8 (both I and II versions), and Nikon's 13mm f/5.6 - take a look at their front elements and how they protrude out of the main lens barrel. Just wondering, aside from the lens cap, how does one go about protecting them, especially in the field and when travelling?

To protect the front element of my Nikkor 14-24 with its protruding front element while actually shooting, I took a high-grade sheet of clear plexiglass and Dremeled a disc out of it roughly 1.5" wider in diameter than the front of the lens, took a 1/4" strip of gaff tape and covered the rough edge of the plexi disc all the way around, then centered it over the end of the lens and gaff-taped it on. Looks kinda ridiculous, but works like a charm. Only caveats: make sure the lens and homemade filter are completely dust- and hair-free before you tape it on, because static makes hair hover in the middle and you can't even shake it to one side. Also, it pretty much negates having a lens hood, so lens flare can be a challenge, although it's caused some groovy flare effects that I could work with. It's a temp filter - it usually gets trashed, because I shoot in the mud a lot and it scuffs up after a while. Hope that helps. Picture of the contraption:

Love DIY solutions, especially in photography where even the simplest parts are expensive.

It will vary from lens to lens, but most manufacturers will provide a lens cap or cover to protect the front element. In the case of the EF 11-24mm f/4 L, the lens cap attaches to the integrated, non-removable lens hood. This provides protection from dust and scratches, and prevents liquid from splashing onto the front of the lens. As with any other lens, for more protection than a lens cap offers, one must consider a well-padded protective case. The Tamrac MX5341 M.A.S.
Pro 50 Lens Case, shown here with an EF 16-35mm f/2.8 L II, has been reported (see review by hotdog321 posted 5/19/2015) to also fit the EF 11-24mm f/4 L. I shoot with fisheye lenses a lot. There really isn't much additional precaution you can take that you wouldn't take for any other lens, minus UV/clear filters. You use a padded bag or lens case, and you use the lens cap. The lens cap is going to be your main point of protection, so you may have to change your habit from removing the cap and stowing it away to keeping the cap with you, and then only uncapping the lens while you use it, if you're in a situation that could harm the lens. You become situationally aware. You keep the lens cap at the ready, and you become very sensitive to how close a working distance you can use with your fisheye/ultrawide lens. Because objects may be closer than they appear through the viewfinder, early on, while you're still learning to use the lens, you will almost inevitably end up bumping into a subject with it. This is what tends to teach you to be situationally aware. :)
Sat 16th Apr 2011, 19:55 $1 per sign up from USA or UK - Minimum payout $1 via PayPal You can earn $1 for each sign up from the USA or UK. Anyone can participate, but in order to earn you should refer people from the USA or UK. I have tried this and have earned some amount (not much, as I haven't referred many). You don't require any credit cards for this. It's free for all. Here is the link. I can display payment proofs too. If you find this post useful can I get some thanks lol (kidding). I will give $0.05 for every person from the USA or UK who joins this site using my link (remember, only USA and UK). Here is the payment proof. Mon 18th Apr 2011, 16:34 Currently, I have some issues with my credit card. I will send payment once it starts working. Fri 24th Feb 2012, 06:33 Update: Unable to send any money to people who register, but surely this program is beneficial for USA and UK people. Fri 24th Feb 2012, 07:23 I have no idea what this site is about based on your description. Might want to start by explaining what it is before you start passing around affiliate links. Fri 24th Feb 2012, 09:57 It's a site where you can get rewards or points by completing free offers and playing games; you even get points for purchasing from sites like Amazon etc. Later you can redeem points for cash or buy something. If you are not interested in offers, just refer friends and earn $1 per referral. Originally Posted by ThirdSEO Check this post: http://pcpedia.blogspot.in/2011/03/l...m-payment.html Please register from my link; at least it would be a reward for me for spreading some good info: Register Fact: To earn some real cash you need to refer people from the USA or UK Sun 4th May 2014, 13:04 I will pay through paypal.com or skrill.com for every registration on my website: $0.50. This is the first payment for registering. You'll be paid monthly after that and you'll get prizes. You'll have a salary in extra time. 100% free.
please contact me first on: donparousa AT gmail DOT com available only in the countries below Tue 3rd Jun 2014, 12:42 Here are some of my latest withdrawals without much effort. Join, refer and earn - Cashle Sat 25th Oct 2014, 13:57 How many of you have tried Cashle? It's awesome, especially for people from the USA, UK and Canada. I can see people purchasing a PS4 and other cool stuff via points2shop for free. It's legit and you should try it. I'm not from the USA; hence, I don't have as many opportunities. Still, it's my passive income.
import binascii
import re


# This mangled "shorthand" output basically throws out exact timings while
# keeping the relative length of each time period. Good for comparing
# received codes against known values.
def mangleIR(data, ignore_errors=False):
    """Mangle a raw Kira data packet into shorthand."""
    try:
        # Packet mangling algorithm inspired by Rex Becket's kirarx Vera plugin.
        # Determine a median value for the timing packets and categorize each
        # timing as longer or shorter than that. This will always work for
        # signals that use pulse width modulation (since varying by long-short
        # is basically the definition of what PWM is). By lucky coincidence this
        # also works with the RC-5/RC-6 encodings used by Philips (Manchester
        # encoding) because time variations of opposite-phase/same-phase are
        # either N or 2*N.
        if isinstance(data, bytes):
            data = data.decode('ascii')
        data = data.strip()
        times = [int(x, 16) for x in data.split()[2:]]
        minTime = min(times[2:-1])
        maxTime = max(times[2:-1])
        margin = (maxTime - minTime) / 2 + minTime
        return ''.join(['S' if x < margin else 'L' for x in times])
    except Exception:
        # Probably a mangled packet.
        if not ignore_errors:
            raise


# Pronto codes provide basically the same information as a Kira code,
# just in a slightly different form. (Kira represents timings in uS
# while pronto uses multiples of the base clock cycle.)
# Thus they can be used for transmission.
def pronto2kira(data):
    """Convert a pronto code to a discrete (single button press) Kira code."""
    octets = [int(x, 16) for x in data.split()]
    preamble = octets[:4]
    convert = lambda x: 1000.0 / (x * 0.241246)
    freq = convert(preamble[1])
    period = 1000000.0 / (freq * 1000.0)
    dataLen = preamble[2]
    res = "K %02X%02X " % (int(freq), dataLen)
    res += " ".join(["%0.4X" % min(0x2000, int(period * x))
                     for x in octets[4:4 + (2 * dataLen)]])
    return res


def mangleNec(code, freq=40):
    """Convert an NEC code to shorthand notation."""
    # Base time is 550 microseconds (the unit of burst time).
    #   lead in pattern:   214d 10b3
    #   "1" burst pattern: 0226 0960
    #   "0" burst pattern: 0226 0258
    #   lead out pattern:  0226 2000
    # There's large disagreement between devices as to a common preamble
    # or the "long" off period for the representation of a binary 1,
    # thus we can't construct a code suitable for transmission without
    # more information -- but it's good enough for creating a shorthand
    # representation for use with recv.
    timings = []
    for octet in bytearray(binascii.unhexlify(code.replace(" ", ""))):
        burst = lambda x: "0226 06AD" if x else "0226 0258"
        for bit in reversed("%08d" % int(bin(octet)[2:])):
            timings.append(burst(int(bit)))
    return mangleIR("K %0X22 214d 10b3 " % freq +
                    " ".join(timings) + " 0226 2000")


def inferCodeType(data):
    # A series of L/S chars.
    if re.match('^[LS]+$', data):
        return 'shorthand'
    # "K " followed by groups of 4 hex chars.
    if re.match("^K ([0-9a-fA-F]{4} )*[0-9a-fA-F]{4}$", data):
        return 'kira'
    # 2 groups of 4 hex chars.
    if re.match("^[0-9a-fA-F]{4} [0-9a-fA-F]{4}$", data):
        return 'nec'
    # Multiple groups of 4 hex chars.
    if re.match("^([0-9a-fA-F]{4} )*[0-9a-fA-F]{4}$", data):
        return 'pronto'


# Convert a code into a raw Kira code that can be transmitted.
def code2kira(code, codeType=None):
    if codeType is None:
        codeType = inferCodeType(code)
    if codeType == "kira":
        return code.strip()
    if codeType == "pronto":
        return pronto2kira(code)


# Convert a code to a form ready for comparison.
def mangleCode(code, codeType=None):
    if codeType is None:
        codeType = inferCodeType(code)
    if codeType == "shorthand":
        return code.strip()
    if codeType == "kira":
        return mangleIR(code)
    if codeType == "pronto":
        return mangleIR(pronto2kira(code))
    if codeType == "nec":
        return mangleNec(code)
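As a quick self-contained illustration of the long/short classification trick described in the comments above (the timing values here are toy numbers of my own, not from a real remote):

```python
# Classify each timing as Short or Long relative to the midpoint between
# the minimum and maximum observed timings -- the PWM property means real
# codes cluster into exactly two groups around that midpoint.
times = [560, 1690, 560, 560, 1690]
margin = (max(times) - min(times)) / 2 + min(times)  # 1125.0
shorthand = ''.join('S' if t < margin else 'L' for t in times)
print(shorthand)  # SLSSL
```

Two received packets can then be compared by simple shorthand string equality even if their absolute timings drift between captures.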
Locate Python Developers Get access to competent freelancers within seconds. It's quick, easy, and we only require a handful of details to get rolling. Want to work instead? A list of changes in R releases is maintained in various "news" files at CRAN. Some highlights are listed below for several major releases. Release Date Description This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center. If this question can be reworded to fit the rules in the help center, please edit the question. Map the feature rank to the index of the column name from the header row of the DataFrame or what have you. Try your hand at computer programming with Creative Coding! Learn how you can get access to hundreds of topic-specific coding projects. What I understand is that in feature selection methods, the label information is usually used for guiding the search for a good feature subset, but in one-class classification problems, all training data belong to just one class. For that reason, I was in search of feature selection implementations for one-class classification. year project'. Then I came across Mr. Avinash through one of my friends. So I contacted him, discussed the projects and sent him the attachments. His team analyzed the Probably, there is no one best set of features for your problem. There are many with varying skill/performance. Look for a set or ensemble of sets that works best for your needs. Rapidly growing men's grooming home delivery service, Dollar Shave Club, trusts CircleCI to deliver quality results for their web and mobile test and deployment processes.
In order to deliver a successful project, we focus on a variety of difficulties that come out of the blue. If you're having difficulties with your Python project, contact us. We provide quick Python Project Help from anywhere. I tried the Feature Importance method, but all of the values of the variables are above 0.05, so does it mean that all the variables have little relation with the predicted value? Actually I was unable to understand the output of chi^2 for feature selection. The problem has been solved now. You can see the scores for each attribute and the four attributes selected (those with the highest scores): plas This chapter is quite broad and you would benefit from reading the chapter in the book in addition to watching the lectures to help it all sink in. You should come back and re-watch these lectures once you have finished a few more chapters.
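Since the thread touches on chi^2 scoring for feature selection, here is a minimal hand-rolled sketch of the statistic itself (the counts are toy numbers of my own; in practice one would use a library implementation such as scikit-learn's SelectKBest with chi2):

```python
def chi2_score(observed, expected):
    """Pearson chi-squared statistic: sum of (O - E)^2 / E over the cells."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Counts of a feature firing in each of two classes, versus the counts
# expected if the feature carried no class information (toy numbers):
observed = [30, 10]
expected = [20, 20]
print(chi2_score(observed, expected))  # 10.0
```

A higher score means the observed counts deviate more from the "uninformative" expectation, which is why the top-k highest-scored features are the ones kept.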
I'm currently updating Wwise to 2019.2.1 and having difficulty rebuilding a plugin I wrote a couple of months ago. There are no changes to the plugin code which is thoroughly tested. It builds without any problems for Windows. I believe the Android build (using build tools) is failing due to the spaces in the Wwise SDK path. When I run the build script I get the following output: C:\Dev\Tools\HornetFilter>python "%WWISEROOT%/Scripts/Build/Plugins/wp.py" build Android -c debug Building HornetFilter for Android in debug... Building Android in debug using ndk-build. Build Command: C:\NVPACK\android-ndk-r17c\ndk-build all -j 8 NDK_PROJECT_PATH=.\ PM5_CONFIG=debug_android_armeabi-v7a NDK_APPLICATION_MK=HornetFilter_Android_application.mk NDK_LIBS_OUT=C:/Program Files (x86)/Audiokinetic/Wwise 2019.2.1.7250/SDK/Android_armeabi-v7a/debug/libs NDK_OUT=C:/Program Files (x86)/Audiokinetic/Wwise 2019.2.1.7250/SDK/Android_armeabi-v7a/debug/lib NDK_APP_OUT=C:/Program Files (x86)/Audiokinetic/Wwise 2019.2.1.7250/SDK/Android_armeabi-v7a TARGET_OUT=C:/Program Files (x86)/Audiokinetic/Wwise 2019.2.1.7250/SDK/Android_armeabi-v7a/debug/lib Android NDK: WARNING: APP_STL gnustl_static is deprecated and will be removed in the next release. Please switch to either c++_static or c++_shared. See https://developer.android.com/ndk/guides/cpp-support.html for more information. Android NDK: WARNING: Deprecated NDK_TOOLCHAIN_VERSION value: 4.9. GCC is no longer supported and will be removed in the next release. See https://android.googlesource.com/platform/ndk/+/master/docs/ClangMigration.md make: *** No rule to make target `Files'. Stop. make: *** Waiting for unfinished jobs.... [armeabi-v7a] Install : libHornetFilter.so => C:/Program/armeabi-v7a/libHornetFilter.so The error "No rule to make target 'Files'" appears to be referring to part of "C:/Program Files (x86)/...." following the first space. The final line shows the built lib being copied to "C:/Program/armeabi...." 
which also looks like a mangling of the Wwise path. The script has indeed created a folder C:\Program and copied files to it. I have looked in the makefiles for a definition of the output paths, but there isn't one, so it must be defined somewhere in the scripts. How can I fix this?
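As a minimal POSIX-shell illustration (mine, not from the build scripts) of why an unquoted path containing spaces falls apart, mimicking what make does with `C:/Program Files (x86)/...`:

```shell
# Word-splitting on an unquoted variable expansion behaves like make's
# handling of an unquoted path: everything after the first space becomes
# a separate word, so the "target" seen is just "C:/Program".
path="C:/Program Files (x86)/Audiokinetic"
set -- $path      # deliberately unquoted
first="$1"
echo "$first"     # prints: C:/Program
echo "$#"         # the path became this many words
```

A common workaround (not specific to Wwise) is to install the SDK under a space-free path, since make itself has no robust quoting mechanism for spaces in target names.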
Why is the root element of MainWindow Window instead of MainWindow? I notice that the root element in any XAML file (in WPF) seems to be one of: Window Page UserControl ResourceDictionary Application I tried to change the root element to local:MainWindow, but then the project cannot compile, saying the base class of a partial class should be the same. Then I guess the root element is the base class of the actual class? What is the reason for it? Since the root element cannot be changed to the actual class, I cannot access the dependency properties written in MainWindow.xaml.cs. How can those DPs be referenced in XAML? Besides, I also notice that some third-party themes also provide special window classes, and in that case, the root element is often changed. How is this being achieved? E.g. GlowWindow from HandyControl You can derive from MainWindow -- as the error says, you need to change the base class both in your .xaml file and in the .xaml.cs file. That said, it would be a very unusual thing to do. The MainWindow is the default name given to the window that's shown when your application starts. Why would you want to derive from it? MainWindow is a Window. What's your actual issue? I tried to change the root element to local:MainWindow, and then the project cannot compile, saying the base class of a partial class should be the same. Then I guess the root element is the base class of the actual class? Yes, this is correct, the root element is the base class that can, but does not necessarily have to be specified in code-behind. The connection between the partial classes (compiled from XAML markup and the code-behind file) is specified using the x:Class attribute in XAML, see Code-Behind and XAML in WPF. The XAML language includes language-level features that make it possible to associate code files with markup files, from the markup file side. 
Specifically, the XAML language defines the language features x:Class Directive, x:Subclass Directive, and x:ClassModifier Directive. [...] The partial class must derive from the type that backs the root element. What is the reason for it? A code-behind file is not mandatory if there is no custom code, see x:Class. In existing programming models that use x:Class, x:Class is optional in the sense that it is entirely valid to have a XAML page that has no code-behind. However, that capability interacts with the build actions as implemented by frameworks that use XAML. Even if there is one, the base class can be omitted, but then again the base class must be determined somehow and that is done through the root element type, see Code-behind, Event Handler, and Partial Class Requirements in WPF. Note that under the default behavior of the markup compile build actions, you can leave the derivation blank in the partial class definition on the code-behind side. The compiled result will assume the page root's backing type to be the basis for the partial class, even if it not specified. How can those DPs be referenced in xaml? Simply use a Binding with a RelativeSource that specifies the MainWindow as AncestorType. {Binding YourDependencyProperty, RelativeSource={RelativeSource AncestorType={x:Type local:MainWindow}}} If you assign an x:Name to your window, you could alternatively use ElementName in the binding. <Window ... x:Name="MyMainWindow"> {Binding YourDependencyProperty, ElementName=MyMainWindow} Besides, I also notice that some third-party themes also provide special Window classes, and in that case, the root element is often changed. How is this being achieved? No it is not. For instance, the GlowWindow is exactly defined like your MainWindow with code-behind and x:Class to refer to it and Window as root element. 
What you see in the link is that a new derivative of the GlowWindow is created, just like you create MainWindow from Window; they just happen to use the same name GlowWindow, unfortunately. Notice the namespaces. Try it yourself and create a new window by specifying your MainWindow as root element. It is exactly the same scenario. I will supplement the answer from @thatguy. In WPF, it is customary to separate the logic part of the control (which is written in C#) from the visual part (which is written in XAML in the theme template). For your example, creating a template is redundant. But it could be done like this:

public class MainWindowBase : Window
{
    public int SomeProperty
    {
        get { return (int)GetValue(SomePropertyProperty); }
        set { SetValue(SomePropertyProperty, value); }
    }

    public static readonly DependencyProperty SomePropertyProperty =
        DependencyProperty.Register("SomeProperty", typeof(int), typeof(MainWindowBase), new PropertyMetadata(0));
}

public partial class MainWindow : MainWindowBase
{
    public MainWindow()
    {
        InitializeComponent();
    }
}

with the XAML root element changed to the new base class:

<local:MainWindowBase x:Class="****.MainWindow"
M: 2012 Salary Guide - Are You Making Enough? - pghimire http://email.rhi.com/PS!qT3gwgpekHAFBgIAAAAGCgFICggxMDQ5ODAxMQoKMTcyNTY1MDQxNAkAEKDCCgotNTYzNjA5Mzk5BQ==?BRANCH_DIV_EMAIL_ADDR=&COUNTRY_CODE=UNK&FIRST_NAME=&INDIV_ID=7430494&PEOPLE_NO=&TREATMENTCODE=000153683 R: pghimire The site has been slow to respond. They are probably getting overwhelmed. Here are the direct AWS PDF links: \- Tech: [http://s3.amazonaws.com/DBM/M3/2011/Downloads/RHT_SalaryGuid...](http://s3.amazonaws.com/DBM/M3/2011/Downloads/RHT_SalaryGuide_2012.pdf) \- Creative Group: [http://s3.amazonaws.com/DBM/M3/2011/Downloads/TCG_SalaryGuid...](http://s3.amazonaws.com/DBM/M3/2011/Downloads/TCG_SalaryGuide_2012.pdf) \- Legal: [http://s3.amazonaws.com/DBM/M3/2011/Downloads/RHL_SalaryGuid...](http://s3.amazonaws.com/DBM/M3/2011/Downloads/RHL_SalaryGuide_2012.pdf) \- Financial: [http://s3.amazonaws.com/DBM/M3/2011/Downloads/RHI_SalaryGuid...](http://s3.amazonaws.com/DBM/M3/2011/Downloads/RHI_SalaryGuide_2012.pdf) \- Office Team: [http://s3.amazonaws.com/DBM/M3/2011/Downloads/OT_SalaryGuide...](http://s3.amazonaws.com/DBM/M3/2011/Downloads/OT_SalaryGuide_2012.pdf)
Migrate your contents from Outlook 2011 to Apple Mail using “Olm Extractor Pro”, and get 100% conversion of all items, information, and meta-data from Outlook 2011. Both Apple Mail and Outlook 2011 have a huge user base. They are perhaps the two biggest email clients that most Mac users use. Considering that, it may be a little disappointing and surprising that there is no easy way to move the data from one to another. But anyone who has ever used the manual drag-and-drop method knows how inaccurate, incomplete, and tedious it is. First of all, it doesn't apply to your contacts and calendar folders. Secondly, you have to drag and drop each email folder individually. Good luck if you have a huge database with countless folders. It doesn't seem feasible to move the contents effectively. And not to mention, the Mbox files generated this way show a huge number of data irregularities, which simply means that the emails stored in Outlook 2011 are not exactly the same in the converted Mbox file. You can find much of the information lost or modified. I received an email yesterday asking for a better solution. That is the primary reason I am writing this post today. Quite honestly, I've received such emails very frequently, and I kept putting off writing this article. But luckily, today is the day. The email says - “I have many work related emails in my home Mac stored in Apple Mail. But my office uses Outlook 2011. I want to move some of the emails from Apple Mail to Outlook 2011 in the office computer. I certainly can't ask the IT guys to do this for me. So I have to find a way myself. I'd appreciate it a lot if you can suggest me some way. A colleague of mine suggested that I drag and drop the email folders that I want to move onto the Mac desktop from Outlook 2011, but it didn't work. The Mbox files that I generated didn't contain the original images from Outlook 2011. And some of the attachments were also lost.
Please help.” First of all, I am not surprised at all at reading this message. It is very common for people to try to convert the Outlook 2011 data to Mbox files using the drag and drop method, but very rarely does it work. I've seen much data loss; sometimes Outlook 2011 even gets corrupted in the process. So I'd always suggest staying away from it. But the good news is that there is a much better, more professional, and effective way of going about this. There is a tool called “Olm Extractor Pro” and it is a complete solution for all your Outlook for Mac conversion needs. Converting Outlook 2011 to Apple Mail is just one of its functions. You can convert Outlook 2011 to other formats, such as Eml, Rge, Thunderbird, Entourage, and Postbox. But I will restrict myself to the main topic of this article. “Olm Extractor Pro” is certainly the best way to move Outlook 2011 contents. The major factor contributing to the tool's premium quality is its ease of use and user-friendly nature. The second factor is its ability to convert the data thoroughly. You won't find any lost attachments, broken images, broken non-English characters, etc. Everything will remain exactly the same when you import the converted Mbox files to Apple Mail. If you want to try the tool first without buying, you can do that here: http://www.uslsoftware.com/olm-to-mbox-converter-for-mac/ . The free version has a limitation of 10 conversions per folder, which is more than enough to give you a fair idea of how it works.
[Readability Issue] Using __all__ vs. wildcards I have noticed that a lot of __init__.py files in the repo include

# pylint: disable=wildcard-import
from <...> import *
# pylint: enable=wildcard-import

I believe a more appropriate approach would be to use __all__. According to the PEP 8 documentation: To better support introspection, modules should explicitly declare the names in their public API using the __all__ attribute. Setting __all__ to an empty list indicates that the module has no public API. @drpngx is in the process of sealing our public interface via __all__ and some other mechanisms. Actually, there is another problem, and I am not sure if I should open a new issue (in case the introduction of __all__ fixes it) -- it is a little more severe: A lot of files disable pylint messages, but don't enable them at the end -- this potentially might cause a missed warning if several files are imported, because # pylint: disable is a global disable. Here are all the files that modify the pylint Here is an example file that doesn't revert the modifications Probably a separate issue for that one. If you want to try to help contribute fixes for those, that would be awesome. Yes, it's a long story :-) We are using __all__, but this has issues, too. So we're starting to roll out remove_undocumented. Here's how it works in a nutshell: __all__ only works one level down remove_undocumented actually deletes symbols Splitting between interface and implementation So, for instance, in tensorflow/python/__init__.py, we do

from ...standard_ops import *
from ...platform import app

The __all__ in standard_ops works to filter out anything that comes in via the *. But app doesn't, so you still have tf.app.print_function and a whole slew of other things. We're rolling out these changes. Stay tuned.
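To make the __all__ filtering behavior concrete, here is a small self-contained sketch (the module and function names are invented for illustration, not TensorFlow code):

```python
import os
import sys
import tempfile
import textwrap

# Write a throwaway module whose __all__ exposes only one of two functions.
pkgdir = tempfile.mkdtemp()
with open(os.path.join(pkgdir, "standard_ops_demo.py"), "w") as f:
    f.write(textwrap.dedent("""
        __all__ = ["public_fn"]      # only this name survives `import *`
        def public_fn():
            return "public"
        def private_fn():
            return "private"
    """))
sys.path.insert(0, pkgdir)

# A star import into a fresh namespace picks up only the names in __all__.
namespace = {}
exec("from standard_ops_demo import *", namespace)
print("public_fn" in namespace)    # True
print("private_fn" in namespace)   # False
```

This is the "one level down" behavior mentioned above: the star import respects __all__, but a plain `import` of a submodule still exposes everything inside it.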
@zafartahirov nice find, please send a PR fixing the missing ones Yeah, I would love to contribute to the fixes, but I have a paper due the end of the month -- will try to do it over the weekend, no promises though 😄 . If this issue is still open by Saturday, I will send a PR for the current issue as well as #5309 To be clear, I think we have a solution for the __all__, but feel free to send a PR for just one file if you have ideas and we can take it from there. I was trying to see how this issue might get resolved without hurting the maintainability and without breaking the package integrity, and nothing comes to mind -- sorry, have nothing to contribute in here. I think I have found the two solutions that are being "tried" in the current master, I am not sure which one you guys are inclined to use: make_all resolution in the tensorflow/python/__init__.py Personally I would just make sure the files that are not supposed to be imported start with _ to indicate the protection, and use glob.glob('./[a-zA-Z]*.py') to get all the public files. After small preprocessing the paths could be converted to the import strings. Protected _ and private __ files would be a special case, and would have to be imported/included manually. The problem with this approach is that it is non-standard and requires a lot of rewriting. What is the current standard at TF for in-package imports? Thanks for looking into this! We want to retain some of the ease and flexibility that we have, while disallowing accidental imports. Here's what we do: We document symbols that we should use, with @@foo We use remove_undocumented to delete all symbols from "selected" modules/packages We have a whitelist "Selected" modules/packages are those that are imported into the tf namespace, for instance: app, image, nn, etc., transitively on down. For instance, tf.app.flags. These packages have a choice of (first one chosen to be the predominant solution): Have no implementation, just from .. import * or from .. import Foo. They typically have a counterpart, called _impl, for instance, gradients_impl.py Have implementation, but use underscore for all packages used solely for implementation, as you suggest Here is an example module using the first variant, dubbed dual API/implementation:

gradients.py: (API module)

"""Gradients doc

@@some_func
"""
from .. import remove_undocumented
from ..gradients_impl import *

remove_undocumented(__name__, ["some_other_func"])

gradients_impl.py: (implementation module)

import some_module
from ... import some_useful_module

def some_func():
    pass

def some_other_func():
    pass

# No remove_undocumented.

foo.py: (internal implementation module which requires some functions defined in gradients_impl; we make sure that foo.py is never imported in the tf namespace, directly or transitively)

from tensorflow...import gradients_impl

def foo(x):
    return gradients_impl.my_internally_useful_function(x + 1)

There are no import * in gradients_impl. Here's an example module using the second variant (dubbed the fused API/implementation):

logging.py:

"""Logging things

@@print_stderr
"""
import sys as _sys

def print_stderr(msg):
    _sys.stderr.write(msg)

remove_undocumented(__name__, [])

To recap (after the change is fully complete): As a tensorflow user (minus a handful of exceptions such as tfdbg), you are only allowed to use import tensorflow as tf As a tensorflow developer, you are encouraged to use from tensorflow... import ... in your modules As a tensorflow developer, you are not allowed to use import * in your implementation modules, only in API modules So, more or less our internal code remains the same, except that we have to be careful in a handful of modules (perhaps 20 or so) As a tensorflow user, you no longer have access to internal modules that have not been explicitly exported via a whitelist or a documentation reference. HTH Thank you very much for a detailed explanation. Is this information anywhere in the documentation?
No, we don't have design docs out there. It might be good to have somewhere close to the style guide, though. Ultimately, if you mimic our code, then it should work; if you don't, then we should be able to catch that automatically, at some point. I think we can close this issue due to a) No new comments in the past 2+ months b) This is not really an issue, and the documentation is work in progress.
Oracle creating Database link I am using Oracle 11g Express Edition. I have created tables and stored procedures and they work fine. I have my user "System" with password "xyz" (the main user during installation). Then I created two databases "abc" and "pqr" with the same user. I wanted to create a database link from abc to pqr.

create database link testlink connect to pqr identified by xyz
using '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))
(CONNECT_DATA=(sid=xe)))';

I am getting the error "Insufficient privileges". Please help me out. Did you mean you created two users instead of databases? If so, check whether the user creating the database link has the CREATE DATABASE LINK system privilege and the connecting user has the CREATE SESSION system privilege. It should be CONNECT TO username, not the database name, as shown in the following image which describes the syntax of CREATE DATABASE LINK. We define the database instance/service under the USING connect_string clause. Prerequisites To create a private database link, you must have the CREATE DATABASE LINK system privilege. To create a public database link, you must have the CREATE PUBLIC DATABASE LINK system privilege. Also, you must have the CREATE SESSION system privilege on the remote Oracle database. Reference: CREATE DATABASE LINK Demo

[oracle@orcl Desktop]$ sqlplus system/oracle
SQL> create user abc identified by abc;
User created.
SQL> create user xyz identified by xyz;
User created.
SQL> grant create session to abc;
Grant succeeded.
SQL> conn abc/abc
Connected.
SQL> create database link testlink connect to pqr identified by pqr using '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=orcl.dba.com)(PORT=1522)) (CONNECT_DATA=(service=orcl)))';
create database link testlink connect to pqr identified by pqr using '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=orcl.dba.com)(PORT=1522)) (CONNECT_DATA=(service=orcl)))'
*
ERROR at line 1:
ORA-01031: insufficient privileges
SQL> conn system/oracle
Connected.
SQL> grant create database link to abc;
Grant succeeded.
SQL> conn abc/abc
Connected.
SQL> create database link testlink connect to pqr identified by pqr using '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=orcl.dba.com)(PORT=1522)) (CONNECT_DATA=(service=orcl)))';
Database link created.

Yes, when I try to execute grant create database link I get the same error message. How do I give the system privilege? Connect as the sys user with the sysdba privilege, then try to grant the required privileges. For example: sqlplus sys as sysdba then SQL> grant ... The main issue here is that I cannot log in as the sys user either. I tried sys and sys and other combinations too. @poonam: Then connect as the SYSTEM user and grant CREATE DATABASE LINK to the abc user. Next, connect as the abc user and create the database link. Refer to the demo. Public database links are a security vulnerability. Use them with caution.
Links to useful free security related tools. Tools that don’t fit elsewhere. - GCHQ CyberChef – The Cyber Swiss Army Knife. - hash-id – hash-id is a command line program for identifying hash types based on Zion3R’s implementation. - socialhunter – Crawls the given URL and finds broken social media links that can be hijacked. Tools for looking up file hashes. - CIRCL Hash Lookup – Lookup hash against databases of known files. API available. Malware sandboxes, virus scanners and more. - Virus Total – Upload a file or submit a URL for checking against multiple AV products. - Hybrid Analysis – Malware sandbox. - ANY.RUN – Malware sandbox. Free lookups subject to limitations. - CAPE Sandbox – Malware sandbox. Service seems to be down currently. - Dragonfly – Automated malware sandbox. Dragonfly is unique in that it is built over different emulation engines and allows customization of entire operating systems and the rules used to hunt malware. Post Exploitation Tools - LaZagne (Windows) – Dump various passwords such as from web browsers, email clients and more. - Chainsaw (Windows) – Quickly identify threats using event logs with Sigma detection rules. - The Logfile Navigator (lnav) – The Log File Navigator, lnav for short, is an advanced log file viewer for the small-scale. It is a terminal application that can understand your log files and make it easy for you to find problems with little to no setup. - Angle Grinder – Angle-grinder allows you to parse, aggregate, sum, average, min/max, percentile, and sort your data. You can see it, live-updating, in your terminal. Angle grinder is designed for when, for whatever reason, you don’t have your data in graphite/honeycomb/kibana/sumologic/splunk/etc. but still want to be able to do sophisticated analytics. - Sysmon for Linux – Linux version of sysmon tool by Microsoft to aid in discovering compromised systems. Released very recently. 
- OWASP Amass Project – The OWASP Amass Project performs network mapping of attack surfaces and external asset discovery using open source information gathering and active reconnaissance techniques.
- Smap – Drop-in replacement for nmap using the free Shodan API.
- Smithproxy – Smithproxy is a highly configurable, fast and transparent TCP/UDP/TLS (SSL) proxy written in C++17.
- Linux Cat Scale – Linux CatScale is a bash script that uses living-off-the-land tools to collect extensive data from Linux based hosts. The data aims to help DFIR professionals triage and scope incidents. An Elk Stack instance is also configured to consume the output and assist the analysis process.
- dnstwister – Search for typo squatting, potential phishing domains and potential IP violations of your domains.
- dnstwist – Similar to the above dnstwister web based tool.
- Cert Eagle – Monitor certificate transparency logs for domains/subdomains and receive alerts.
- Certstream – Monitor certificates being issued from certificate transparency logs in real time.
- nrich – A command-line tool to quickly analyze all IPs in a file and see which ones have open ports/vulnerabilities. Can also be fed data from stdin to be used in a data pipeline.
- ipinfo-cli – Command line tool to make lookups to the ipinfo API.
- subfinder – DNS subdomain discovery tool.
- Brida – Brida is a Burp Suite Extension that, working as a bridge between Burp Suite and Frida, lets you use and manipulate applications’ own methods while tampering with the traffic exchanged between the applications and their back-end services/servers. It supports all platforms supported by Frida (Windows, macOS, Linux, iOS, Android, and QNX).
- Interactsh – Interactsh is an open-source solution for out-of-band data extraction, a tool designed to detect bugs that cause external interactions, for example blind SQLi, blind CMDi, SSRF, etc.
- HOUDINI – Collection of various Docker images for penetration testing and auditing.
Exfil, C&C and Connectivity - dnscat2 – Tunnel traffic via DNS. Unlike Iodine and other such tools its primary purpose is for C&C. - Global Socket (gsocket) – Establish an encrypted TCP connection between two endpoints even with one or both endpoints being behind a firewall or NAT. - Fast Reverse Proxy (frp) – Expose a server behind NAT and/or a firewall to the internet. Supports both TCP and UDP.
“SQL, Lisp, and Haskell are the only programming languages that I’ve seen where one spends more time thinking than typing.” – Philip Greenspun

Even though SQL (Structured Query Language) involves more thinking than typing, we software engineers tend to use it only as a way to pull data. We usually don’t leverage SQL’s power of data manipulation, and instead make the needed changes in code. This is quite prevalent among software engineers who work on web applications. This post aims to enlighten you about the powers of SQL you may know of but often don’t use. Use SQL to do math like sums, averages and more. Utilize it for grouping one-to-many relational values, like getting the categories of a product. Leverage SQL for string manipulation, like using CONCAT_WS for concatenating a first name and a last name. Exploit SQL to sort by a custom priority formula. Examples below…

The Example #

It will be easier to explain the superpowers of SQL by putting them into action on an example. Below is a basic schema with 2 tables in MySQL for a refunds microservice: There are 2 refunds and 7 related payments as example records.

Some assumptions #

For the refunds microservice example schema and applications, the following assumptions are made:
- The refunds microservice and data structure store the fk_item (the id of the ordered/delivered item), but it is not a hard foreign key.
- An item can be refunded in either cash or credit, up to the amount paid for it.
- Items can be refunded many times as long as the remaining balance can cover the requested refund amount for each of cash and credit. For example, an item was paid 50 in cash and 50 in credit. 2 refunds of 20 cash and 20 credit can be executed. So after these transactions the balance will be 10 cash and 10 credit for that item (50-20-20).
- Each refund can have multiple item payments.
Each payment can be of type either cash or credit.
- All amounts are stored in cents, so they are integers.

Now let’s use some SQL powers. You can find the example with related queries running on SQL Fiddle.

Do the math in SQL #

As software engineers, if for instance we need to find the total cash and credit amount refunded for an item, what would we do? We would run something like:

SELECT fk_item, fk_refund, amount, is_cash FROM payment WHERE fk_item=2001;

With the current data, this would give three rows like below: With these three rows, we would loop over them. If it is cash, accumulate it into a cashBalance variable; if not, sum it up into a creditBalance variable. Rather than that, it can be a lot easier (and probably faster) to do it in SQL:

SELECT fk_item, SUM(amount) AS total_paid, IF(is_cash = 1, 'cash', 'credit') AS type FROM payment WHERE fk_item = 2001 GROUP BY fk_item, is_cash;

The result is easy now; if you need the total refund for the item, just change the GROUP BY to be on fk_item alone and it’s done. For 2 or 3 records this won’t feel significant. If there were, say, 20 refunds for that item, the first solution with a loop means writing more code with no gain. Like sum, other SQL functions can be used too. Simple math operations like sum, multiply, average etc. are easy with SQL. This means no more loops.

Use GROUP_CONCAT to fetch related 1:m relation values #

GROUP_CONCAT is a powerful operation in SQL databases. It is very useful when you need to get data from a one-to-many relationship. For instance, you want to get all tags for a blog post, or you want to get all categories of a product. Concerning this refunds example, one item can be refunded multiple times. So we will get all the refunds associated with the item id.
To get this we will run just one query, with no loops in the code, like below:

SELECT fk_item, GROUP_CONCAT(DISTINCT fk_refund) refund_ids FROM payment WHERE fk_item = 2001;

This results in: Now we know that item 2001 has been refunded twice across 2 refunds. It will be easy to explode the refund ids on , and proceed with any related operation.

String manipulation #

Many string manipulation tasks like substring, concatenation, changing case, and string comparison can be done in SQL. With this example, I’m going to show the usage of CONCAT_WS. It is concat with a separator. It can also be used to select, for instance, first_name and last_name with a space in between. In case of an optional middle name, COALESCE can be used with CONCAT_WS. That is something for you to explore :). In this example, I’ll select refund_nr with its related reason:

SELECT CONCAT_WS("-", refund_nr, reason) AS refund_nr_with_reason FROM refund;

If this needs to be shown on the credit note document, for example, no extra code is required to join the values again. SQL makes it one step easier again.

Sorting with a custom formula #

All software engineers know you can sort based on a column. But if you’re given a custom priority formula to sort by, what would you do? Probably again resort back to code and a loop to sort. So let’s set the priority formula rules for the above example:
- Premium customer refunds get the highest priority (we hack it with a priority of 9999999999)
- Other than premium customers, cash refunds get a priority of amount * 25; for credit it is amount * 20.

As per the above rules, it is decided that premium customers and priorities above 50000 (in cents) will be processed first. Then other refunds will be processed.
Let’s get the priority refunds as below:

SELECT r.refund_nr, r.reason, p.fk_item, p.amount, p.is_cash, IF(p.premium_customer = 1, 9999999999, p.amount * (IF(is_cash = 1, 25, 20))) AS priority FROM refund AS r INNER JOIN payment AS p ON r.id = p.fk_refund HAVING priority > 50000 ORDER BY priority DESC

The results are below: With proper use of IF in SQL, sorting by a custom priority formula is a lot easier than trying to do it with loops in code. Notice that even smaller amounts like 7.5 (750 cents) and 9.0 (900 cents) came out at the highest priority, as those refund payments were associated with premium customers. Use the superpowers of SQL to make your life easier as a software engineer. You can play with the example and run your own queries on SQL Fiddle. There are other tricks of SQL that can help you as a software engineer, like ON DUPLICATE KEY UPDATE. Whenever you have an itch to do some manipulation in code, with loops, on data pulled in from the database, think again. The main takeaway from this story is: exploit the power of SQL to write less code, because “the best code is the code that was never written”. If it isn’t written, there is no need to maintain it.
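The three patterns above can be reproduced on any SQL engine. Here is a minimal, self-contained sketch using SQLite from Python (the table layout follows the example schema, but the rows are made up, and SQLite uses CASE and GROUP_CONCAT where MySQL has IF and GROUP_CONCAT):

```python
import sqlite3

# In-memory stand-in for the refunds schema; one `payment` table is enough.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE payment ("
    " fk_item INT, fk_refund INT, amount INT, is_cash INT, premium_customer INT)"
)
cur.executemany(
    "INSERT INTO payment VALUES (?, ?, ?, ?, ?)",
    [(2001, 1, 2000, 1, 0), (2001, 1, 1500, 0, 0), (2001, 2, 1000, 1, 1)],
)

# 1) Math in SQL: cash/credit totals per item, no loop in application code.
cur.execute("""
    SELECT fk_item, SUM(amount) AS total_paid,
           CASE WHEN is_cash = 1 THEN 'cash' ELSE 'credit' END AS type
    FROM payment WHERE fk_item = 2001
    GROUP BY fk_item, is_cash
""")
print(cur.fetchall())

# 2) GROUP_CONCAT: all refund ids touching the item, in one query.
cur.execute("""
    SELECT fk_item, GROUP_CONCAT(DISTINCT fk_refund) AS refund_ids
    FROM payment WHERE fk_item = 2001
""")
print(cur.fetchall())

# 3) Custom priority formula, expressed with CASE and used for ORDER BY.
cur.execute("""
    SELECT fk_refund, amount,
           CASE WHEN premium_customer = 1 THEN 9999999999
                ELSE amount * (CASE WHEN is_cash = 1 THEN 25 ELSE 20 END)
           END AS priority
    FROM payment
    ORDER BY priority DESC
""")
print(cur.fetchall())
```

The premium row sorts first regardless of its smaller amount, which is exactly the behavior described above.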
Chad Hobson ’17 and Zach Betterton ’17 spent last semester building one device after another: a traffic light with a button pedestrians can push for a safe crossing. A fan that can make a golf ball hover at a steady height. A set of sensors that graph real-time temperature data on a computer screen. The crowning achievement of their electronics class was a robot that can “hear” its way around obstacles in a simple maze. They joked about nicknaming the robot “Wall-E” because its ultrasonic sensor resembles the eyes of the robot character from a popular Pixar film. The sensor connects to an Arduino circuit board that controls several motors mounted to a chassis the students took out of an old robot. Ordinarily, the robot goes straight forward. When the ultrasonic sensor detects an object within one foot ahead, the robot pauses and the sensor turns to the left and the right to determine which direction might be clear. Throughout the semester, their challenges helped them learn about teamwork, problem solving, and how to apply what they have learned in the classroom to impact something in the physical world. Physics professor Dr. William Roach created a course on electronics to help students combine what they learn about basic circuits in one class and programming in another. The class explored “How you make software talk to hardware to actually do something useful, and how you go about automating a task,” he said. Chad has worked on research projects with computer programming, but he had not built a device before. “Most programming I’ve done has all been about generating some numbers on the screen,” he said. “Typing some code and actually doing some physical work was really cool to me.” Zach, who plans to become a teacher, was grateful for the chance to work on a hands-on project that he could use in his own classroom someday. Being one of two students in the course, he also valued the opportunity to work closely with Dr.
Roach and learn more about developing a curriculum. The project tested the students’ ability to solve problems. “The first step when we got a project each week was sitting down and figuring out what you need to code,” Zach said. “The Arduino will do a lot of things, but you’ve got to tell it everything it needs to do.” Chad said that working with Zach helped him improve his ability to work on a team — something that internships have taught him will be very important in real-world applications. “No matter where you go, you’re going to work in a team,” he said. “If you go into a job where you would be doing automation, everything that’s going to be automated now is difficult to automate, so it requires teams. This was a good chance to learn how to program alongside someone else working on the same thing.” During the fall semester, Zach and Chad got the robot to work in a very simple maze made with a few textbooks propped up on a table. In a complicated labyrinth, the robot likely would get lost or perhaps turn around and exit where it entered. This semester, Zach is working to enhance the robot with maze-solving algorithms, allowing it to recall its path and find its way through a full maze.
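The article doesn't reproduce the students' code, but the decision logic it describes is compact enough to sketch in plain C++. The speed-of-sound constant, the one-foot threshold, and the function names below are assumptions for illustration, not the students' actual Arduino sketch:

```cpp
#include <cassert>
#include <string>

// An HC-SR04-style ultrasonic sensor reports the round-trip echo time in
// microseconds. Sound travels roughly 0.0343 cm per microsecond, and the
// pulse covers the distance twice (out and back), hence the divide by two.
double echoToCm(unsigned long echoMicros) {
    return echoMicros * 0.0343 / 2.0;
}

// Decide what the robot should do: keep going forward, or, when something
// is within one foot (~30.48 cm), pause and turn toward whichever side the
// scanning sensor reports as more open.
std::string decide(double obstacleCm, double leftCm, double rightCm) {
    const double kThresholdCm = 30.48;  // one foot
    if (obstacleCm >= kThresholdCm) return "forward";
    return (leftCm >= rightCm) ? "turn-left" : "turn-right";
}
```

On the real robot these values would come from pulse timing on the Arduino, and the returned decision would drive the motor controller.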
use downstream config to compile ephemeral models

Describe the feature
I'm building ephemeral models with a lot of transformations. Ideally, I'd like to apply the is_incremental() statement within ephemeral models at specific places, with the logic applied or not based on downstream model configurations. I tuned the is_incremental() macro but wasn't able to extract downstream information when compiling ephemeral models.

Describe alternatives you've considered
Being able to call downstream model configurations and relations in ephemeral models.

Additional context
Who will this benefit? Would benefit anyone trying to replace CTEs with ephemeral models. Are you interested in contributing this feature?

is_incremental() tuned:

{% macro is_incremental(downstream_relation = none, downstream_config = none) %}
    {#-- do not run introspective queries in parsing #}
    {% if not execute %}
        {{ return(False) }}
    {% elif model.config.materialized == 'ephemeral' %}
        {{ return(downstream_relation is not none
            and downstream_relation.type == 'table'
            and downstream_config.materialized == 'incremental'
            and not flags.FULL_REFRESH) }}
    {% else %}
        {% set relation = adapter.get_relation(this.database, this.schema, this.table) %}
        {{ return(relation is not none
            and relation.type == 'table'
            and model.config.materialized == 'incremental'
            and not flags.FULL_REFRESH) }}
    {% endif %}
{% endmacro %}

Original macro:

{% macro is_incremental() %}
    {#-- do not run introspective queries in parsing #}
    {% if not execute %}
        {{ return(False) }}
    {% else %}
        {% set relation = adapter.get_relation(this.database, this.schema, this.table) %}
        {{ return(relation is not none
            and relation.type == 'table'
            and model.config.materialized == 'incremental'
            and not flags.FULL_REFRESH) }}
    {% endif %}
{% endmacro %}

Here I need to retrieve downstream_relation and downstream_config each time an ephemeral model is compiled.
We could probably do this in a macro but it's not really user friendly.

@lambert-lemanh-cko I've wanted to do exactly this on several previous occasions. I've felt this pain, wanting to split out incremental model logic into more modular components. That said, I don't think it's something we should do. When a model wants to know something about one of its parents—a config value, the list of columns, the number of rows in the database, whatever—it should use the ref() function as its first pointer to that parent model. Whereas, if a model wants to know something about a downstream model, and tries to ref() it, it would result in a circular dependency... as it should. Nodes in a DAG cannot know anything about the nodes that depend on them downstream, else the graph is really cyclic. So I think your options in this case are:

1. Wrap the ephemeral model reference with an is_incremental() filter within your incremental model, and see if your database optimizer is smart enough to push down the predicate, despite the (nested) CTEs. Ideally, it is.
2. Use a macro.

> We could probably do this in a macro but it's not really user friendly.

I hear you on preferring to define core model logic in model SQL, rather than in a macro. Still, a macro accurately captures what you're after: you have model SQL that you want to repurpose, with dynamic inputs and modifications each time. I'm going to close this issue, only because I don't see us making a dbt code change. I'm interested to keep the conversation going, and to see if we can come up with more compelling answers here.

Hi @jtcohen6, thanks for your answer. Yes, unfortunately Snowflake is not smart enough to apply the filter at the beginning. It will end up scanning the whole dataset. Fair enough. I was thinking of adding optional arguments in the {{ ref() }} macro to pass information through to upstream models. Or maybe if {% set %} could be defined in a model but reused in an upstream/downstream model.
> I was thinking of adding optional arguments in the {{ ref() }} macro to pass information through to upstream models. Or maybe if {% set %} could be defined in a model but reused in an upstream/downstream model.

You can certainly use static run-level information, such as vars or flags.FULL_REFRESH, to inform models at every stage along the DAG. An upstream ephemeral model could check the full-refresh status to determine its filtering. But even that will miss some edge cases, such as if it's a standard (incremental) run, but the downstream incremental model's associated table is missing from the database.

Ideally we need a variable dynamically defined by the downstream model and passed through to the ephemeral model when it is compiled. Could we not have something like this:

select * from {{ ref('ephemeral_model', config_arg=model.config, relation=this) }}

{{ config_arg }} and {{ relation }} can then be used when compiling the ephemeral_model. This should also not create cyclic relationships.
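For the record, the macro workaround suggested in the thread could look something like the sketch below. The model name `stg_refunds` and column `updated_at` are invented for illustration; the key point is that because a macro body is expanded inside the calling model, `is_incremental()` and `this` resolve against that downstream model:

```sql
-- macros/refund_transformations.sql (illustrative sketch)
{% macro refund_transformations() %}
    select *
    from {{ ref('stg_refunds') }}
    {% if is_incremental() %}
    -- expanded in the calling model's context, so this filter only
    -- appears when that model is materialized as incremental
    where updated_at > (select max(updated_at) from {{ this }})
    {% endif %}
{% endmacro %}

-- models/refunds.sql
{{ config(materialized='incremental') }}
select * from ({{ refund_transformations() }}) as base
```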
Xpra: Ticket #1670: html5 2.3 updates Follow up from #1581 Tue, 24 Oct 2017 16:12:45 GMT - Antoine Martin: status, milestone changed changed from new to assigned changed from 2.2 to 2.3 Link to commit log: log/xpra/trunk/src/html5. - r17697: ensure the server sends BGR(A) input to the webp encoder (we can't swap channels with the HTML5 client) Fri, 19 Jan 2018 13:54:10 GMT - Antoine Martin: description changed Sat, 20 Jan 2018 15:47:21 GMT - Antoine Martin: description changed - r18080 adds a top bar where we dock system tray icons. - r18206 smarter window placement, add space at the top for the top-bar - r18084 fix painting of windows with transparency (backported in r18085) - hide the top bar by default, add a connect page option to show it - add arrow handles to show and hide it - automatically show it when a new tray icon gets docked, automatically hide it when the last one is removed? - add configuration menus, just like the other clients - "default" icons would be nice to have, see r18191 Thu, 01 Mar 2018 04:07:47 GMT - Antoine Martin: Mon, 19 Mar 2018 12:10:21 GMT - Antoine Martin: - minor tweaks: r18724, r18734 - allow pathname to be specified: r18725 + r18739, useful when using a proxy, see wiki/Nginx - top bar, see #1471 - handle switching tabs and back: r18733 - better pointer position handling: r18766 (prevents it from lingering on the last window position) Thu, 22 Mar 2018 04:30:07 GMT - Antoine Martin: Will follow up in #1788. Sun, 01 Apr 2018 13:26:53 GMT - Antoine Martin: owner, status changed changed from Antoine Martin to J. Max Mena changed from assigned to new Mostly a FYI. You must be handling the tab switching refresh rate management already, right? Some of the other items are worth a look too. 
Tue, 10 Apr 2018 07:15:17 GMT - Antoine Martin: Wed, 16 May 2018 06:33:03 GMT - Antoine Martin: Follow up for 2.4: #1788 Fri, 01 Jun 2018 11:46:54 GMT - Antoine Martin: status changed; resolution set changed from new to closed set to worksforme Sat, 23 Jan 2021 05:30:39 GMT - migration script: this ticket has been moved to: https://github.com/Xpra-org/xpra/issues/1670
The advantages of solution-adaptive refinement, when used properly as in the turbine cascade example in Section 26.1.1, are significant. However, this capability must be used carefully to avoid certain pitfalls. Some guidelines for proper usage of solution-adaptive refinement are as follows: The surface mesh must be fine enough to adequately represent the important features of the geometry. For example, it would be bad practice to place too few nodes on the surface of a highly-curved airfoil, and then use solution refinement to add nodes on the surface. The surface will always contain the facets contained in the initial mesh, regardless of the additional nodes introduced by refinement. The initial mesh should contain sufficient cells to capture the essential features of the flow field. Consider the following example, in which you want to predict the shock forming around a bluff body in supersonic flow. To obtain a reasonable first solution, the initial mesh should contain enough cells and also have sufficient resolution to represent the shape of the body. Subsequent gradient adaption can be used to sharpen the shock and to establish a grid-independent solution. Polyhedral cells are not eligible for adaption. The presence of polyhedral cells in a mesh may or may not limit the eligibility of other cells for adaption, depending on the manner in which the polyhedral cells were created: If the domain was converted to polyhedra (see Section 6.7.1), then no part of the mesh can be adapted (even if hexahedral cells are present in the mesh after conversion). If the polyhedra are a result of converting skewed tetrahedral cells (see Section 6.7.2) or converting the transitional cells of a hexcore mesh (see Section 31.5.2), then the nonpolyhedral cells may be adapted. The polyhedral cells, however, will be automatically unmarked from the register when adaption is initiated and will remain unchanged. Obtain a reasonably well-converged solution before performing an adaption. 
If you adapt to an incorrect solution, cells will be added in the wrong region of the flow. Use careful judgment in deciding how well to converge the solution before adapting, because there is a trade-off between adapting too early to an unconverged solution and wasting time by continuing to iterate when the solution is not changing significantly. This does not directly apply to dynamic adaption, because here the solution is adapted either at every iteration or at every time-step, depending on which solver is being used. Write a case and data file before starting the adaption process. If you generate an undesirable mesh, you can restart the process with the saved files. This does not directly apply to dynamic adaption, because here the solution is adapted either at every iteration or at every time-step, depending on which solver is being used. Select suitable variables when performing gradient adaption. For some flows, the choice is clear. For instance, adapting on gradients of pressure is a good criterion for refining in the region of shock waves. In most incompressible flows, however, it makes little sense to refine on pressure gradients. A more suitable parameter in an incompressible flow might be mean velocity gradients. If the flow feature of interest is a turbulent shear flow, it will be important to resolve the gradients of turbulent kinetic energy and turbulent energy dissipation, so these might be appropriate refinement variables. In reacting flows, temperature or concentration (or mole or mass fraction) of reacting species might be appropriate. Do not over-refine a particular region of the solution domain. It causes very large gradients in cell volume. Such poor adaption practice can adversely affect the accuracy of the solution.
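As a toy illustration of how a gradient criterion drives marking (a generic sketch of the idea, not FLUENT's actual adaption algorithm), cells can be flagged wherever the local gradient exceeds a fraction of the field's maximum gradient:

```python
def mark_for_refinement(field, fraction=0.5):
    """Mark cells of a 1-D field whose absolute gradient (forward
    differences) exceeds `fraction` of the maximum gradient."""
    grads = [abs(field[i + 1] - field[i]) for i in range(len(field) - 1)]
    threshold = fraction * max(grads)
    return [g >= threshold for g in grads]

# A pressure trace with one sharp jump, a crude stand-in for a shock:
pressure = [1.0, 1.01, 1.02, 5.0, 5.01, 5.02]
print(mark_for_refinement(pressure))
```

Only the cell spanning the jump is marked, which is why pressure gradients work well for shocks but poorly for incompressible flows, where other variables (velocity gradients, turbulence quantities, species concentrations) carry the interesting structure.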
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Xamarin.CommunityToolkit.Effects;
using Xamarin.Forms;
using Xamarin.Forms.Xaml;

namespace MenuBottomSample
{
    [XamlCompilation(XamlCompilationOptions.Compile)]
    public partial class MenuBottomPage : ContentPage
    {
        private TaskCompletionSource<String> _taskCompletionSource;

        public MenuBottomPage(string title, List<String> actions, List<String> redActions)
        {
            InitializeComponent();
            this.BackgroundColor = Color.Transparent;

            // Tapped event for when the user taps outside the menu:
            // complete the pending task with null so ShowMenu can pop the modal.
            var closeGestureRecognizer = new TapGestureRecognizer();
            closeGestureRecognizer.Tapped += (s, e) =>
            {
                _taskCompletionSource?.TrySetResult(null);
            };

            // Main grid of the page
            Grid gridMain = new Grid()
            {
                VerticalOptions = LayoutOptions.FillAndExpand,
                HorizontalOptions = LayoutOptions.FillAndExpand,
            };

            // Semi-transparent BoxView that dims the background behind the menu
            BoxView boxView = new BoxView()
            {
                VerticalOptions = LayoutOptions.FillAndExpand,
                HorizontalOptions = LayoutOptions.FillAndExpand,
                BackgroundColor = Color.FromHex("#20000000")
            };
            boxView.GestureRecognizers.Add(closeGestureRecognizer);

            // Grid that lays out the menu
            Grid gridMenu = new Grid()
            {
                RowDefinitions =
                {
                    new RowDefinition { Height = new GridLength(0, GridUnitType.Auto) },
                    new RowDefinition { Height = new GridLength(0, GridUnitType.Auto) },
                },
                ColumnDefinitions = { new ColumnDefinition() },
                Margin = new Thickness(8),
                ColumnSpacing = 0,
                VerticalOptions = LayoutOptions.End,
                HorizontalOptions = LayoutOptions.FillAndExpand,
                Padding = 0
            };

            // Frame for the action menu
            Frame frameMenu = new Frame()
            {
                VerticalOptions = LayoutOptions.CenterAndExpand,
                BackgroundColor = Color.FromHex("#FFF"),
                CornerRadius = 8,
                Opacity = 0.9,
                Padding = new Thickness(0)
            };
            // Grid for the menu items
            Grid gridMenuItems = new Grid()
            {
                Margin = new Thickness(0),
                RowSpacing = 0
            };

            // Build the row definitions for the menu: one row per entry plus
            // one for its separator line, including the title and its line.
            RowDefinitionCollection definitions = new RowDefinitionCollection();
            for (int i = 0; i < actions.Count + redActions.Count + 1; i++)
            {
                definitions.Add(new RowDefinition() { Height = new GridLength(0, GridUnitType.Auto) });
                definitions.Add(new RowDefinition() { Height = new GridLength(0, GridUnitType.Auto) });
            }
            gridMenuItems.RowDefinitions = definitions;

            // Layout for the title
            StackLayout titleStackLayout = new StackLayout()
            {
                HorizontalOptions = LayoutOptions.FillAndExpand,
                Spacing = 0,
                Padding = new Thickness(0, 4)
            };
            Label titleLabel = new Label()
            {
                Text = title,
                FontSize = 16,
                TextColor = Color.Gray,
                FontAttributes = FontAttributes.Bold,
                HorizontalOptions = LayoutOptions.CenterAndExpand,
                HorizontalTextAlignment = TextAlignment.Center,
                VerticalTextAlignment = TextAlignment.Center
            };
            titleLabel.GestureRecognizers.Add(closeGestureRecognizer);
            titleStackLayout.Children.Add(titleLabel);
            gridMenuItems.Children.Add(titleStackLayout);
            gridMenuItems.Children.Add(BoxLine(), 0, 1);

            // Draw the menu items
            int row = 2;
            foreach (var text in actions)
            {
                gridMenuItems.Children.Add(MenuItem(text, Color.Blue), 0, row++);
                gridMenuItems.Children.Add(BoxLine(), 0, row++);
            }
            foreach (var text in redActions)
            {
                gridMenuItems.Children.Add(MenuItem(text, Color.Red), 0, row++);
                gridMenuItems.Children.Add(BoxLine(), 0, row++);
            }

            // Draw the cancel menu
            Frame frameCancel = new Frame()
            {
                CornerRadius = 8,
                Padding = new Thickness(0),
                Margin = new Thickness(0)
            };
            frameCancel.Content = MenuItemCancel("Cancel");

            // Assemble the page content
            frameMenu.Content = gridMenuItems;
            gridMenu.Children.Add(frameMenu);
            gridMenu.Children.Add(frameCancel, 0, 1);
            gridMain.Children.Add(boxView);
            gridMain.Children.Add(gridMenu);
            this.Content = gridMain;
        }

        private Task<String> GetValue()
        {
            _taskCompletionSource = new TaskCompletionSource<String>();
            return _taskCompletionSource.Task;
        }

        public static async Task<String> ShowMenu(INavigation navigation, string title, List<String> action, List<String> redActions)
        {
            var viewModel = new MenuBottomPage(title, action, redActions);
            await navigation.PushModalAsync(viewModel, false);
            var value = await viewModel.GetValue();
            await navigation.PopModalAsync(false);
            return value;
        }

        private BoxView BoxLine()
        {
            return new BoxView()
            {
                BackgroundColor = Color.Gray,
                HorizontalOptions = LayoutOptions.FillAndExpand,
                HeightRequest = 0.1,
            };
        }

        private StackLayout MenuItem(string text, Color textColor)
        {
            var tapGestureRecognizer = new TapGestureRecognizer();
            tapGestureRecognizer.Tapped += async (s, e) =>
            {
                var stack = (StackLayout)s;
                stack.Opacity = 0;
                stack.BackgroundColor = Color.FromHex("#f5f5f5");
                await stack.FadeTo(1, 100);
                var item = (Label)stack.Children.First();
                // Completing the task lets ShowMenu pop the modal and return the choice.
                _taskCompletionSource.SetResult(item.Text);
                _taskCompletionSource = null;
            };
            var stackLayout = new StackLayout()
            {
                HorizontalOptions = LayoutOptions.FillAndExpand,
                Padding = new Thickness(0, 12),
                Spacing = 0
            };
            var label = new Label()
            {
                Text = text,
                FontSize = 18,
                HorizontalOptions = LayoutOptions.CenterAndExpand,
                HorizontalTextAlignment = TextAlignment.Center,
                VerticalOptions = LayoutOptions.CenterAndExpand,
                TextColor = textColor
            };
            stackLayout.GestureRecognizers.Add(tapGestureRecognizer);
            stackLayout.Children.Add(label);
            return stackLayout;
        }

        private StackLayout MenuItemCancel(string text)
        {
            var tapGestureRecognizer = new TapGestureRecognizer();
            tapGestureRecognizer.Tapped += async (s, e) =>
            {
                var stack = (StackLayout)s;
                stack.Opacity = 0;
                stack.BackgroundColor = Color.FromHex("#f5f5f5");
                await stack.FadeTo(1, 100);
                // Complete with null instead of popping here, so the single
                // PopModalAsync in ShowMenu handles dismissal and the caller's
                // await actually returns on cancel.
                _taskCompletionSource?.TrySetResult(null);
            };
            var stackLayout = new StackLayout()
            {
                HorizontalOptions = LayoutOptions.FillAndExpand,
                VerticalOptions = LayoutOptions.CenterAndExpand,
                Padding = new Thickness(0, 12),
                Spacing = 0
            };
            var label = new Label()
            {
                Text = text,
                FontSize = 18,
                HorizontalOptions = LayoutOptions.CenterAndExpand,
                HorizontalTextAlignment = TextAlignment.Center,
                VerticalOptions = LayoutOptions.CenterAndExpand,
                TextColor = Color.Blue
            };
            stackLayout.GestureRecognizers.Add(tapGestureRecognizer);
            stackLayout.Children.Add(label);
            return stackLayout;
        }
    }
}
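Calling the menu from a page might look like the following sketch (the page title and action labels are made up; `ShowMenu` resolves with the label of the tapped item):

```csharp
// Somewhere inside a ContentPage (Navigation is the page's navigation proxy):
var actions = new List<string> { "Edit", "Share" };   // shown in blue
var redActions = new List<string> { "Delete" };       // shown in red

string chosen = await MenuBottomPage.ShowMenu(
    Navigation, "Document options", actions, redActions);

// `chosen` holds the label of the tapped menu item.
```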
What do you organize around if our goal is to organize around customer value? Well, it’s not super clear to start off with. You have candidates. So, if there are common services or feature sets or products or something like that, that you can naturally group into, that’s often a pretty obvious pattern. There are still, a lot of times, dependencies and coordination and things like that. But if you’re in a relatively small organization, or there are parts of your app that can be carved off into really customer-identifiable segments, cool, go do that. A lot of the big organizations we’re working with, a couple of Fortune 10 kinds of companies, are huge: tens of thousands of people interacting in very, very complex systems. And so the inclination, if you can’t get to a product intuitively, is to start maybe hunting a value stream. Well, the problem with organizing around value streams is that often the value streams intersect with your organization and the technology in ways that actually create even more dependencies. And so, the goal of agile transformation is to really create autonomous units that don’t have dependencies between them. That’s the holy grail of where we’re heading. And so, we want to organize in a way that gives us the possibility of ultimately transforming into a value-stream or product-aligned organization, because right now, often we’re in a project-aligned organization. And so, oftentimes the starting place for organizational structure is to look at the business architecture using a technique called business capability modeling. In a nutshell, what we’re looking for, so we’ll come in early into an engagement and we’ll do a business architecture, business capability analysis of your organization. And what we’re hunting, the ideal candidate for a Scrum team, in a Scrum-based or SAFe-based organization, is the intersection of dedicated people, a singular business problem, and dedicated technology.
It can be all kinds of different things depending upon the organization. But think about it: when I talk about a Scrum team needing to be six to eight people, I want to have everything and everybody necessary to do the work, I want to have few dependencies, I have to have a dedicated team, I have to have ownership of the technology, and there has to be, I'm going to say, a clear backlog. But that clear backlog has to be focused on something that a product owner can own. That's the ideal; that's the holy grail here. Very seldom in large organizations, right out of the gate, is that a product or a value stream. It's just usually not. And so, the thing that we hunt, like I said, is typically the business capability. So, what you can do is start to look at your organization as a map of different kinds of business capabilities. And those business capabilities can basically decompose infinitely, but we can start to look at all of the different business capabilities, the people that are supporting them, how they're performing, where the risk is, all that kind of stuff. And this is a way most organizations are actually used to looking at themselves; a lot work with big consultancies and things like that. They don't always know quite what to do with their business capability models, but they understand that in order to do something for this user out here, I need something from that business capability, and something from that one, and that one, and that one, and that one. And the user actually takes a journey through all of these different parts to get anything out of it that they actually want. And so, just as a quick aside, one of the interesting things about doing this analysis is that you'll often find that a lot of organizations have a lot of duplicated business capabilities.
And so, they'll have a business capability, and sometimes those business capabilities are supported by multiple technology platforms, multiple teams, things like that. So, one of the advantages of looking at your organization through a business capability lens is being able to find places to drive efficiency. If you know that you've got a business capability over here that's in effect the same as a business capability over there, oftentimes you can group those into a singular team, rationalize the technology architecture, rationalize the staffing, all that kind of stuff. And you end up with one business capability that's doing something for the entire organization. That's a possibility. So, often, what ends up happening is that as you're creating these product owner teams, they are interfacing with other business-capability-focused teams. So, you end up with this structure that I've been talking about that's largely focused around business capabilities. And that often is a really good starting place in a large, complex organization to start to get work to flow.
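The duplicate-hunting step described here can be sketched as a tiny script: given an inventory of (capability, team, platform) entries, group them by normalized capability name and flag any capability that more than one team supports. This is only an illustrative sketch; the capability names, teams, and platforms below are entirely made up, not taken from any real engagement.

```python
from collections import defaultdict

# Hypothetical inventory: (capability name, owning team, tech platform).
# In practice this would come from a business capability analysis.
capabilities = [
    ("Customer Billing", "Team Alpha", "Mainframe"),
    ("customer billing", "Team Beta", "Billing-SaaS"),
    ("Order Intake", "Team Gamma", "OrderSvc"),
    ("Customer Billing", "Team Delta", "Billing-SaaS"),
]

def find_duplicates(entries):
    """Group entries by normalized capability name; keep groups with >1 owner."""
    groups = defaultdict(list)
    for name, team, platform in entries:
        groups[name.strip().lower()].append((team, platform))
    return {name: owners for name, owners in groups.items() if len(owners) > 1}

dupes = find_duplicates(capabilities)
for name, owners in dupes.items():
    print(f"'{name}' is supported by {len(owners)} teams: {owners}")
```

Each flagged group is a candidate for the rationalization step described above: consolidate the owning teams and platforms into a single capability-focused team.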