Document
stringlengths
395
24.5k
Source
stringclasses
6 values
Starting AWS architecture: auto-scaling Auto-scaling is a key AWS Cloud (or any Cloud) architecture principle. It is the process of elastically & dynamically changing the number of servers available for an application based on the users demands. I’m relatively new to cloud architecture, so I’d be delighted with any feedback on whether I have got things right (or wrong for that matter!). This is a sanitized version of a proposal we’ve made to improve the user experience at a client while simultaneously lowering costs by using auto-scaling. The client as part of their operations have teams in different parts of the world. As a result there are some servers that are heavily utilized for different core hour sets. We proposed switching from a large server instance to a fleet of smaller instances & utilizing load balancing & auto scaling to optimize both user experience & cost. This should greatly improve user experience?—?as the large server was grinding to a halt at times?—?but we’re unsure until we test usage patterns if this will result in any more than a modest nett fall in AWS billing. The load balancer ($30) will be a new billable item, AWS auto-scaling is a free item. The basis of the proposal was to make the architecture more complex (a single server being used is pretty simple!) but no more costly & to deliver greatly enhanced user experience. The client had previously scaled up servers (moved to bigger ones) to try to alleviate the usage problems. But in tech terms demand expands to fill the available supply & all that. So the proposal was to stop scaling up to larger instances, but to actually scale down to smaller server instances & use astandard AWS Cloud pattern of utilizing Amazon Cloudwatch to monitor for occasions when CPU maxes out over a prolonged period & spin up new instances (based on pre-built Amazon Machine Instances (AMIs)) when the alarms are raised. Before shutting down server instances to save money when demand falls. Using auto-scaling will allow us to use smaller server instances so that at quiet times there will be a lower bill & at busy times there will be better user experience. CPU Utilization when auto-scaling to one or more small servers (made up numbers!) This contrasts with the standard scale up approach where a server instance keeps getting bigger in an attempt to provide service at all times, but at quiet times the server basically idles along. Single large server CPU utilization (made up numbers!) Thanks for reading this small foray into AWS architecture. I am the APN Alliance lead for Agidea (www.agidea.uk), a technology consultancy based in Manchester with several years of experience developing .Net applications on AWS. I’ve been tasked with getting Agidea promoted to the Select Partner Tier over the next three month & we’re on target! I have some AWS certifications & accreditations & I’m working towards a Solutions Architect certification: AWS Certified Cloud Practitioner AWS Accredited Business Professional AWS Accredited Technical Professional AWS Accedited TCO & Cloud Economics This was a sanitized version of a proposal to improve the user experience at a client while simultaneously lowering costs by using auto-scaling. Hope it helps.
OPCFW_CODE
Hi, I have been trying to use Solutions for better management of Flows. For instance, packing 5 different Flows built for an application, into a Solution. Let's say I now want to deploy the solution from dev to test environment. Everything works fine the first time, when target environment doesn't have this solution yet (simple export -> import). Problems start when I edit 1 of the Flows in Dev and then would like to apply that update to Test env. I changed content of a 'compose' action and tried 2 approaches: 1) Publish customizations -> export solution from dev -> import solution in test The whole process seems to go fine, my existing solution in Test gets a new version number. But the 'compose' action has not changed at all. It's still the old version. 2) Instead of exporting the whole solution: export just the Flow I had edited from the Solution in Dev. - There is no option to import the new version of existing Flow into the solution from a .zip file - If I try to import it as a normal Flow and select 'Update' - it doesn't see Flows in Solutions, so I can't select the one I want - If I import it as a normal 'new' flow, I can then add it to the solution, but it doesn't replace the one that is already there, but creates a new one. So this doesn't seem to be the correct process at all. What is the best practice in such case? Is there no way to update an existing solution with changes from different environment so it applies all changes inside a Flow? I have run into the same thing. This seems to be a limitation on the way solutions are handled at this time. I do believe the product team (or someone) is working on a process to automate the deployments using DevOps CICD pipelines. This will require the Flows to be in the solution. As of right now. Me and my organization stay away from putting flows into a solution unless we have to: (Common data service current environment). Even in that case we can create the flow in the solution, than export out so it is not in one. If you like my post please hit the "Thumbs Up" -- If my post solved your issue please "Mark as a Solution" to help others Thanks Josh, if that's the case, using Solutions would make no sense. But managing tens (or more) various Flows for different purposes in one long list, without even grouping those, is not good either. I'm still curious of experiences of other community members. I am running into this exact issue right now. I make changes in my DEV environment and upon importing, nothing is changed despite the modified column indicating it is new. I hope you've found a solution to this since your last post or if not, hopefully someone has a solution that I am unaware of. Could you please test this again. Save the Flow in the Dev environment to get the new runtime. and try to export and import. Make sure to publish the changes. When the flow is in a managed solution it doesn't work. I tried yesterday before posting here. I deleted a send email v2 action and replaced it with a send from shared mailbox action. I saved the flow published the managed solution and imported it to production. The changes were not reflected in the production managed solution flow. I tired both options of update and overwrite. I had to delete the solution in production and reimport to see the changes. I was only able to address a similar issue after following the step mentioned here: Solved: Re: App is not updated after I move the solution t... - Power Platform Community (microsoft.... to remove the 'unmanaged' layer in a child flow. My solution more than a couple of child flows and connections, but it appears that only one had the problematic 'active' layer listed. The first Microsoft-sponsored Power Platform Conference is coming in September. 100+ speakers, 150+ sessions, and what's new and next for Power Platform. Announcing a new way to share your feedback with the Power Automate Team. Learn to digitize and optimize business processes and connect all your applications to share data in real time. Join Priya Kodukula and the licensing team, super users and MVPs to find answers to your questions on Power Automate licensing.
OPCFW_CODE
Recommendation To Be taught At Work You Will not Discover Elsewhere Steady studying performs a basic half within the long-lasting success of the world’s most influential corporations. Take Amazon, Apple, or Pixar for instance. These corporations are studying machines—their cultures are constructed on a basis of studying. Studying is what helps workers attain their full potential; it’s what retains them motivated, engaged, and impressed by their very own upward trajectory. In case you haven’t heard of LIFOW, right here’s the rundown: studying within the stream of labor is an idea born from each the need for steady studying and the problem of doing so in demanding and fast-paced work environments. Expertise has propelled studying at work far past the CD-ROM-based coaching and primitive Studying Administration Methods of the previous. LIFOW right now means harnessing the powers of studying software program and collaborating with colleagues to search out prompt assist, retain related data, and change into extra educated and succesful workers. Workers are already self-motivated to be taught. Those that are proactive with their very own studying profit from elevated productiveness, decrease stress, and real satisfaction with the method and outcomes of efficiently studying within the stream of labor. Studying turns into half and parcel of labor itself with accessible tech designed for studying and collaboration amongst groups to grasp related expertise. That is the important thing to a wholesome and sustainable studying tradition. Maintain studying to search out 3 items of recommendation for studying within the stream of labor that you just may not discover elsewhere. LIFOW: 3 Intelligent Strategies 1. Handle Vitality, Not Time You’ve possible heard of the Pomodoro method, a time administration technique for parsing work into intervals, historically 25 minutes in size, separated by quick breaks. The Pomodoro method is usually a useful method to construction your studying classes, but it surely’s not the be-all, end-all of time administration. It’s extra necessary to handle power than it’s to handle time. Time and power are finite assets, however power is what lets you use time successfully. You possibly can all the time discover further pockets of time in your day, however when you’re mentally and bodily exhausted, studying might be troublesome, if not downright inconceivable. The subsequent time somebody commits to a studying initiative, as a substitute of setting a timer for 25 minutes, have them decide to studying one thing after which take a break after they begin to really feel their power waning. This could possibly be after 10 minutes; it could possibly be after two hours. The objective is to push simply to the purpose of diminishing returns, after which take a break earlier than power ranges dip too low. This studying technique helps workers be extra productive in shorter bursts and stop burnout. Practices like these make it very clear to see the place individuals really feel energized, and extra importantly, the place they don’t. Encourage your groups to watch the ebb and stream of their very own power all through their workday, workweek, and over the course of long-term initiatives. Have candid conversations about what gave them power and what took power away, and also you’ll be capable to assist them map their power over these timescales. Then, collectively, you can begin to rebuild their workflows round a job and cadence that maximizes their power, as a substitute of attempting to fill each second of their time on the clock. Optimizing for time is a one-way ticket to frustration, stress, and burnout. Optimize for power to make studying part of everybody’s every day routine. 2. Educate What You Don’t Know The most effective methods to be taught one thing is to show it to another person. Trainer coaching packages usually use this methodology, but it surely’s simply as efficient for anybody seeking to be taught one thing new. While you’re educating, you’re compelled to prepare your ideas and clarify issues in a manner that’s simple to know. This course of helps you determine gaps in your individual understanding and information, which you’ll then fill by doing further analysis or studying by yourself. This studying technique forces you to confront imposter syndrome head-on. In case you’re apprehensive about not being “certified” to show another person one thing, that’s an indication that you should be taught extra concerning the matter your self. However educating itself may help expose these gaps in your information when you’re courageous sufficient to be corrected. There may be somebody in your organization who has the precise information another person wants. Create alternatives for newcomers to share their working information of a enterprise perform together with your in-house consultants. Worst case: the consultants present the instructor the errors of their methods, and everybody advantages from the train. Greatest case: the newcomer teaches the consultants one thing new or revolutionary, and your organization takes an amazing leap ahead. Consultants can educate newbies, and consultants can educate one another, however take into consideration how your latest and entry-level workers may be capable to shed new gentle on matters your veteran workforce members might need ignored. In case you bake human connection and peer-to-peer educating into your studying initiatives, your groups can carry out higher than ever earlier than. 3. Particular person And Cohort Studying At Curious Lion, we champion cohort studying experiences as a extremely efficient means of remodeling groups into studying machines. However inside the on a regular basis grind of labor, typically it’s infeasible to have a breakout session, group assembly, or cohort-based course to be taught what’s wanted to get issues executed. Google is commonly your finest good friend. It’s necessary to search out alternatives for deep exploration of what’s fascinating and related to you personally. To solidify that information, although, it’s smart to share with a colleague, current findings to your workforce whenever you discover a answer or one thing outstanding, and/or use manufactured dialogue areas in order that the learner received’t neglect. Doing so creates studying leaders out of workers; they’ll showcase their means to be taught on a person degree after which convey everybody up a degree by sharing their in-the-flow studying and serving to others maintain on to new data for the long run. Cohorts assist to search out the others alongside the identical studying journey. They’re glorious for sustaining accountability, synthesizing studying by bouncing concepts off each other, connecting deeply on emotional ranges with groups, and realizing that studying has been occurring all of the whereas even when it didn’t really feel prefer it when individuals have been researching and finding out all by their lonesome. Studying within the stream of labor is crucial for corporations to maintain up with the ever-changing calls for of the trendy office and change into studying machines. You’ll set your self and your workforce up for lasting success by managing power and never time, educating what you don’t know but, and harnessing the ability of particular person and cohort studying. What are your ideas on studying within the stream of labor? Begin a dialogue together with your colleagues, mates, or household about it. And when you’re searching for extra recommendations on creating a robust office tradition, take a look at our weblog. We write about these items on a regular basis. Initially revealed at curiouslionlearning.com.
OPCFW_CODE
A couple of days ago, Microsoft and other companies recommended that people work from home (if they can) due to the Corona disease (COVID-19). Since I am part of a remote team, I work mostly from home when I am not traveling, and so let me share my home office setup 2020 with you. I did share my home office setup already in 2018 after we just moved. Since then, I have upgraded my home office with a couple of new things, which I believe make working from home even more productive and enjoyable. This is it, this is my Home Office Setup in 2020 Here is a quick view at my desk setup: - I am pretty sure that the most essential piece of every desk setup is the desk. The desk I got from muuv.ch made out of bamboo, and it lets me convert it to a standing desk, and It also has some cool features for cable management. - My Main machine today ist the 15-inch Surface Laptop 3 attached to a Dell curved-ultrawide monitor (Dell UltraSharp 38 Monitor – U3818DW). To make out the most of an ultra-widescreen, I recommend that you check out the FancyZones feature in the PowerToys. - As my secondary devices, I also use a Surface Pro X, which I mostly use on the road when I need a real mobile work machine. I also have a couple of other devices from the company, which I use to access so some of the corporate resources or run some test builds. The new Surface Pro X is also convenient when I used the Surface Pen to draw something quickly or as a timer for webinars and webcasts. - I am using a lot of Surface accessories like the Surface Precision Mouse (which is by far my favorite mouse), the Surface Pen, the Surface Dial, and the Microsoft Modern Keyboard with FringerprintID and Windows Hello support. - The clock under my screen is a LaMetric Time smart clock, which is internet-connected and allows you to install a couple of different apps. - For my audio and video setup, I am using the super comfortable Surface Headphones, a Blue Yeti Microphone, and a Logitech BRIO webcam. - I also added a couple of IKEA shelves to get some additional storage. - For the light, I am using a Philips Hue. I am a massive fan of the LED light stripes, which allow me to added some indirect light in the room. Hue not just lets me change the color, it also allows me to change the light temperature based on the time of the day. Networking and lab: A couple of days ago, I upgraded my networking gear from an AmpliFi home router and access point to an UniFi Dream Machine from Ubiquiti. The reasons for that are that the Dream Machines gives you a couple of more advanced features and security options, but still in a simple and elegant looking box that you can keep in the living room. As you know, I work a lot of Azure, Windows Server, Hyper-V, and Azure Hybrid services. To make this all work, I need a small on-prem server is necessary. A while ago, I decided to build a Windows Server lab using an Intel NUC. The great thing about the Intel NUC is that it can run up to 64GB of memory, and most of the time, you can’t hear any fans, except for patch-day ;). With that, you had a sneak peek of my home office setup in 2020, how does yours look like? Feel free to respond on Twitter or here in the blog comments with some pictures of your home office setup.2018, Desk, Desk Setup, Dream Machine, Home, Home Office, Home Office Setup, Microsoft, Office, PowerToys, setup, Surface, UniFi, Work, Working Last modified: March 15, 2020
OPCFW_CODE
import Base from './../../Base'; import GameboardConfig from './../../Config/GameboardConfig'; import GameOverWindow from './../Windows/GameOverWindow'; import WinWindow from './../Windows/WinWindow'; import PauseWindow from './../Windows/PauseWindow'; import GameboardState from './../../Models/GameboardState'; import { ColorSettings } from './../../Config/Config'; import Grid from './Grid'; export default abstract class GameboardUI extends Base { protected gameboardConfig: GameboardConfig; protected header: Phaser.Text; protected points: number; protected timer: Phaser.Timer; protected timerMessage: Phaser.Text; protected pausedWindow: PauseWindow; constructor(gameboardConfig: GameboardConfig) { super(); this.gameboardConfig = gameboardConfig; } drawBackground() { this.tools.graphic.addBackground(); let backId = this.gameboardConfig.mainTile.power.backgroundId; return this.tools.sprite.createBackground(backId); } create(timer: Phaser.Timer, pauseCallback: any) { this.points = 0; this.timer = timer; this.addHeader(); this.addMenuButton(pauseCallback); this.addTimer(); } changeTimerColor(color) { if (this.timerMessage) { this.timerMessage.tint = color; } } showMessage( message: string, size: number, color = ColorSettings.TEXT, delay = 1500 ) { let text = this.tools.text.makeXBounded( 650, message, size, 'center', color, true ); this.tools.tween.vanishAndDestroy( text, { alpha: 0 }, delay, 'Linear', delay ); } pause(callbackFunction: any) { this.pausedWindow = new PauseWindow( this.gameboardConfig.mainTile, () => callbackFunction(), function() { this.tools.transition.toLoaderConfig('MainMenu', this.gameboardConfig); }.bind(this) ); } unpause() { this.pausedWindow.hideAndDestroy(); } winScreen(nextState: string) { new WinWindow(this.gameboardConfig.mainTile, () => this.tools.transition.toLoaderConfig( nextState, this.gameboardConfig, null, false ) ); } gameOverScreen(gameState: GameboardState) { new GameOverWindow( this.gameboardConfig.mainTile, () => this.tools.transition.restartState(this.gameboardConfig), () => this.tools.transition.toLoaderConfig('MainMenu', this.gameboardConfig) ); } protected addMenuButton(callbackFunction: any) { let menu = this.tools.sprite.createSprite(840, 30, 'menu', 0.8); menu.inputEnabled = true; menu.events.onInputDown.add(() => callbackFunction()); this.tools.tween.appear(menu); } protected addHeader() { this.header = this.tools.text.make(20, 40, '', 50); this.tools.tween.appear(this.header); this.updateHeader(); } protected updateHeader() { this.header.setText(`Score: ${this.points}`); } protected addTimer() { this.timerMessage = this.tools.text.make(20, 100, 'Time: 00:00', 50); this.tools.tween.appear(this.timerMessage).onComplete.addOnce( function() { this.timer.start(); }.bind(this) ); } protected updateTimer() { if (this.timer) { let min = Math.floor(this.timer.seconds / 60); let sec = Math.floor(this.timer.seconds - min * 60); this.timerMessage.setText(`Time: ${this.num(min)}:${this.num(sec)}`); } } private num(n) { return n > 9 ? '' + n : '0' + n; } update(grid: Grid) { this.points = grid.points; this.updateHeader(); this.updateTimer(); }}
STACK_EDU
I drilled down in the ArduinoCore-avr/libraries/Wire at master · arduino/ArduinoCore-avr · GitHub repo to twi.c, and then clicked on 'pulls' to see if there were any pull requests for the while() loop fixes. However, the 'Pulls' link takes me to Pull requests · arduino/ArduinoCore-avr · GitHub, which AFAICT has nothing to do with twi.c. That page shows you all pull requests to the repository. There's no way to see only pull requests for a specific file. In this case, considering there are only 13 pull requests total, it's easy enough to just scan through the whole list. It is completely unbelievable to me that there haven't been numerous attempts to fix this in the past, but I can't find anything- am I missing something? As I said before. Arduino AVR Boards used to be part of the Arduino IDE repository. It was only recently moved to its own repository. So you need to search the pull requests in the Arduino IDE repository also: I reviewed some of the 'avr' pull requests, but realized I don't know how to interpret them. Some appear to be very old 10 months old at most. If you think that's old, wait until you look at the ones in the Arduino IDE repository! and appear to have never been reviewed, let alone merged into the master repo. Not every pull request will be merged. There are only a few Arduino employees with the power to merge pull requests and they have a lot of work on their plate. Unfortunately Arduino has recently been adding a lot of new projects of questionable value like cloud based closed source tools. These take up the developers time instead of fixing known problems and accepting improvements to the existing software. At the same time, there is always progress, even if it's slower than we would like. Contributions from the community are valuable. Sometimes a PR or bug report will sit, seemingly forgotten for years, then out of the blue it's resolved. I have submitted obvious, non-controversial, pull requests that sit in Arduino repositories for long periods of time but I've also had pull requests merged. Even pull requests that aren't merged can be valuable because other users may find them and use them as a reference for their own code. I saw only one 'closed' request, but can't figure out whether this meant the pull request was rejected or accepted/merged. You can see this at a glance from the little icon on the left of the pull request listing. The green icon means it's open and unmerged. The purple icon means it was merged and closed. The red icon means it was closed and unmerged. Unmerged closed pull requests don't always mean the PR was completely rejected. Sometimes a different approach to achieve the same goal is considered preferable but maybe the original proposal got the conversation started. I'll be happy to fork the avr repo and submit a pull request, as soon as I figure out how to do it ;-). It's a very good skill to learn. Once you learn how to submit pull requests it makes it so easy to contribute to open source software projects. I really like that I can actually do something so direct about bugs I notice in software. I can now submit a fix for a minor issue in only a couple minutes. There is a lot of tutorials for how to do this. The basic outline is: Fork the repository. This creates an online copy of the repository that you own. Clone your fork. This makes a copy of the repository on your computer so you can edit it locally. GitHub does allow you to edit files via the web interface but this is pretty limiting. For example, you can only edit a single file per commit. It's ok for something minor like fixing a typo but for more significant work it's really better to work on local clone of the repository. The Git software is used to work with repositories. If you like, you can just use Git directly from the command line. You can also use a Git client software that provides a GUI for common Git operations. For a beginner, I recommend GitHub Desktop. It's pretty nice but also easy to use. Once you get more experienced and find GitHub Desktop doesn't have the advanced features you need you might decide to move to a different client or just start using Git from the command line. I now use Git Extensions client as well as Git directly but I got started with GitHub Desktop and really liked the interface. Make a branch for your proposed changes. This is useful because you can only submit one pull request per branch so if you want to submit multiple pull requests you just need to make multiple branches. Each will be branched from the master branch, which you keep in sync with the parent repository. Make the desired changes to the files. Commit your changes to the repository. Make sure to add a descriptive commit message. Push your commit from the local clone to your fork on GitHub. Submit a pull request from the branch of your fork to the parent repository. It seems a bit complicated at first but if you just jump in and start playing around it makes sense pretty fast. GitHub and GitHub Desktop makes everything as easy as possible. If you have any questions just let me know.
OPCFW_CODE
The New Economics of Mid-Market Computing The new economic reality of IT in the mid-market is that customers need systems that can scale easily, are simpler to manage and can lower their total cost of computing. Sun & Intel Solutions for small & medium sized businesses ... Musing of an IT professional, once a remote office admin for a highly political law firm... now on his own out in the real world. All topics will be covered, experiences, etc. Come back to find out how I manage to stay sane (or at least attempting to do so.) Little nuggets of trouble I find in my day-to-day work with Microsoft products. Windows Tips & Tricks Shortcuts, resources, help, tips, and tricks for Windows. The Programming Lifecycle with Microsoft .NET & More I will share and discuss real-time software development insight, experiences, and solutions while working on C#.NET, VB.NET, ASP.NET, XML, SQL Server and other related technologies. We'll exchange ideas on the various new technologies that Microsoft is releasing and other topics as they arise. I may even discuss current news if technology related. Life in a Mid-American IT Department The experiences of a Citrix administrator in Mid-Missouri, focusing on XenApp and XenDesktop. Windows Server 2008 The Windows Server 2008 group is for the discussion of issues that arise during the implementation, configuration, and daily use of the Microsoft Windows Server 2008 family of products, including development challenges, bugs, and end-user issues. The Microsoft Dynamicsgroup is your premier resource for objective technical discussion and peer-to-peer support on Microsoft's Dynamics CRM solution. Microsoft ISA Server The Microsoft ISA Server group is your premier resource for objective technical discussion and peer-to-peer support on Microsoft's ISA Server product line. The Windows Server group is your premier resource for objective technical discussion and peer-to-peer support on the Microsoft Windows Server family of products including Windows Server 2003, IIS Server, ISA Server, and SMS Server. The Microsoft IIS group is your premier resource for objective technical discussion and peer-to-peer support on the Microsoft Internet Information Server (IIS) product. Microsoft Exchange Server The Microsoft Exchange group is your premier resource for objective technical discussion and peer-to-peer support on Microsoft's Exchange software. The Microsoft SMS group is your premier resource for objective technical discussion and peer-to-peer support on the Microsoft Systems Management Server solution. Windows Server 2003 Add-Ons, Part 1 Windows Server 2003 is only a year old, but already there are dozens of ways you can power it up and make it easier to manage. In this first article of a multi-part series, Mitch Tulloch shows you how to get the most out of Windows Server 2003 with three feature packs. Windows Server Hacks: Disable "Run As" The "Run As" command is a great tool for network administrators. But in the hands of ordinary users it can be dangerous. Mitch Tulloch, author of Windows Server Hacks, shows you how to disable it for users so it can't do harm. Optimizing Your Servers' Pagefile Performance If you want to wring the most out of your servers' performance, you need to go beyond the default pagefile setting. Windows Server Hacks: Remotely Enable Remote Desktop What to do when you need to enable Remote Desktop on a remote server? This article has the answer. Windows Server Hacks: Creating a Shortcut for Searching Active Directory Active Directory lets administrators publish information resources on their networks. But how can users find those resources? Windows Server Hacks: Configuring Universal Group Caching Universal groups offer big benefits for system administrators, but can have downsides as well. This article shows you how to get the most out of them, and how to avoid the pitfalls. Windows Server Hacks: Transferring Ownership of Files Taking and giving ownership of files is trickier than you might think. This article shows you the best ways to handling transferring ownership. Upgrading and Migrating Print Servers Forget the "paperless office"; printers are still corporate workhorses. That means upgrading and migrating print servers is more important than ever. Windows Server Hacks: Creating a Password Reset Disk Losing a password for an account can be anything from a pain to a disaster. This article shows you how to solve the problem by creating a password recovery disk. Windows Server Hacks: Using Preconfigured User Profiles Roaming profiles make life easier for both users and system administrators. This article shows you how to preconfigure roaming profiles to make them even more effective. How Viable are Windows Servers as Alternatives to Mainframes? The goal of this article is to provide pointers in helping you make conclusions on whether there really is a viable Windows alternative out there for your organization. Oracle 10g -- Manually Create A Physical Standby Database Using Data Guard: Step-by-step instructions on how to Create a Physical Standby Database on Unix and Windows Servers This article provides step-by-step instructions on how to create a physical standby database on Unix and Windows servers. Win XP Pro as a Server Can Windows XP Pro be installed as a server? Slow RDP Connections to WIN2003 Terminal Server 64bit When our users use a thin client to connect to our WIN2003 64bit terminal server it takes a long time to connect and apply desktop settings. The problem only occurs when we use RDP to login to the Terminal server; we can login right away when you login using the console. We don't use a proxy server
OPCFW_CODE
Object Spy is an option or utility within UFT to add objects to the Object Repository. Object Spy can be accessed from the tool bar as shown below: After spying the object, the object hierarchy will be shown. Let us say we are spying the search text box in ‘http://www.amazon.com‘. The object properties will be shown as below After spying an object, click the ‘Highlight‘ option to highlight the object in the application. For adding the object into the Object Repository, click the ‘Add Objects‘ button in the Object Spy dialog. The properties and its values are displayed for the selected object in the dialog box, which is unique for UFT to recognize the objects while the script executes. The supported operations on the object can be retrieved by clicking the operation tab. Operations such as ‘Click‘ for a button, ‘Set‘ for a text box are retrieved from the ‘Operations‘ tab as shown below: Properties of an Object can be determined by other ways apart from Object Spy. They are explained below: GetROProperty is an in-built method used to retrieve runtime value of an object’s property. Below are the steps followed: 1. Record the Object in the OR. 2. Use Object Spy to determine the runtime Object Property to use. 3. Use GetROProperty method to retrieve the identified runtime property and store the value in a variable. 4. Use this value for further deductions. Refer below example code: 'Get value displayed in search text box and assign to variable searchValue = Browser("CreationTime:=0").Page("name:=Amazon.com: Online Shopping").WebEdit("name:=field-keywords","html tag:=INPUT").GetROProperty("innertext") The SetTOProperty changes the value of a test object property. Changing the property doesn’t affect the Object Repository or Active Screen, but just the way UFT identifies the object during runtime. Any changes you make using the SetTOProperty method applies only during the course of the run session, and do not affect the values stored in the test object repository. 'Code example to set Search text box object value during runtime Set obj = Browser("name:=Amazon.com.*").Page(“name:=Amazon.com.*”).WebEdit("name:=field-keywords") ‘Would retrieve the object html id from the test object description, whether it’s in the OR or DP defined ‘Now we set the name property obj.SetTOProperty “name”, “Books” ‘And retrieve it msgbox obj.GetROProperty(“name”) The GetTOProperty returns the value of the property from the test object’s description, i.e., the value used by UFT to identify the object. It returns the value of a property for a test object which UFT recorded to identify during the recording (run time). 'Code example to get Search text box object property value that was set during scripting/recording Set obj = Browser("name:=Amazon.com.*").Page(“name:=Amazon.com.*”).WebEdit("name:=field-keywords") msgbox obj.GetTOProperty(“html id”)
OPCFW_CODE
How to work with the file manager File manager is a repository for images, documents, audio recordings, and videos that you upload and send to users via SendPulse. Each service (bulk email service, website builder, and online course builder) has its own storage folder in the respective section. Let's take a look at how the storage size is calculated and how to work with files. The capacity of your file storage is not defined by the services themselves but is set based on the largest storage capacity allowed by all services’ pricing plans. For example, the email service’s Pro pricing plan offers a 200 MB storage capacity, the website builder’s Standard pricing plan offers a 100 MB storage capacity, and the online course builder’s pricing plan offers 1 GB. The largest storage capacity of all these services is 1 GB, which means that you can use 1 GB of storage space in all three services. Working with your storage In the file manager, you can upload, rename, download, and delete files and create folders. View folder contents To open the file manager, go to the File Manager tab. The created folders’ structure is displayed on the right. The / folder is the root folder of all services. To go to a subfolder, expand the main folder, and click on the desired folder. The files and folders selected on the left are displayed on the right. Use the search box to quickly find a folder or file by its name. You can display files as a list or gallery. Gallery view is useful when you want to see file previews. The list shows additional information such as file size and date added. It also shows folders’ total file size and the date the last file was added. Create a folder To create a new folder, navigate to the directory where you want to create it, and click Create Folder. Check the path where the folder will be created, enter the name of your folder, and click Create. To upload a file, navigate to the directory where you want to upload it, and click Upload File. Check the path where the file will be uploaded. Drag and drop your files, or click on the upload area to select them from your computer. You can upload files of any format and use them in the lesson builder for the Image, Video, Audio, and File elements. However, you can only use pictures in JPG, JPEG, PNG, BMP, GIF, SVG, and WEBP formats for your course cover and certificate. There is no individual file size limit, only the size of your available storage space is taken into account. Click Upload, and wait for the file to be uploaded. You can then close the window. You can also upload your files directly to the necessary folder in the lesson builder and then manage them in the repository. Edit files and folders To rename a folder, right-click on it, and select Edit in the context menu. To rename a file, right-click on it, and select Edit in the context menu. Delete files and folders To delete a folder, right-click on it, and select Delete in the context menu. To delete a file, right-click on it, and select Delete in the context menu. To delete multiple files, select them, and click Delete. Download a file You cannot open and view a file on the page in the file manager. You can download it and then view it in more detail. To download a file, right-click on it, and select Download in the context menu. Use files to create your course In the lesson builder, you can select your uploaded files in all formats when adding builder elements. Please note that for the files to be displayed properly, you need to upload them using their respective elements. Upload videos to the Video element, include images in the Gallery: Image element, add audio recordings to the Audio element, and upload files in other formats to the File element. In the course settings, you can choose an image for the course preview in the student's account and a cover image for your course website. In the certificate settings, you can select a background and additional image. Last Updated: 21.02.2024
OPCFW_CODE
<?php namespace Vaites\ApacheTika\Tests; use Vaites\ApacheTika\Client; /** * Tests for web mode */ class WebTest extends BaseTest { protected static $process = null; /** * Start Tika server and create shared instance of clients */ public static function setUpBeforeClass(): void { self::$client = Client::make('localhost', 9998, [CURLOPT_TIMEOUT => 30]); } /** * OCR language test */ public function testHttpHeader(): void { $client = Client::make('localhost', 9998)->setHeader('Foo', 'bar'); $this->assertEquals('bar', $client->getHeader('foo')); } /** * OCR language test */ public function testOCRLanguage(): void { $client = Client::make('localhost', 9998)->setOCRLanguage('spa'); $this->assertEquals(['spa'], $client->getOCRLanguages()); } /** * OCR languages test */ public function testOCRLanguages(): void { $client = Client::make('localhost', 9998)->setOCRLanguages(['fra', 'spa']); $this->assertEquals(['fra', 'spa'], $client->getOCRLanguages()); } /** * cURL multiple options test */ public function testCurlOptions(): void { $client = Client::make('localhost', 9998, [CURLOPT_TIMEOUT => 3]); $options = $client->getOptions(); $this->assertEquals(3, $options[CURLOPT_TIMEOUT]); } /** * cURL single option test */ public function testCurlSingleOption(): void { $client = Client::make('localhost', 9998)->setOption(CURLOPT_TIMEOUT, 3); $this->assertEquals(3, $client->getOption(CURLOPT_TIMEOUT)); } /** * cURL timeout option test */ public function testCurlTimeoutOption(): void { $client = Client::make('localhost', 9998)->setTimeout(3); $this->assertEquals(3, $client->getTimeout()); } /** * cURL headers test */ public function testCurlHeaders(): void { $header = 'Content-Type: image/jpeg'; $client = Client::make('localhost', 9998, [CURLOPT_HTTPHEADER => [$header]]); $options = $client->getOptions(); $this->assertContains($header, $options[CURLOPT_HTTPHEADER]); } /** * Set host test */ public function testSetHost(): void { $client = Client::make('localhost', 9998); $client->setHost('127.0.0.1'); $this->assertEquals('127.0.0.1', $client->getHost()); } /** * Set port test */ public function testSetPort(): void { $client = Client::make('localhost', 9998); $client->setPort(9997); $this->assertEquals(9997, $client->getPort()); } /** * Set scheme test */ public function testSetScheme(): void { $client = Client::make('https://localhost', 443, [], false); $this->assertEquals('https', $client->getScheme()); } /** * Set url host test */ public function testSetUrlHost(): void { $client = Client::make('http://localhost:9998'); $this->assertEquals('localhost', $client->getHost()); } /** * Set url port test */ public function testSetUrlPort(): void { $client = Client::make('http://localhost:9998'); $this->assertEquals(9998, $client->getPort()); } /** * Set retries test */ public function testSetRetries(): void { $client = Client::make('localhost', 9998); $client->setRetries(5); $this->assertEquals(5, $client->getRetries()); } /** * Set fetcher name test */ public function testFetcherName(): void { if(version_compare(self::$version, '2.0.0') >= 0) { $client = Client::make('localhost', 9998); $client->setFetcherName('FileSystemFetcher'); $this->assertEquals('FileSystemFetcher', $client->getFetcherName()); } else { $this->markTestSkipped('Apache Tika 1.x doesn\'t have tika-pipes module'); } } /** * Test delayed check */ public function testDelayedCheck(): void { $client = Client::prepare('localhost', 9997); $client->setPort(9998); $this->assertStringContainsString(self::$version, $client->getVersion()); } }
STACK_EDU
using CS.Base.Interface; using webServer.Models; using Microsoft.Extensions.DependencyInjection; using System; using System.Reflection; namespace CS.Base { public abstract class ServiceBase : ITransient { private Lazy<imdbContext> dbContext; protected imdbContext db => dbContext.Value; protected IServiceProvider ServiceProvider { get; } public ServiceBase(IServiceProvider serviceProvider) { ServiceProvider = serviceProvider; dbContext = new Lazy<imdbContext>(() => ServiceProvider.GetService<imdbContext>()); OnCreateProperties(); } protected virtual void OnCreateProperties() { object controller = this; //foreach (PropertyInfo declaredProperty in controller.GetType().GetTypeInfo().DeclaredProperties) //{ // if (declaredProperty.CanWrite) // { // declaredProperty.GetSetMethod(true).Invoke(controller, new object[1] // { // ActivatorUtilities.GetServiceOrCreateInstance(ServiceProvider, declaredProperty.PropertyType) // }); // } //} foreach (var declaredProperty in controller.GetType().GetTypeInfo().DeclaredFields) { declaredProperty.SetValue(controller, ActivatorUtilities.GetServiceOrCreateInstance(ServiceProvider, declaredProperty.FieldType)); } } } }
STACK_EDU
Wanted to start up a discussion to share some ideas & concerns I have about what might be next for suitcss. Would love to get everyone’s input. This came up quite a while ago, and we decided not to pursue it. Lerna makes this pretty damn easy. I think the benefit to development efficiency would be more than worth it. If we proceed, we would have to decide if we wanted to release packages in a independent versioning mode. fixed is simpler and has less complications with interacting packages. (my preference for suit) - The drawback is unexpected major version bumps for end-users (e.g. user is only using email@example.com but a breaking change to components-button bumps all packages to 5.x. However, in that case a user could just simply stay on firstname.lastname@example.org with no harm. Here’s an example of what the conversion process might look like: https://github.com/babel/babel-preset-env Leverage postcss ecosystem Now that our build is fully postcss, there is a lot we could do to streamline development, and also provide a level of extensibility for our users. For example, currently whenever we have a responsive variant of a utility, we just duplicate the code and change selectors. If a user wants to add a new viewport def (e.g. --xlg-viewport), they’re stuck recreating all the code if they want to use it with responsive utilities (e.g. We should be able to leverage postcss more into our build process to make things like this more configurable. Tailwind CSS, for example, has an @responive directive for generating these. media prefixing for components Whether or not it is a common or “suggested” practice, I still feel that we should have media-prefixing for components written into the spec. Previous discussions here and here. Long ago @necolas stated: For components, you can use media queries directly in your component CSS instead. And in general, I’m not really sure that viewport-width should be the concern of UI components. Certain frameworks will allow you to efficiently re-render the component HTML, and I think swapping out HTML is often simpler and more robust. While I agree, I think there are still simple situations where media prefixing for components can make a lot of sense. Consider something like this, where you might want to render a gutter on <div class="Grid md-Grid--withGutter lg-Grid--withGutter">…</div> NPM Scoped packages Would be nice to release things on npm with scoped packages (e.g. I’m wondering if we should deprecate the preprocessor in favor or using just a postcss and a suitcss-postcss plugin directly. It seems like it’s really only used internally anyway (correct me if I’m wrong). Wondering if it would be a good idea to have a community-managed suitcss-contrib org for people to source “vetted” community components and utils. Along these lines, I think it would be nice to somehow provide more guidance to the community for common problems everyone needs to solve. Currently , I think everyone kind of figures out how to manage things like typography components, forms, spacing, etc. on their own. While I don’t want to provide a kitchen-sink, many of these seem so common that guidance would be helpful.
OPCFW_CODE
using System; using System.Collections; using System.Collections.Generic; using System.ComponentModel.DataAnnotations.Schema; using System.Linq; using System.Linq.Expressions; using System.Reflection; namespace Oxygen.CommonTool { /// <summary> /// 基于表达式树的类型转换扩展 /// </summary> /// <typeparam name="TSource"></typeparam> /// <typeparam name="TTarget"></typeparam> public static class Mapper<TSource, TTarget> where TSource : class where TTarget : class { private static Func<TSource, TTarget> MapFunc { get; set; } private static Action<TSource, TTarget> MapAction { get; set; } /// <summary> /// 将对象TSource转换为TTarget /// </summary> /// <param name="source"></param> /// <returns></returns> public static TTarget Map(TSource source) { if (MapFunc == null) MapFunc = GetMapFunc(); return MapFunc(source); } public static List<TTarget> MapList(IEnumerable<TSource> sources) { if (MapFunc == null) MapFunc = GetMapFunc(); return sources.Select(MapFunc).ToList(); } /// <summary> /// 将对象TSource的值赋给给TTarget /// </summary> /// <param name="source"></param> /// <param name="target"></param> public static void Map(TSource source, TTarget target) { if (MapAction == null) MapAction = GetMapAction(); MapAction(source, target); } private static Func<TSource, TTarget> GetMapFunc() { var sourceType = typeof(TSource); var targetType = typeof(TTarget); if (IsEnumerable(sourceType) || IsEnumerable(targetType)) throw new NotSupportedException("Enumerable types are not supported,please use MapList method."); //Func委托传入变量 var parameter = Expression.Parameter(sourceType, "p"); var memberBindings = new List<MemberBinding>(); var targetTypes = targetType.GetProperties().Where(x => x.PropertyType.IsPublic && x.CanWrite); foreach (var targetItem in targetTypes) { var sourceItem = sourceType.GetProperty(targetItem.Name); //判断实体的读写权限 if (sourceItem == null || !sourceItem.CanRead || sourceItem.PropertyType.IsNotPublic) continue; //标注NotMapped特性的属性忽略转换 if (sourceItem.GetCustomAttribute<NotMappedAttribute>() != null) continue; var sourceProperty = Expression.Property(parameter, sourceItem); //当非值类型且类型不相同时 if (!sourceItem.PropertyType.IsValueType && sourceItem.PropertyType != targetItem.PropertyType) { //判断都是(非泛型、非数组)class if (sourceItem.PropertyType.IsClass && targetItem.PropertyType.IsClass && !sourceItem.PropertyType.IsArray && !targetItem.PropertyType.IsArray && !sourceItem.PropertyType.IsGenericType && !targetItem.PropertyType.IsGenericType) { var expression = GetClassExpression(sourceProperty, sourceItem.PropertyType, targetItem.PropertyType); memberBindings.Add(Expression.Bind(targetItem, expression)); } //集合数组类型的转换 if (typeof(IEnumerable).IsAssignableFrom(sourceItem.PropertyType) && typeof(IEnumerable).IsAssignableFrom(targetItem.PropertyType)) { var expression = GetListExpression(sourceProperty, sourceItem.PropertyType, targetItem.PropertyType); memberBindings.Add(Expression.Bind(targetItem, expression)); } continue; } //可空类型转换到非可空类型,当可空类型值为null时,用默认值赋给目标属性;不为null就直接转换 if (IsNullableType(sourceItem.PropertyType) && !IsNullableType(targetItem.PropertyType)) { var hasValueExpression = Expression.Equal(Expression.Property(sourceProperty, "HasValue"), Expression.Constant(true)); var conditionItem = Expression.Condition(hasValueExpression, Expression.Convert(sourceProperty, targetItem.PropertyType), Expression.Default(targetItem.PropertyType)); memberBindings.Add(Expression.Bind(targetItem, conditionItem)); continue; } //非可空类型转换到可空类型,直接转换 if (!IsNullableType(sourceItem.PropertyType) && IsNullableType(targetItem.PropertyType)) { var memberExpression = Expression.Convert(sourceProperty, targetItem.PropertyType); memberBindings.Add(Expression.Bind(targetItem, memberExpression)); continue; } if (targetItem.PropertyType != sourceItem.PropertyType) continue; memberBindings.Add(Expression.Bind(targetItem, sourceProperty)); } //创建一个if条件表达式 var test = Expression.NotEqual(parameter, Expression.Constant(null, sourceType));// p==null; var ifTrue = Expression.MemberInit(Expression.New(targetType), memberBindings); var condition = Expression.Condition(test, ifTrue, Expression.Constant(null, targetType)); var lambda = Expression.Lambda<Func<TSource, TTarget>>(condition, parameter); return lambda.Compile(); } /// <summary> /// 类型是clas时赋值 /// </summary> /// <param name="sourceProperty"></param> /// <param name="targetProperty"></param> /// <param name="sourceType"></param> /// <param name="targetType"></param> /// <returns></returns> private static Expression GetClassExpression(Expression sourceProperty, Type sourceType, Type targetType) { //条件p.Item!=null var testItem = Expression.NotEqual(sourceProperty, Expression.Constant(null, sourceType)); //构造回调 Mapper<TSource, TTarget>.Map() var mapperType = typeof(Mapper<,>).MakeGenericType(sourceType, targetType); var iftrue = Expression.Call(mapperType.GetMethod(nameof(Map), new[] { sourceType }), sourceProperty); var conditionItem = Expression.Condition(testItem, iftrue, Expression.Constant(null, targetType)); return conditionItem; } /// <summary> /// 类型为集合时赋值 /// </summary> /// <param name="sourceProperty"></param> /// <param name="targetProperty"></param> /// <param name="sourceType"></param> /// <param name="targetType"></param> /// <returns></returns> private static Expression GetListExpression(Expression sourceProperty, Type sourceType, Type targetType) { //条件p.Item!=null var testItem = Expression.NotEqual(sourceProperty, Expression.Constant(null, sourceType)); //构造回调 Mapper<TSource, TTarget>.MapList() var sourceArg = sourceType.IsArray ? sourceType.GetElementType() : sourceType.GetGenericArguments()[0]; var targetArg = targetType.IsArray ? targetType.GetElementType() : targetType.GetGenericArguments()[0]; var mapperType = typeof(Mapper<,>).MakeGenericType(sourceArg, targetArg); var mapperExecMap = Expression.Call(mapperType.GetMethod(nameof(MapList), new[] { sourceType }), sourceProperty); Expression iftrue; if (targetType == mapperExecMap.Type) { iftrue = mapperExecMap; } else if (targetType.IsArray)//数组类型调用ToArray()方法 { iftrue = Expression.Call(typeof(Enumerable), nameof(Enumerable.ToArray), new[] { mapperExecMap.Type.GenericTypeArguments[0] }, mapperExecMap); } else if (typeof(IDictionary).IsAssignableFrom(targetType)) { iftrue = Expression.Constant(null, targetType);//字典类型不转换 } else { iftrue = Expression.Convert(mapperExecMap, targetType); } var conditionItem = Expression.Condition(testItem, iftrue, Expression.Constant(null, targetType)); return conditionItem; } private static Action<TSource, TTarget> GetMapAction() { var sourceType = typeof(TSource); var targetType = typeof(TTarget); if (IsEnumerable(sourceType) || IsEnumerable(targetType)) throw new NotSupportedException("Enumerable types are not supported,please use MapList method."); //Func委托传入变量 var sourceParameter = Expression.Parameter(sourceType, "p"); var targetParameter = Expression.Parameter(targetType, "t"); //创建一个表达式集合 var expressions = new List<Expression>(); var targetTypes = targetType.GetProperties().Where(x => x.PropertyType.IsPublic && x.CanWrite); foreach (var targetItem in targetTypes) { var sourceItem = sourceType.GetProperty(targetItem.Name); //判断实体的读写权限 if (sourceItem == null || !sourceItem.CanRead || sourceItem.PropertyType.IsNotPublic) continue; //标注NotMapped特性的属性忽略转换 if (sourceItem.GetCustomAttribute<NotMappedAttribute>() != null) continue; var sourceProperty = Expression.Property(sourceParameter, sourceItem); var targetProperty = Expression.Property(targetParameter, targetItem); //当非值类型且类型不相同时 if (!sourceItem.PropertyType.IsValueType && sourceItem.PropertyType != targetItem.PropertyType) { //判断都是(非泛型、非数组)class if (sourceItem.PropertyType.IsClass && targetItem.PropertyType.IsClass && !sourceItem.PropertyType.IsArray && !targetItem.PropertyType.IsArray && !sourceItem.PropertyType.IsGenericType && !targetItem.PropertyType.IsGenericType) { var expression = GetClassExpression(sourceProperty, sourceItem.PropertyType, targetItem.PropertyType); expressions.Add(Expression.Assign(targetProperty, expression)); } //集合数组类型的转换 if (typeof(IEnumerable).IsAssignableFrom(sourceItem.PropertyType) && typeof(IEnumerable).IsAssignableFrom(targetItem.PropertyType)) { var expression = GetListExpression(sourceProperty, sourceItem.PropertyType, targetItem.PropertyType); expressions.Add(Expression.Assign(targetProperty, expression)); } continue; } //可空类型转换到非可空类型,当可空类型值为null时,用默认值赋给目标属性;不为null就直接转换 if (IsNullableType(sourceItem.PropertyType) && !IsNullableType(targetItem.PropertyType)) { var hasValueExpression = Expression.Equal(Expression.Property(sourceProperty, "HasValue"), Expression.Constant(true)); var conditionItem = Expression.Condition(hasValueExpression, Expression.Convert(sourceProperty, targetItem.PropertyType), Expression.Default(targetItem.PropertyType)); expressions.Add(Expression.Assign(targetProperty, conditionItem)); continue; } //非可空类型转换到可空类型,直接转换 if (!IsNullableType(sourceItem.PropertyType) && IsNullableType(targetItem.PropertyType)) { var memberExpression = Expression.Convert(sourceProperty, targetItem.PropertyType); expressions.Add(Expression.Assign(targetProperty, memberExpression)); continue; } if (targetItem.PropertyType != sourceItem.PropertyType) continue; expressions.Add(Expression.Assign(targetProperty, sourceProperty)); } //当Target!=null判断source是否为空 var testSource = Expression.NotEqual(sourceParameter, Expression.Constant(null, sourceType)); var ifTrueSource = Expression.Block(expressions); var conditionSource = Expression.IfThen(testSource, ifTrueSource); //判断target是否为空 var testTarget = Expression.NotEqual(targetParameter, Expression.Constant(null, targetType)); var conditionTarget = Expression.IfThen(testTarget, conditionSource); var lambda = Expression.Lambda<Action<TSource, TTarget>>(conditionTarget, sourceParameter, targetParameter); return lambda.Compile(); } private static bool IsNullableType(Type type) { return type.IsGenericType && type.GetGenericTypeDefinition() == typeof(Nullable<>); } private static bool IsEnumerable(Type type) { return type.IsArray || type.GetInterfaces().Any(x => x == typeof(ICollection) || x == typeof(IEnumerable)); } } }
STACK_EDU
Data retrieval time - Amazon Glacier Amazon Glacier FAQ page contains several points, which talk about time needed to retrieve data from Amazon Glacier. For example: Standard retrievals allow you to access any of your archives within several hours. Standard retrievals typically complete within 3 – 5 hours ... & ... Bulk retrievals typically complete within 5 – 12 hours. Why does it take so long to retrieve data from Amazon Glacier in comparison with other storage classes? Why does it take so long? Because that's how it's designed. Amazon Glacier is specifically designed to be a low-cost low-access storage service for "data archiving and long-term backup." If you want regular immediate access to your data, then you need something like Amazon S3, which is a higher-cost instant-access storage service. Please also note that it's called "Glacier," and glaciers are not known for being fast. I suspect they're using tape drives or something similar, but I can't comment on the specific technical aspects, nor can I find that info on Amazon's web pages. So simply put - the type of volume and its I/O size (which is capped at some specific limit due to underlying technology) are the main causes for lengthy process of data retrieval from Amazon Glacier. Is that correct? @an0o0nym Not quite. As was said, it was build to be reliable, but not fast. So AWS invested in reliability and cut costs related to speed. for example they can have one tape-changer per X tapes instead of 10. when the changer have processed all queue before your request and eventually starting with your, then data transfers quick. It is just my guess, though. I've found this on Glacier Wiki Page: ZDNet says, that according to private e-mail, Glacier runs on "inexpensive commodity hardware components". In 2012, ZDNet quoted a former Amazon employee as saying that Glacier is based on custom low-RPM hard drives attached to custom logic boards where only a percentage of a rack's drives can be spun at full speed at any one time. (Similar technology is also used by Facebook.) There is some belief amongst users that the underlying hardware used for Glacier storage is tape-based, owing to the fact that Amazon has positioned Glacier as a direct competitor to tape backup services (both on-premises and cloud-based). This confusion is exacerbated by the fact that Glacier has archive retrieval delays (3–5 hours before archives are available) similar to that of tape-based systems and a pricing model that discourages frequent data retrieval. The Register claimed that Glacier runs on Spectra T-Finity tape libraries with LTO-6 tapes. Others have conjectured Amazon using off-line shingled magnetic recording hard drives, multi-layer Blu-ray optical discs, or an alternative proprietary storage technology. Amazon Glacier has 2 stages: the retrieval and the download. It was created for long term storage that does not require frequent retrieval; such as cloud backups. Retrieval requests take typically 3 - 5 hours and then the data is placed in a staging area for the customer to download it. Retrieved data is staged for 24 hours, therefore it's important to download the data within that period. Download time depends on your bandwidth. The reason for the lengthy time is that Amazon prices Glacier lower than other storage options, which are intended for more frequent data access. However, Glacier does have different types of data retrieval available. If needed, they do have expedited retrieval available which makes data available at a much faster rate, as soon as 1-5 minutes. This type is more expensive than the standard Glacier data retrieval. AWS has an FAQ with additional details on the various types of retrievals: https://aws.amazon.com/glacier/faqs/. Why did you link to N2WS?
STACK_EXCHANGE
Sol In Noctem I don't use them since i don't speedrun and play rather casually. I like to read stories and be a part of them. In general my recommendations would the the less addiction to addons you have the better. The game should be played like blizzard meant it to. I do know that there is a bunch of essential addons you really need to have one. I am definitely in the do not use camp. Back in the day, I used quest helper addons. At first, I was please with the convenience, but after a while questing just seemed more like a chore than, well, a quest. Instead of listening to an NPCs story and trying to figure out how to help, everything was done for me. It took out all the adventure. After blizzard integrated a quest helper, I would disable it. But it came to a point that some quest text wouldn't give any information about locations, and you were required to use the quest map to find it. IMO, after that WoW stopped being an RPG and just became a G. But to each his own. I just don't want the writing to require a quest helper, since I prefer not to have one. Using websites like Thottbot just like back then for every quest is the way to go. But if I get too lazy then I will just get a quest addon while still reading through and follow along with the stories. I'm sort of a weird split... I'd prefer if there were no addons. However, I will use any and all addons that give me any advantage. So if questie is available, despite the fact that I am familiar with the content, I will use it. I'd probably quit if Blizz implemented LFR (I know they wont) but if someone makes OQueue group finder available (this is an LFG/LFR addon) I will download it immediately and use it begrudgingly. I will complain about this but simultaneously contribute to the problem by using the addon. I won't be using a quest addon, but I will probably use a leveling guide. I don't care much about questing, but I want to avoid the retail experience of easy-mode, run-to-the-map-marker style of doing things. The leveling guide will mostly just be for efficiency, so that I pick up the right quests, at the right time before going into an instance or new zone. I don't want to run back and forth a zillion times or miss picking up important quests/quest-lines. I secretly hope the addon API allows something like Storyline but it's a long shot. I won't use a questhelper type addon though. I'd like to be a purist and not use a mod like that... if I was single and had no aspirations of a job or family. Unfortunately I kinda like my family which needs me to keep the job which forces me to make changes to things I love so I can still fit them in. I will be conceding and using a quest helper to reduce wasted time in-game when I do get to play between responsibilities. <Goblin Rocket Fuel Rats> I'm lucky, the fiance wants to play as well and we don't have kids yet. https://www.wowinterface.com/downloads/ ... assic.html It's working at the moment in the beta We'll have to see what it's like on release. On another note, for those of you who will miss ElvUI, I just tried TukUI and it's very much in the same vein and works flawlessly at the moment. I haven't decided yet. I might, and I might not. Probably won't in the beginning at least.
OPCFW_CODE
The linear regression model lrm the simple or bivariate lrm model is designed to study the relationship between a pair of variables that appear in a data set. Sinharay, in international encyclopedia of education third edition, 2010. Review of multiple regression university of notre dame. When some pre dictors are categorical variables, we call the subsequent. Multiple linear regression model we consider the problem of regression when the study variable depends on more than one explanatory or independent variables, called a multiple linear regression model. How do multiple regression and linear regression differ. Regresion multipleejercicios free download pdf ebook. Handbook of regression analysis samprit chatterjee new york university jeffrey s. Linear regression analysis part 14 of a series on evaluation of scientific publications by astrid schneider, gerhard hommel, and maria blettner summary background. Notes on linear regression analysis duke university. Based on a set of independent variables, we try to predict the dependent variable result. The concepts behind linear regression, fitting a line to data with least squares and rsquared, are pretty darn simple, so lets get down to it. The book also serves as a valuable, robust resource for professionals in the fields of engineering, life and biological sciences, and the social sciences. Orlov chemistry department, oregon state university 1996 introduction in modern science, regression analysis is a necessary part of virtually almost any data reduction process. To see how these tools can benefit you, we recommend you download and install the. A sound understanding of the multiple regression model will help you to understand these other applications. Marill, md abstract the applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Robust statistical modeling using the t distribution pdf. In the wolfram language, linearmodelfit returns an object that contains fitting information for a linear regression model and allows for easy extraction of results and diagnostics. Weve spent a lot of time discussing simple linear regression, but simple linear regression is, well, simple in the sense that there is usually more than one variable that helps explain the variation in the response variable. Multiple linear regression attempts to model the relationship between two or more explanatory variables and a response variable by fitting a linear equation to observed data. When there are multiple input variables, literature from statistics often refers to the method as multiple linear regression. This first chapter will cover topics in simple and multiple regression, as well as the. Multiple criteria linear regression pdf free download. Multiple regression, key theory the multiple linear regression model is y x. Multiple linear regression analysis using microsoft excel by michael l. In multiple linear regression, x is a twodimensional array with at least two columns, while y is usually a onedimensional array. In many applications, there is more than one factor that in. This matlab function returns a vector b of coefficient estimates for a multiple linear regression of the responses in vector y on the predictors in matrix x. More recently, alternatives to least squares have also been used, coleman and larsen 1991 and caples et al. Following that, some examples of regression lines, and their interpretation, are given. The dependent variable depends on what independent value you pick. The difference between the equation for linear regression and the equation for multiple regression is that the equation for multiple regression must be able to handle multiple inputs, instead of only the one input of linear regression. Third, multiple regression offers our first glimpse into statistical models that use more than two quantitative. In this video, i will be talking about a parametric regression method called linear regression and its extension for multiple features covariates, multiple regression. Regresion lineal multiple ejercicio resuelto zpnx62pk5ynv. Introduction to linear regression analysis ebook by. It is not part of stata, but you can download it over the internet like this. We can ex ppylicitly control for other factors that affect the dependent variable y. This term is distinct from multivariate linear regression, where multiple correlated dependent variables are. Multiple linear regression and matrix formulation introduction i regression analysis is a statistical technique used to describe relationships among variables. Linear regression in spss a simple example spss tutorials. Practically, we deal with more than just one independent variable and in that case building a linear model using multiple input variables is important to accurately model the system for better prediction. Home regression multiple linear regression tutorials linear regression in spss a simple example a company wants to know how job performance relates to iq, motivation and social support. Lets dive right in and perform a regression analysis using the variables api00. It allows the mean function ey to depend on more than one explanatory variables. This model generalizes the simple linear regression in two ways. Isakson 2001 discusses the pitfalls of using multiple linear regression analysis in real estate appraisal. Using multivariable linear regression technique for. Multiple regression and linear regression do the same task. Models that include interaction effects may also be analyzed by multiple linear regression methods. Multiple linear regression models have been extensively used in education see, e. These features can be taken into consideration for multiple linear regression. Linear regression for machine learning machine learning mastery. A study on multiple linear regression analysis article pdf available in procedia social and behavioral sciences 106. This work is about the multicollinearity problem between the regressive variables in a multiple lineal regression model. Multiple regression generally explains the relationship between multiple independent or predictor variables and one dependent or criterion variable. The independent variable is the one that you use to predict what the other variable is. Multiple regression analysis is more suitable for causal ceteris paribus analysis. Regression analysis is an extremely powerful tool that enables the researcher to learn more about the relationships within the data being studied. Multiple regression models thus describe how a single response variable y depends linearly on a. Therefore, in this article multiple regression analysis is described in detail. Regression analysis in excel how to use regression. The following data gives us the selling price, square footage, number of bedrooms, and age of house in years that have sold in a neighborhood in the past six months. Chapter 305 multiple regression introduction multiple regression analysis refers to a set of techniques for studying the straightline relationships among two or more variables. One of the most common statistical models is the linear regression model. Regression and correlation 346 the independent variable, also called the explanatory variable or predictor variable, is the xvalue in the equation. Least squares fitting is a common type of linear regression that is useful for modeling relationships within data. Multiple linear regression matlab regress mathworks. Scilab documents at can be downloaded at the following site. Learn how to use r to implement linear regression, one of the most common statistical modeling approaches in data science. A linear model predicts the value of a response variable by the linear combination of predictor variables or functions of predictor variables. Regression with stata chapter 1 simple and multiple regression. Understanding multiple regression towards data science. It enables the identification and characterization of relationships among multiple factors. Review of multiple regression page 4 the above formula has several interesting implications, which we will discuss shortly. If you get a small partial coefficient, that could mean that the predictor is not well associated with the dependent variable, or it could be due to the predictor just being highly redundant with one or. Linear regression is a commonly used predictive analysis model. Construct and analyze a linear regression model with interaction effects and interpret the results. Second, multiple regression is an extraordinarily versatile calculation, underlying many widely used statistics methods. This is a simple example of multiple linear regression, and x has exactly two columns. For example, consider the cubic polynomial model which is a multiple linear regression model with three regressor variables. The least squares regression is often used to assess residential property values, ihlanfeldt and martinezvazquez 1986. Therefore, job performance is our criterion or dependent. Linear regression analysis world scientific publishing. Regression is a statistical analysis which is used to predict the outcome of a numerical variable. Multiple linear regression with math and code towards. Regression with sas chapter 1 simple and multiple regression. This module highlights the use of python linear regression, what linear regression is, the line of best fit, and the coefficient of x. At the end, two linear regression models will be built. Polyno mial models will be discussed in more detail in chapter 7. In statistics, linear regression is a linear approach to modeling the relationship between a. In this post you will discover the linear regression algorithm, how it. A dependent variable is modeled as a function of several independent variables with corresponding coefficients, along with the constant term. This has been a guide to regression analysis in excel. The critical assumption of the model is that the conditional mean function is linear. Multiple linear regression so far, we have seen the concept of simple linear regression where a single predictor variable x was used to model the response variable y. Multiple linear regression mlr is a statistical technique that uses several explanatory variables to predict the outcome of a. Multiple regression is the statistical procedure to predict the values of a response. Popular spreadsheet programs, such as quattro pro, microsoft excel. Introduction to linear regression analysis, fifth edition is an excellent book for statistics and engineering courses on regression at the upperundergraduate and graduate levels. This volume presents in detail the fundamental theories of linear regression analysis and diagnosis, as well as the relevant statistical computing techniques so that readers are able to actually model the data using the methods and techniques described in. A function for predicting values from a multiple regression. Show full abstract above three factors, a ternary linear regression model 2 is made. Univariate statistical techniques such as simple linear regression use a single. Simple linear and multiple regression in this tutorial, we will be covering the basics of linear regression, doing both simple and multiple regression models. Barthel, in international encyclopedia of education third edition, 2010. Wage equation if weestimatethe parameters of thismodelusingols, what interpretation can we give to. As you know or will see the information in the anova table has several uses. I the simplest case to examine is one in which a variable y, referred to as the dependent or target variable, may be.96 701 65 486 747 640 1133 867 1277 1490 36 128 842 399 242 549 881 282 1049 1532 703 820 1329 52 871 200 1555 1313 1021 187 168 356 992 1093 53 678 860 1008 824 860 421 633 605
OPCFW_CODE
Today I bought Assassin's Creed 4: Black Flag reinstalled it and connected with Uplay. I’ve launched the game, chose single player, filled out the name of the game file, confirmed and then the game froze. From then on I wasn’t even able to get to the menu (where you choose if you want to play multiplayer or single player, or to do something else). Every time I lunch the game it ether starts on the background and it isn’t responding, or there is a completely black screen with the cursor (blue spinning circle (loading)) and after I press whatever key a popup Windows appeal AC4BFSP.exe is not responding. I’ve already uninstalled the game and installed it again, but I wasn’t able to get to the menu ever again. I always end up with the black green and the loading blue circle. First I had a XBOX controller connected to my laptop (that was the time I was able to get at least to the menu) but after it didn’t work with the controller connected several times I tried to lunch the game with nothing connected into the laptop, but it didn’t work ether. I’ve restarted the computer several times. I bought the game on a disc. How to fix this? Thank you for your advice! Yeah same problem and apparently no fixes or anything as no1 has been able to say how to get it towork tough some said that running as administrator or uninstalling some fire thing works i have none of those problems just a damn black screen... so sad and i understood that sharing is disabled for some days now as well.... I have exactly the same problem, but after two reinstallation I think my PC didn't uninstall it correctly and when I read the Disk1 to install AC it only proposes to play the game, like if it was already on my PC.. I deleted myself all the folders that might block the installation but nothing changed. I'm worried the CD doesn't have unlimited installation.. Please help me, I'm not very good with computers and I don't know what to do. I would like at least to have the game on my PC and I will check later for the black screen problem... If a lot of people have this problem it will be fixed quickly, I think. (A solution was to download the game from the ubisoft launcher but I have a very slow Internet speed and it says it'll take 48h to downlaod all ! so much fun ! ) (sorry for my english, it's not my mother language) I was wondering why - with the exception of AC1 - no Assassins Creed game works on my Windows 8.1 PC. AC3 ran fine in January on Windows 8.0, but since 8.1 (complete reinstall of the OS) I haven't had one gameing minute of Assassins Creed THe only thing I get is: Faulting Application Path: D:\Steam\steamapps\common\Assassin's Creed IV Black Flag\AC4BFSP.exe Problem Event Name: APPCRASH Application Name: AC4BFSP.exe Application Version: 0.0.0.0 Application Timestamp: 5285f99f Fault Module Name: ntdll.dll Fault Module Version: 6.3.9600.16408 Fault Module Timestamp: 523d45fa Exception Code: c0000005 Exception Offset: 0003ea02 OS Version: 6.3.9600.2.0.0.256.48 Locale ID: 3079 Additional Information 1: 5861 Additional Information 2: 5861822e1919d7c014bbb064c64908b2 Additional Information 3: 01d7 Additional Information 4: 01d7340064827245f2249cd1f1a7c264 Extra information about the problem Bucket ID: f590128470f906c49d927fdeccae68be (-322973569) Uplay is working fine, even reinstalled, but I cannot get the game to run at all my game is finally running again After nearly changing every variable I got an advice from Ubisoft support to install the game on C (my steam folder is located on D, so all games go on D as well) and it worked. The reason being is - at least to my expertise - the Windows Storage Spaces. My D drive is located on a Storage Space which I use for over a year now (since Windows 8 came out on day one). With Windows 8.0 it worked fine with all Ubisoft games before, but with Windows 8.1 it doesn't - only AC1 works with that configuration. I haven't tested all of them yet (since I don't have that much room on my C-Drive) but I'm pretty sure thats the reason for it... I don't know for how many of you this may be helpful, but at least I am able to jump around the Caribbean now
OPCFW_CODE
Why am i running outta disk space? Hello Linux users and please excuse my dumb question! I remember a couple months ago when i was setting up my OS, i allocated a 100gb partition for Linux Ubuntu, and according to what i have read on the internet at that time 100gb seemed more than enough, however since yesterday i'm not able to download new programs using the sudo command in the command line and i get frequently a notification telling me that i'm running outta space, i also do not understand why my Ubuntu OS is only using 14.7gb when i have allocated 100gb to the system! Thanks in advance, much love! Apparently, you have not allocated 100GB to Ubuntu. :~) It seems that most of that space went to that /home partition which is still Ubuntu isn't it ? Unless you specifically put your /home in a different partition, it is included in the / partition. i just provided another screenshot could you check it out please! This doesn't tell you much other than you allocated 14.7 GB to root. Run lsblk which shows how much was allocated to your home folder. well knowing that i own a 500gb SSD , would you recommend me to just reinstall the system in order to allocate more space to root ? Instead of the screenshot of the "device and locations" you provided please upload a screenshot of the app Gparted. You may have to install it first. Select /dev/sda from the top right drop down menu if it is not already selected. It will not have any personal identifier to black out either. 14.7 GB is not 100GB. Applications and snaps will install to the almost-full 14.7GB / (root), not to the 49.3GB /home. It's a common misunderstanding for many new users. Ubuntu does not require a separate /home partition or separate swap partition - those are options in the installer, but not the defaults. So my only option to get more root space for apps and snaps is to reinstall the system ? Boot into Live Ubuntu. Open Gparted. /home (sda7) is too large. Reduce it to 30GB, leaving 19GB free before it. You don't need a partition for linux swap. You already have a swap file in use - /swapfile so you can delete sda6. That will leave about 34GB of unallocated space. You then increase sda5 to fill that space which will give you just under 48GB for root, which is ample space. Atm using Gparted i'm shrinking 10gb from /dev/sda4 hoping i'll be able to add that space to the root partition, im just praying to god this won't break my pc lol, i will follow your instructions right afterwards! 150 GiB is allocated to all Ubuntu partitions combined. 2) Using the Ubuntu defaults of no dedicated swap partition, and just one partition (with /home under /) is the most efficient use of space. 3) The only justification for a dedicated swap partition would be if you are using Hiberation. Are you using Hibernation? Take a look at https://help.ubuntu.com/community/SwapFaq and , if not, size the swap file accordingly. Thanks everyone problem solved, i had to swap-off the linux-swap partition using Gparted then cut some space from it, finally allocate that free space to /root thanks again. You allocated a tiny potion to /, which is where programs get installed; you also need space there when release-upgrade time comes (to download the upgrades and then install). The space allocated to /home is not available for system applications to use (you reserved it for user files; do you want the system to ignore what you told it??) I like 32gb for / myself, though 25gb is recommended for most desktop users. Your allocated space left almost no space for adding programs, fine if that's you; it isn't my expectations though on my system. Well i just ad-hoc-ed it for the moment, now that i know what i'm doing i'm planning on giving my root at least 100gb Problem solved, i just had to cut some space from Linux-swap and append it to root using Gparted, thanks everyone for your help.
STACK_EXCHANGE
Is it okay to have number of notes in a bar that doesn't conform with the time signature? Just a newbie question. In musical notation, is it okay to have number of notes in a bar that doesn't conform with the Time Signature? For example, look at this music sheet: Can you see that the first bar only has a 1/4 note while the Time Signature is 4/4 (judging from the rest of the bars). There is no time signature in this example. There is nothing indicating what gets the beat and how many make up the measure. Yes. This is a pick-up bar, also known as an anacrusis. This melody starts on beat 4 and so this note could also be called an up-beat. That is why the first bar is incomplete. When this happens the last bar should have a complementary number of beats (in other words, the number of beats in the time signature minus the pick-up bar, 3 beats in this case). As the music starts on beat 4, the first note feels "weaker" rhythmically than the second note, which happens on beat 1, and so has rhythmic emphasis. Also, when you have an incomplete first bar, this is usually numbered "bar 0", with the first full bar being "bar 1". Bob, I don't think there a requirement for the last bar to have the complementary number of counts. It happens, for example in rounds, where there is an explicit requirement to repeat the melody line. But there are many scores where the piece starts with a pick up measure and ends with a full measure. @RolandBouman that's not musically correct though. If there is a pick up measure theoretically there should be a measure at the end combined with the first measure to be the same value as the time signature. Else the first measure isn't really a pick up measure it just starts in a different time signature. I'll take your word for it RB; I'm having a good look around the scores/music on my desk and I can't find anything with an anacrusis that doesn't have a shorter last bar…! It makes sense to have the 3 beats in that last bar, - signified by a double line.That adds to four beats, with the anacrucis. Were the piece to be counted in, most of us would count 1-2-3-.If there was a repeat, it would more likely be on the first bar line. Is this behavior allowed at bar besides the first bar? @suud - no, it can't. Unless there is a time sig. change. If it is 4/4, then there has to be 4 beats in each bar, otherwise it's not 4/4 !! For a long time now, I've omitted putting 4/4 at beginnings. Far more tunes are in 4/4 that I see it as the 'default' time sig., therefore not required. For any other, yes it gets put in. BUT - how long would it take for a reader to ascertain the time sig. of any tune - the first proper bar is a great clue ! @BobBroadley Maybe I'm wrong. A quick check of the scores on my desk indeed seem to verify your remark, not mine. I do see many scores that don't start on beat one, but they actually fill out the start of the first bar with rests, so that it does add up. I always called those pick up measures too. Would that be the correct term in that case? If not, what woudld be? @RolandBouman - since you 'play' those rests, it should be regarded as a full bar. You'd certainly 'play' them if there was a second time round. @Tim, right. But the listener doesn't know that. Certainly we're interested in naming the musical phenomenon of the piece starting not on the first beat of meter rather than in the exact device we use to denote that? @RolandBouman - why should the listener need to know that? It wouldn't matter here what the time sig. is. @Tim, my point is simply, what kind of thing does the term "pick up measure" refer to. Is it to this notational device, where the first bar is "incomplete" or is it the musical phenomenon of a piece not starting on the first downbeat. Wikipedia gives this definition: "In music, an anacrusis is the note or sequence of notes which precedes the first downbeat in a bar." That definition fits both the example of a first bar starting of with rests, as well as an incomplete first bar. Here is another sheet version that conform with 4/4 signature: Song of Time. Would there be any difference to the melody effect between one with pick-up bar and one without? Yes, the version you link to would sound very strange. Even though in theory it looks the same, and is played the same, the rhythmic stress of the first beat of the bar is now on your first note, rather than second note. This rhythmic stress would be changed throughout the melody, in fact. Wow, really? So do you think the first version (with pickup bar) sounds better? Here is the full song's link. Can you determine the song uses pickup bar or not? Yes, absolutely! The second note of the recording you link to is definitely on beat 1, it is the first rhythmically strong note. The tune seems to be in some sort of D.Possibly D Dorian. With an anacrucis on the dominant A, it fits well. About the last bar should have complementary number of beats, is it really compulsory or will have side effect if not followed? I saw Happy Birthday's sheet doesn't follow this rule, the last bar is still 3/4. Yes, you see the problem there is, that doesn't look like conventionally published music… I would definitely finish Happy Birthday with a minim. As others pointed out, the piece you cite has a "pick up measure". Note though that it is not categorically ok for measures to not add up to the number of beats in the time signature, it can only happen at the first measure. There is another case where you can have an apparent mismatch in the number of notes and the time signature. This happens if the measure has "grace notes". Grace notes "don't count" - they don't add to the total duration for that measure. To elaborate on @keshlam's point about older music, there are all sorts of musics for which regular measure lengths are simply not part of the genre. Go back far enough and you'll find non-mensural music, such as Gregorian and pre-Gregorian chant. The music of the trobadors (11th-13th centuries) was not noted with rhythm, and there's an argument made that that's how it was performed. More recently (16th century) you'll find mensural music (i.e. music in measures) where the music changes time signature very frequently. Like, randomly. There was an artistic movement in late 16th century France called musique mesureé, which was somewhat experimental: it was a form in which the composer set poems by deriving the note durations from the rules (as they were understood at the time, and applied to then-modern French) of classical Latin poetry. Astonishingly for such a mechanical gimmick, the resulting music is very attractive and accessible, even while having pretty much completely randomly fluctuating bar lengths. In my experience, modern editions of the scores for this repertoire don't even bother notating the time signature, or handwave through it by calling the beat the measure and declaring the piece to be in some flavor of 1. (I think a crucial part, for modern musicians, of learning to play Renaissance music is getting over the anxious need to be told what time signature you're in all the time.) I understand something similar is true for znamenny polyphony, for which basically measure marks are lies, despite being contrapunctal and even in a sense syncopated. (My director: "I, uh, will be beating 1, unless any of you guys have a better suggestion.") And then there's modern music which, really, can do whatever it feels like. I remember from when I was a kid, but now cannot find, a really gorgeous concert band piece* which was in 4/4 except whenever it felt like it being in 5/4. So, yes, measures can have whatever number of beats in them the composer wants to put in them, and composers can dispense with measures altogether. But that? Your example? That's a pick-up, as explained above. * Named, completely unhelpfully, "Passacaglia". And I have no recall of the composer. It should also be noted that older/traditional music styles do not always fit our rigid definition of measures. As with time signatures like 13/16, this is sometimes because they were written to go with specific dances and reflect the fact that some steps really do take a bit longer than other steps. It may also simply be that the composers/performers/dancers didn't feel as strongly that everything had to occur on a perfectly regular downbeat. (I've seen a number of instances of this in music from the middle ages.) And I've seen departure from regular measure lengths in more recent pieces, where the author/composer deliberately chose to do something unexpected... which, after all, is part of the definition of good music; set up expectations, then artfully break them in a way that seems reasonable. But in those cases the sheet music will generally have an explicit time signature change. but even (or uneven) something like 13/16 should retain that meter all through, otherwise there's no point in having the time stated at the beginning. As I said: It's common practice now to mark the change. In older music, it wasn't always... and, of course, there wasn't always a time signature stated at the start. ... And of course there's always the risk of transcription error. @keshlam, I think you're confusing complex meters like 13/16 with time changes, and both with nonmensural music. I mentioned all three, citing the complex meters in passing in showing one reason that single-measure time changes occur and attempts to represent nonmensural in modern notation as another. I probably didn't distinguish them clearly enough; that's a valid critique. As others have noted, yes, this is a pick-up measure and is valid. However, in this specific case, I believe the Song of Time is not scored with a pickup measure. This is how I would score it: In The Legend of Zelda: Ocarina of Time, all ocarina melodies start on a full measure. Nope, you're both wrong. Excepting the first two bars and their restatement, the piece is in 3/4. Note the eighth note F in the first line is a down beat, as is the subsequent eighth note C. @all: Song of Time is actually the first phrase part from Temple of Time. I don't know if that Temple of Time sheet is official or not. The only official music sheet that closely resembled to song of time's tune is Door of Time / 時の扉 and it's in 3/4. Maybe you can look at the Temple of Time's video link and tell me if it's 3/4 or 4/4. The Song of Time (and extended Temple of Time) are definitely both 4/4 time...
STACK_EXCHANGE
Tim Berners-Lee Sir Timothy John Berners-Lee OM KBE FRS FREng FRSA (born 8th June 1955) is a British computer scientist, who is credited with inventing the World Wide Web. On 25 December 1990 he implemented the first successful communication between an HTTP client and server via the Internet with the help of Robert Cailliau and a young student Events in the Life of Tim Berners-Lee. 1989-03-12 Computer scientist Tim Berners-Lee submits his first proposal for an "information management system" to his boss at the European Organization for Nuclear Research (CERN) who finds it “vague, but exciting” The vision of Tim Berners-Lee's vision of the future Web as a universal medium for data, information, and knowledge exchange is connected with the term Semantic Web. In 1999 he wrote: "I have a dream for the Web in which computers become capable of analyzing all the data on the Web—the content, links, and transactions between people and Tim Berners-Lee first started to come up with code for his WWW project in 1990. The first mention of him working on code for processing HyperText can be found in the original HyperText.m file that Tim worked on, dated 25th September 90. Jeffrey Powers @geekazine aggregator, albert vezza, cambridge massachusetts, day in tech history, delta, edward r murrow, first meeting, flights, historical events, IBM, institute of technology, massachusetts institute of technology, massachusetts institute of technology mit, Microsoft, microsoft releases windows, MIT, Podcast, technology Tim Berners-Lee. It was the English scientist, Sir Tim Berners-Lee, who invented the World Wide Web in 1989 while working at CERN in Switzerland. It used a technology called Hypertext Transfer Protocol (HTTP) that transmitted data over TCP/IP, which is why all URLs start with “HTTP” to this day. Tim Berners-Lee was born in London, England, in 1955. He holds a B.A. in physics from Oxford University (1976). While working as an independent contractor at the European high-energy physics laboratory (CERN) in 1980, Berners-Lee built a prototype system for document sharing among researchers based on hypertext called ENQUIRE. Tim Berners-Lee is the inventor of the Web. In 1989, Tim was working in a computing services section of CERN when he came up with the concept; at the time he had no idea that it would be implemented on such an enormous scale. Particle physics research often involves collaboration among institutes from all … The World Wide Web: The Invention That Connected The World Feb 11, 2019 Tim Berners-Lee | Biography, Education, & Facts | Britannica Jun 04, 2020 The World Wide Web: A very short personal history Tim Berners-Lee. In response to a request, a one page looking back on the development of the Web from my point of view. Written 1998/05/07. The World Wide Web: A very short personal history. There have always been things which people are good at, and things computers have been good at, and little overlap between the two. The World’s First Web Site - HISTORY
OPCFW_CODE
package it.synthema; import java.util.ArrayList; import java.util.List; /** * * Builds an SubRip (.srt) file from a speech transcription passed as argoument. * * @author ercole * */ public class SrtBuilder { //private final static Logger log = Logger.getLogger(SrtBuilder.class.getName()); private long max_silent_threshold; private int max_characters_length; private double max_srt_line_duration_factor; /** * Generates the srt from the array of transcriptions. * * @param list * @return Srt object from where the srt file can be build. */ public Srt build(List<TranscriptedWord> list) { // log.debug("Start Building"); if(list.size()==0) throw new IllegalArgumentException("Cannot create an srt if the transcription list is empty"); //recursion base case. //if the list is composed by only one word I can't split more. if(list.size()==1) return new Srt(list); // I split the transcription sequence if // I find an enough big silent interval Split max_silent_interval = new Split(list); // if max_silent_interval is null means that there is not an enough big // silent interval in the sequence, in that case I split if the sequence // is // too long in terms of number of characters including whitespaces. if (max_silent_interval.getMaxSilentInterval().getTimeInterval() <= max_silent_threshold) { if (getCharacterNumber(list) <= max_characters_length) { return new Srt(list); }// else I will split, which is performed after the end of next two // if branches } return merge(this.build(max_silent_interval.getLeft()), this.build(max_silent_interval.getRight())); } /** * Merges the two srt passed into the returned one. The merging phase is * performed by adding to the srt lines of srt1 with the srt2, the ordering * is preserved. So the srt lines of srt1 will be before of the lines of * srt2. Another feature is that the last srt line is extended in time until * it reaches the maximum srt line duration or the starting time of the * first srt line of srt2. * The srt passed must be well formed. Otherwise the method returns a generic RunTimeException. * * @param srt1 * First srt to merge. * @param srt2 * Second srt to merge. * @return Merged srt. */ public Srt merge(Srt srt1, Srt srt2) { List<SrtLine> lines1 = srt1.getLines(); List<SrtLine> lines2 = srt2.getLines(); ArrayList<SrtLine> ret_lines = new ArrayList<>(); if(lines1.size()==1 && lines2.size()==1 && lines1.get(0).getSecondLine()==null && lines2.get(0).getSecondLine()==null){ //case where the both srt have only one srt line and both //do not contains the second line //then I must merge them into one single SrtLine ret_lines.add(new SrtLine(lines1.get(0).getFirstLine(), lines2.get(0).getFirstLine(), lines1.get(0).getStart_time(),lines2.get(0).getEnd_time())); }else{ //otherwise I simply join the srt lines and //I adjust the end time of the last line of the first srt // iterate lines1 and skipping last srt line for (int i = 0; i < lines1.size() - 1; i++) { ret_lines.add(lines1.get(i)); } // setting the new ending time of the last srt line of srt1 SrtLine last_line = lines1.get(lines1.size() - 1); long first_line_start_time = lines2.get(0).getStart_time() - 2; last_line = adjustLastLine(last_line,first_line_start_time,this.max_srt_line_duration_factor); ret_lines.add(last_line); ret_lines.addAll(lines2); } return new Srt(ret_lines); } /** * Modifies the ending time of the SrtLine passed as a parameter and returns it. * The starting time of the next srtline is passed as parameter. * The following contraints are followed:<br> * - The result strline will not overlap with the next line srtLine. <br> * - The duration of the srtline will not exceed the original duration multiplied the factor passed as parameter. * * @param last_line The SrtLine to modify. * @param next_line_start_time The starting time of the next line of the srt. * @param max_duration_factor Maximum SrtLine duration factor. * @return The modified last_line object. */ public static SrtLine adjustLastLine(SrtLine last_line, long next_line_start_time, double max_duration_factor) { long new_possible_end = last_line.getStart_time()+(long)(((double)last_line.getDuration()) * max_duration_factor); long new_end_line; if(new_possible_end<next_line_start_time) new_end_line = new_possible_end; else new_end_line = next_line_start_time; last_line.setEnd_time(new_end_line); return last_line; } /** * SrtBuilder constructor which takes some parameters. All the time values are intended in milliseconds. * @param max_silent_threshold Maximum Silent interval possible inside an srtline. * @param max_characters_length Maximum number of characters inside a line of an srtline. * @param max_srt_line_duration_factor When the srt lines are created, the end time where they are displayed * could not be the same end time of the last transcripted word of the srt line. * If there are not other following transcriped words, which start time could overlap, * the duration of every srt is enlarged by this factor. * Is required a value bigger than 1. * * */ public SrtBuilder(long max_silent_threshold, int max_characters_length, double max_srt_line_duration_factor) { super(); this.max_silent_threshold = max_silent_threshold; this.max_characters_length = max_characters_length; this.max_srt_line_duration_factor = max_srt_line_duration_factor; } public static int getCharacterNumber(List<TranscriptedWord> list) { int sum = 0; for (TranscriptedWord word : list) { sum += word.word.length(); } sum += (list.size() - 1); return sum; } }
STACK_EDU
Reflection / Burn Logic Hey team, Great work so far. I have seen people ask these questions, and I have also wondered myself, but if you can help us understand this, that would be great. Is the burn + reflection logic manual? If not where in the contract does this logic reside? Or, is the burn and reflection logic residing in another contract at another address? It is understood, that the 10% tax exists in the contract, as the two fee variables declared in the token contract: contract SafeMoon is Context, IERC20, Ownable but where do we find the logic to: Reward tokens to holders, weighted by how many tokens they hold. I see references to _rOwned and _tOwned Is it these two functions: function _transferToExcluded(address sender, address recipient, uint256 tAmount) private { (uint256 rAmount, uint256 rTransferAmount, uint256 rFee, uint256 tTransferAmount, uint256 tFee, uint256 tLiquidity) = _getValues(tAmount); _rOwned[sender] = _rOwned[sender].sub(rAmount); _tOwned[recipient] = _tOwned[recipient].add(tTransferAmount); _rOwned[recipient] = _rOwned[recipient].add(rTransferAmount); _takeLiquidity(tLiquidity); _reflectFee(rFee, tFee); emit Transfer(sender, recipient, tTransferAmount); } function _transferFromExcluded(address sender, address recipient, uint256 tAmount) private { (uint256 rAmount, uint256 rTransferAmount, uint256 rFee, uint256 tTransferAmount, uint256 tFee, uint256 tLiquidity) = _getValues(tAmount); _tOwned[sender] = _tOwned[sender].sub(tAmount); _rOwned[sender] = _rOwned[sender].sub(rAmount); _rOwned[recipient] = _rOwned[recipient].add(rTransferAmount); _takeLiquidity(tLiquidity); _reflectFee(rFee, tFee); emit Transfer(sender, recipient, tTransferAmount); } includeInReward is the only function I see that is making use of iterating over accounts, aside from _getCurrentSupply Does the includeInReward function set who in included in the reflection? function includeInReward(address account) external onlyOwner() { require(_isExcluded[account], "Account is already excluded"); for (uint256 i = 0; i < _excluded.length; i++) { if (_excluded[i] == account) { _excluded[i] = _excluded[_excluded.length - 1]; _tOwned[account] = 0; _isExcluded[account] = false; _excluded.pop(); break; } } } secondly, where is the logic to Burn the other 50% (from the 10% tax). I see the burn function, and event declared but don't see where they are invoked. I also don't see the burn event is ever emitted in this contract. These are pretty straightforward answers, and I think this will help quite a bit of supporters, including myself. Any help on this would be greatly appreciated Love the approach overall 🚀 How it works 10% Tax Here The Owner has exkluded himself https://github.com/Safemoon-Protocol/safemoon.sol/blob/0f0aef2f4e6ca00d6a46ca6ea60caa4d36c5fd6f/Safemoon.sol#L1114 Everybody else pays the tax https://github.com/Safemoon-Protocol/safemoon.sol/blob/0f0aef2f4e6ca00d6a46ca6ea60caa4d36c5fd6f/Safemoon.sol#L1134 5% gets added to the LP https://github.com/Safemoon-Protocol/safemoon.sol/blob/0f0aef2f4e6ca00d6a46ca6ea60caa4d36c5fd6f/Safemoon.sol#L964 the contract adress gets the LP Fee _rOwned[address(this)] = _rOwned[address(this)].add(rLiquidity); 5% Token gets Burned https://github.com/Safemoon-Protocol/safemoon.sol/blob/0f0aef2f4e6ca00d6a46ca6ea60caa4d36c5fd6f/Safemoon.sol#L921 the burned amount just gets added to an variable and are taken away from the total supply variable as per this code there is no Burned Wallet you just get returned an int of the total burned amount. Whats the 0 Adress? With no surprice the owner of the contract himself. https://github.com/Safemoon-Protocol/safemoon.sol/blob/0f0aef2f4e6ca00d6a46ca6ea60caa4d36c5fd6f/Safemoon.sol#L481 and here is how the owner will rugpull out set Tax to 100%, 0% 50% He can make prevent people of pulling out of this contract https://github.com/Safemoon-Protocol/safemoon.sol/blob/0f0aef2f4e6ca00d6a46ca6ea60caa4d36c5fd6f/Safemoon.sol#L899 Think about if Satoshi could just 100% the fees all by himself BTC would be worthless This Code was copypasted and a couple of lines are changed. So there is some dead code in ther that dosent run or is runned by another contract but i dont know ether. Unfortunately, he can. This is often used as a defense mechanism against hackers, but it can be updated with a better one. Many lines should be included https://www.certik.org/projects/safemoon it got adressed here aswell. its an issue that needs to be adressed https://www.certik.org/projects/safemoon it got adressed here aswell. its an issue that needs to be adressed For example people buy and then they change tax to 100% and now only owner can sell tax free right? but if owner have no tokens to sell and liquidity is all burned into dead wallet. This happened with me but i am not sure how owner can rug? when he have no tokens and liquidity is burned how he can be benifit by stoping us from selling?? kindly reply https://www.certik.org/projects/safemoon it got adressed here aswell. its an issue that needs to be adressed For example people buy and then they change tax to 100% and now only owner can sell tax free right? but if owner have no tokens to sell and liquidity is all burned into dead wallet. This happened with me but i am not sure how owner can rug? when he have no tokens and liquidity is burned how he can be benifit by stoping us from selling?? kindly reply You are right. It is a centraliced company that tries to sell decentralized crypto but holds all the cards. example: if i am an exchanche and i want to sell safemonn on my own bag they can just shut down the exchange adress. Other mayor tokens have given that power away by using a concens algo. to add: the owner has acces to the liq pool and they can swap and liquify to bsc. the safemoon dev is selling his bag aswell. the owner has not implemented the reflection code in this repo. we dont really know how it works. they could take 50% of all reflections and no one would know. if the private key of the owner ever gets leaked its over. its a one point of failure. Videos of the safemoon company employed have surfaced where they eat golden beef. wen v2 Hi, I implemented burnable safemoon token in this repository https://github.com/mamadeusia/BurnableReflectionToken I also checked the functionality of code with python in jupyter-lab. if you have problem with it feel free to ask .
GITHUB_ARCHIVE
Assigning organizational unit permissions To enable a user to view, edit, or delete tickets that all belong to the same organizational unit, you can configure a field as an organizational unit for a contact in the address book and assign appropriate container role permissions. For information about setting a field as an organizational unit for a contact, see To configure a contact item. Setting a field as an organizational unit is useful when supervisors want to view or update all the tickets created by users of their department. It saves time because users with the same organizational unit can implement a resolution or respond to a customer for all tickets of their organizational unit without a delay. After you set a field as an organizational unit in the container role, you can configure the View, Edit, and Delete item permissions for the organizational unit. For example, you have created a field named Company in both workspace tickets and address book contacts. For user John Smith, the field Company is set as an organizational unit in his contact record, with the value Calbro Services. You have configured a Ticket/Contact relationship between the workspace and the address book. This relationship enables John to view all the tickets in the linked container that have the value Calbro Services in the Company field. The following topics are provided: - By default, the View, Edit, and Delete permissions are set to No for the agent and customer roles. However, you can assign these permissions to the agents and customers. You cannot assign organizational unit permissions for the Guest role. - You can assign the organizational unit permissions for: - ticket items in workspaces - all items in CMDB - for all the items except work targets in Service Portfolios. - If you use an external address book (such as LDAP, SQL, Salesforce, and so on), the information about the contacts in that address book is not stored in the FootPrints database until one of the following events occur: - The contact is linked to a ticket. - The user associated with that contact logs in to the system. The organizational unit feature uses the information stored in the FootPrints database. Even with organizational unit permission, users who have never been linked to a ticket or have never logged in to the FootPrints system cannot view or edit any records. Sending an email message is the most common example of a user trying to view or edit an existing ticket, without logging into the system. - If the organizational unit value has changed in the external address book, FootPrints does not recognize it until the next time the user is linked to a ticket. Linking an external contact to a ticket updates all contact information stored in the FootPrints database. For example, Department is the organizational unit for a user that works with the Marketing department. After some time, the user moves to the Sales department, but the local copy of the contact still has Marketing as the organizational unit value. This value is updated to Sales the next time the contact record is linked to a ticket. - If you have multiple address books linked to a container, and these address books store copies of contacts for the same user, the system sorts all the address books based on the date on which the address book is created. By default, the system sorts in oldest to newest order until it finds the first matching contact for a user. The system considers this contact as the user contact and applies the organizational unit permissions for that contact. - If you want to change the sorting order of the address books, you must contact BMC Support. Before you begin - Ensure that you have configured a field on the user's contact item as an organizational unit. - Ensure that you have configured a field on your ticket record definition with exactly the same name as the organizational unit field you configured on the user's contact item. - Ensure that you have configured a relationship between the container and the address book. For example, a Ticket/Contact relationship between a workspace and address book. For more information, see Configuring relationships. - On your container form, use the relationship that you defined between the container and the address book to configure a link control. Make sure the field selected as the organizational unit on the user's contact item is selected as a Linked Field. For more information, see Configuring link controls on forms. To assign Organizational Unit permissions - Click the Administration tab. - In the User Management section, click Roles. - Select the container role that you want to modify and then click Edit. - Click the Item Permissions tab and expand the Ticket section. - To assign the viewing permission, in the Permission Name column, expand Viewing. - Select Organizational Unit and set the Access Level to Yes. - To assign the editing permission, in the Permission Name column, expand Editing and repeat step 6. - To assign the deleting permission, in the Permission Name column, expand Deleting and repeat step 6. - Click Save. The new permissions are applied immediately for all users assigned to the respective container role.
OPCFW_CODE
I like colors. The more saturated the better. Many years ago I bought several color power LEDs and had it took me this long to find a way to use them. I have now built seven lamps with power color LEDs. One of each color I have. |LED Part number||Color name| Each lamp consits of a hollow oak shell with an LED inside. The base is 7cm square and the lamp is 9cm high. All sides have a 2cm wide and 2.5 cm high cutout for connecting the power cable. The electronics in the lamp are based on the Femtobuck LED driver from Sparkfun. It is a circuit using the AL8805 from Diodes Incorporated. I use it in the default configuration to drive each LED with a constant current of 330mA. The circuit inside each lamp is basically a Femtobuck board, an LED, and a power connector. I did however break two LEDs due to connecting the power with reversed polarity, so I added a P-channel MOSFET-transistor to prevent that. The circuit did also use twice the amount of current for a short while when connecting the power, so I added an RC-circuit as a slow-start control voltage to the LED driver. This all resulted in the following schematic: The insides of the lamp is a 3D-printed encapsulation box, a lens and mounting bracket for the LED, and the LED driver inside. This is what it looks like without the outer shell: I have a CCS100 compact specrometer from Thorlabs and have used it to measure the power spectrum of each LED lamp. There are however many sources of errors. The LEDs are several years old and some of them were broken or degraded. This can be from bad handling on my part, or old age. I have also only measured one LED of each kind and my measuring setup is not checked for repeatability. The brightnes of the LEDs wary debending on how I direct the lamp as well. I measured using the following setup, with the difference that the overhead lights are turned on in the photo below: I measured the specrum of each LED and got the following result: Each power spectrum can then be used to calculate the chromacity coordinates for the CIE 1931 2° Standard Observer. I got the following results: The Y column gives a general sense of the apparent brightness of each LED. It is however probable that my test setup is not stable enough to compare this between the LEDs. The lens in the lamp has some high-spots witch make it hard to align repeatably. To get a sense of what the x and y numbers mean, I have put them in the xy chromacity diagram: I put the sRGB gamut triangle in the diagram as a reference. It makes it easier for me to get a sense of how the colors compare to a computer screen or a regular RGB LED. I like the results of this project. The lamps look good and I can now reference various colors more easily. The LEDs are hard to power and the Femtobuck made that easy. I also have more experience with building things of wood and with 3D-printing. The spectrum measurements are also good to have as it gives me a good idea of what I can expect of future projects with power color LEDs. ©2023 Mats Mattsson
OPCFW_CODE
Cloud computing technology (e.g. Microsoft Cloud) means storing and accessing data and programs over the internet, instead of using the hard drive of computer. It means the delivery of different services through the internet. These resources include data storage tools, programs, servers, databases, networking, and software. Amazon Web Services (AWS) and Microsoft Azure are at the forefront of this service area. Amazon Web Services (AWS) and Microsoft Azure are described below briefly: Amazon Web Services (AWS) Amazon Web Services (AWS) is a stable cloud services platform that provides computing resources, database storage, content distribution, and other features to help companies expand and scale. In easy terms, AWS helps the server to run the web and application servers in the cloud to host dynamic websites. By being the first to market and the more developer friendly AWS has developed into the largest cloud services provider. It provides long-running and reliable storage services like AWS S3, EBS, and Glacier. AWS S3 guarantees high accessibility and automatic replication across regions. Amazon Virtual Private Cloud (VPC) allows independent networks to be built under the Cloud umbrella. Microsoft Azure is cloud computing software developed by Microsoft to develop, test, deploy, and manage applications and services via data centers operated by Microsoft. It is a comprehensive suite of cloud products that allow users to create enterprise-class applications without having to build out their own infrastructure. Azure offers enhanced tools for the companies which are already participating in Microsoft products bringing an existing network into the cloud. It provides long-running and reliable storage services like Blob Storage, Disk Storage, and Standard Archive. It uses temporary storage and page blobs for VM volume. Azure has Block Storage option as a counterpart to S3 in AWS. As a counter part of VPC, Microsoft Azure Virtual Network does all the things which is done by VPC. Cloud computing services are differentiated into four categories such as: infrastructure as a service (IaaS), platform as a service (PaaS), software as a service (SaaS) and function as a service (FaaS). Often these are called the cloud computing stack, since they build on top of each other. These different types of cloud services and their benefits are described below: Infrastructure as a service (IaaS) The first form of cloud computing is infrastructure as a service (IaaS), which is used to access storage and processing resources through the internet. IaaS which is the most common category of cloud computing types rents the infrastructure from a cloud provider on a pay-as-you-go basis-servers, virtual machines, storage, networks, and operating systems. The main advantages of Infrastructure as a Service are scalability, cost-effectiveness, pay-on-demand for utilities, location independence, redundancy and the security of your data. It can provide better security than your existing software. Platform as a service (PaaS) The second type of cloud computing is platform as a service (PaaS) which gives developers the tools which is needed to create and host web apps. It is designed for providing the users with the access to the components. It runs web or mobile applications over the internet. It works with the setting up or maintaining the underlying server, storages, network components and database infrastructures. It has ability to improve a developer’s productivity. It provides direct support for business agility by enabling rapid development with faster and more frequent delivery of functionality. Software as a service (SaaS) The third form of cloud computing is software as a service (SaaS) that is used for web applications. It is a way of distributing software applications over the internet, where cloud services host and administer software applications. That function makes it simpler for all users to use the same functionality. It gives all users to access in the cloud when is needed. This will help every company to save money, time and human capital. It can deliver simplified focus and enhanced efficiency by eliminating problems such as system maintenance and incompatibility. Function as a service (FaaS) The fourth form of cloud computing is function as a service (FaaS) that offers forum to the customers. It helps to create, operate and manage application functionalities without the difficulty of constructing and managing the infrastructure. It is associated with the creation and the launch of an application. It focuses more on code, not infrastructure. It can split the server into functions that can be automatically and independently scaled to avoid network management. It helps you to focus on the application code which can significantly reduce the time to market. If you are looking for IT support to set up your company with a cloud technology then make sure to contact us for a free consultation.
OPCFW_CODE
Frequently Asked Questions The following lists questions and answers about using WSE. Visual Studio .NET 2003 - Question When using Visual Studio .NET 2003 to create a client application that uses a proxy that inherits from the Microsoft.Web.Services2.WebServicesClientProtocol class, why does updating a Web reference always cause the proxy class to inherit from the System.Web.Services.Protocols.SoapHttpClientProtocol class? - Answer This is a known issue, and cannot be avoided when using Visual Studio .NET 2003 to create a client application. Whenever you update Web references for a WSE client, you must change the base class of the proxy class back to the Microsoft.Web.Services2.WebServicesClientProtocol class. - Question When I click Repair in Add or Remove Programs, the Setup program does not remove or update the Microsoft.Web.Services2.dll file if an old version exists. - Answer Repair does not remove or update existing files; it only replaces missing files. Instead, use Add or Remove Programs to remove the program, and then add it to get a new version of the .dll file. - Question If the Microsoft.Web.Services2.dll assembly is in use by another process, such as ASP.NET, the .dll file is not removed when I click Remove in Add or Remove Programs. - Answer Ensure that no other process has locked the.dll file before clicking Remove. - Question When I attempt to run a Web service in Visual Studio .NET 2003, it fails with the error message "Unable to Start Debugger on the Web Server". - Answer Open a command prompt window, and then change to the following directory on the Visual Studio installation CD: wcu\dotNetFramework. From this directory, run the following command: - Question Why are target host names always lowercase? For example, using the following URI: Proxy.Url = http://LOCALHOST/genericTestService/genericTestService.asmxResults in the following outgoing message: - Answer WSE changes URL host names to lowercase by design. The host names in URI are not case-sensitive. The semantics of WSE and of HTTP are not affected in any way by this change. - Question Does WS-Security make my Web service secure? - Answer WS-Security is not a complete security solution in and of itself. It is a protocol for exchanging security information between message senders and receivers. Developers need to design an appropriate security solution and deal with potential threats, such as replay attacks. - Question How should a mustUnderstand custom header be processed? - Answer Users who want to process a mustUnderstand header should use a custom input filter, not content-based routing. The mustUnderstand fault checking is done before the ProcessRequestMessage method is called, so attempting to use content-based routing will always result in a mustUnderstand fault at the client. - Question Can I use the Certificate Creation tool (Makecert.exe) included in the .NET Framework SDK to create a test certificate that supports digital signing? - Answer Yes. An example command line is as follows:
OPCFW_CODE
Hackathon 2023: The capabilities and limitations of ChatGPT integration These are the six obstacles we encountered ChatGPT has been live for a while now. The possibilities are clearly limitless, and it is expected that more and more companies will start using this powerful tool in the coming years. We scheduled a day in our agendas to work together on three ideas using the ChatGPT API. The goal is to create a functional app in just one day (while also experimenting with new techniques and gaining valuable experience in the process). Written by Linda Let's start with product design Over the past few weeks, we've collected ideas, and today we are starting to work on three of those ideas in three groups (consisting of 4 or 5 people). It feels a bit unfamiliar for some of us since we usually receive product designs from our clients, and coming up with something completely from scratch is new to many of us. We went through some rough sketches, but the ideas are quickly taking shape. The building phase The groups know what they want to create, and tasks have been assigned based on individual interests. Coding can now begin. While one person delves into documentation to explore the possibilities of a specific tool, another sets up the backend, and someone else searches for the latest version of Node. Outsourcing to AI In the spirit of this hackathon, various tasks are being outsourced to Artificial Intelligence. For instance, you can ask ChatGPT to generate a list of options instead of coming up with a name yourself. The body of your website can be written in no time, and you can generate a logo using Logomaster. You can even ask ChatGPT to generate a prompt that can be later sent back to ChatGPT as input (a kind of ChatGPT-ception). The "hack" part of the hackathon In our usual work, we focus on aspects such as software stability, security standards, and clean problem-solving. However, none of that matters today. Solutions can be "dirty" and hardcoded, as long as they work (in the short term). For example, there's no need to write tests to ensure quality. Some people enjoy this freedom: "Finally, we don't have to worry about being dry!" However, others struggle with it, saying, "This is not how things should be done." Nevertheless, it's a welcome change, and it helps that we know we won't have to continue developing on top of this mess. These are the six specific challenges we encountered - ChatGPT tends to truncate long responses without warning, especially when requesting a specific response. This can break your entire application. To address this issue, search online for "Langchain OutputFixingParser" or ask one of us for help. - There is no built-in way within ChatGPT to store chat history and retrieve it later using the API. So if you want to save chat history, you need to store it using alternative methods. - The default "gateway timeout" (the maximum time a server waits for a response) is set to 30 seconds. ChatGPT often takes (waaaaaaaaaaay) longer than that to respond. Standard deployments don't work well because you have to wait longer. Therefore, we had to come up with alternative solutions. - ChatGPT lacks awareness of its own limitations. You can ask for specific things that ChatGPT claims to be able to provide, but the answers may be illogical, inconsistent, or even completely absent. - Combining fiction with intelligent creativity is challenging. We asked ChatGPT to create a riddle (which turned out to be a beautiful riddle), but it turned out that ChatGPT itself didn't have an answer to that riddle. - Due to the slowness of ChatGPT, end-users have to wait a long time for results. We are accustomed to immediate results, but with ChatGPT, there can be a waiting time of 2+ minutes. One of our solutions was to use Davinci for generating fun facts during the loading screen, as it often provides a response within 10 seconds. At quarter past 6, the moment has arrived: the presentation of the various applications built today. Out of the three groups, two have managed to produce a functional end product. Below, you can see what each team has created. The conclusion among the developers today was clear: a hackathon is fun, incredibly intense, and it's great to "go all out" in production without worrying too much about quality. As an added benefit, we have learned a lot about the possibilities (and limitations) of ChatGPT, and we can't wait to implement this knowledge into one of our future projects. Interactive story - RPG-style Team members: Bauke, Adriaan, Martijn, Remco, Rick The idea is to utilize GPT as an interactive storytelling tool. One or more players should be able to create a character, choose a setting, and potentially select a genre. The goal is to have the story respond to player input and generate subsequent scenes based on that input. In this case, the limitations of ChatGPT became apparent, particularly in terms of answer consistency. It proved impossible to build upon a previously generated story, riddles created by the chat had no solution, and the stories remained vague and disjointed. The group discovered that providing extensive restrictions and requirements in the prompt was necessary, but even then, a cohesive story did not emerge. However, a successful aspect was the generation of "player cards" using a text-to-image API. Travel Agency for Round Trips Team members: Roland, Raymond, Robin, Siebe The idea is to utilize GPT to propose a round trip. The user selects a destination and the number of days. GPT then generates an itinerary with locations and highlights of those locations. By initially asking ChatGPT to provide content for a website (including product USPs and three reviews), the website for "JournAI" quickly became visually appealing. The team worked extensively with hardcoded trips (for a while only Vietnam was available) to ensure that they could address different aspects, even if the API integration was not optimal. A beautiful initial foundation for an application to prepare travel itineraries. Users can enter their preferences in the text field, and a map with a suitable guide is generated (although according to ChatGPT, a "beach vacation" entails traveling the entire Mediterranean coast in 6 days). Team members: Ewout, Bren, Merel, Olaf, Ted The idea is to use GPT to automatically present a kind of DuoLingo e-learning program to the user based on a text or topic they provide. For example, the user enters the Wikipedia text about elephants, and then GPT turns it into a course including quiz questions. Because there is a wait time for ChatGPT to assemble the e-learning content after entering the original information, this group decided to add "something" to the loading screen. They chose to fetch facts about the entered topic from Davinci, providing the end user with something to engage with within 10 seconds instead of just staring at the loading screen. After a considerable wait, a beautiful e-learning course is generated. It includes subjects at three levels and a quiz question for each section of the text. The questions can be humorous, such as "What is your favorite dinosaur?" (which we got wrong, by the way). Additionally, the multiple-choice answers can be creatively filled in; for example, one of the options for a question about the function of an elephant's ears was "to fly."
OPCFW_CODE
I was playing around with "User accounts" and somehow set automatic login. Now, when I start my PC, it just has one button named as "login". Clicking that button, directly logs me in to my PC. There is no music or no asking for password while logging in. As a side effect, it asks me separately for keyring password How to disable auto login and make login/keyring password unified again like before? NOTE: Attempting to disable Automatic Login from System Settings > User Accounts does not work. This is the content of my /etc/lightdm/lightdm.conf (where I have commented the autologin for my username mgandhi): gksudo gedit /etc/lightdm/lightdm.conf It displays some text as follows: I had the same problem and it was solved by the command: sudo gpasswd -d USER nopasswdlogin where you should change USER by your user name. gksu gedit /etc/lightdm/lightdm.conf You should see something similar to this: Remove or put a # at the start of each line containing autologin and save/exit and reboot to test. Disable Automatic Login Go to your terminal and enter this: It will ask you with your new Unix password and solved. Then, later, you can go to your user account and change anything. I am running 12.10 and I tried everything in this thread and nothing worked. Eventually I tried deleting suspicious lines in lightdm.conf and was successful: Run gksu gedit /etc/lightdm/lightdm.conf gksu gedit /etc/lightdm/lightdm.conf I can't remember the exact line because I have since deleted it, but it's something like autologin-lightdm=true. Delete it. Hope this works! It's rare that @duffydack answer does not fit you... try this: on a terminal do: sudo vi /etc/gdm/custom.conf sudo nano /etc/gdm/custom.conf Your file should now look something like this: edit AutomaticLoginEnable=true to AutomaticLoginEnable=false that's other option. Ok, finally the problem is solved. I got an email from an occasional stack-exchange visitor Mr. Rafter. Following is the way: sudo grep nopasswd /etc/* This will display at least 2 lines: Edit those files with sudo and remove only <login name> from those lines and save. Figured out an answer: System Settings > User Accounts. Select your user and disable automatic login. was playing around with "User accounts" and I made same mistake, I got in same hole. You just set your system to no-password when "playing around". That's why your system log in (auto or not) without asking for password, and this is the same reason keyring system asks for it (cause doesn't received from the system). So, action jackson: CLICK THE PASSWORD BOX (its a hidden button, oh devil UI) this open a dialog box (!) where u probably hit the "login without password" option AND with automatic login option ON too (outside this dialog, on User Accounts screen). SO:... When you tell the computer to log in without a password (ITS NOT the same as Automatic Login) he does exactly this , log in, without password. So password = nothing. Then keyring goes crazy. (aha!) So, choose the right option now: Set a password now and after doing all the entry password stuff, simple hit enter, set automatic login off, to feel again the pleasure of having a password, and if you want it on again, ok, turn it on, but dont log in without a password again. =D I hope this helps you to solve your problem. Was a good lesson to me. Bad UI to Ubuntu -1 on this case. Open the file /etc/group (vi /etc/group) and find the group 'nopasswdlogin'. you will see your user name in that group. comment out that line (inserting # before) or just delete the line. This should do it. My /etc/lightdm/lightdm.conf was basically empty: It worked with: I changed to gnome classic fallback, I auto-login but it defaults to unity 3d, how do I make auto-login default to gnome classic? auto log-in gnome classic desktop : sudo apt-get install gnome-session-fallback sudo /usr/lib/lightdm/lightdm-set-defaults -s gnome-classic Run the following in a gome-terminal: If you want GNOME Classic with effects: If you want GNOME Classic without effects: sudo /usr/lib/lightdm/lightdm-set-defaults -s gnome-fallback
OPCFW_CODE
Go see it!! It’s a solid success for all departments here at Blue Sky Studios, a true testament to the hard work everyone put into this movie! Make sure to subscribe to our newsletter to receive all the details. Limited time only! The post Summer fun with Rigging Dojo – Solve the puzzle to see the news appeared first on Rigging Dojo. " We do not know what we do not know " I download pics I like from the internet and put them in a pix/ref directory for later. I have a bunch of these, and they are automatically backed up to my box.net account. Sometimes I really like artists and want all their images. For example, http://www.loish.net, or http://www.sergebirault.fr/portfolio.php, or in my latest case, http://www.du-artwork.de/. I really love Daniela’s work, but I hate her site. (ok, I don’t hate it, It’s just kind of a hassle and I’d rather view the work in windows picture viewer or adobe bridge.) I was right-clicking/downloading a lot of the images, and thinking to myself, “this is really repetitive.” I don’t like repetitive things. Well anyways, since she has named the images in sequential order 1-40 and had 3 different galleries, I banged out a quick script that will grab all of the images, and save them to your hard drive in a specific location. The script was thrown together in < 30 mins, so there is no error catching and could be optimized more, but who cares. I have some ideas for a V2 to make it better, but for now here it is: p.s. to the artists, don’t make it easy for me to download your images! """ This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. """ #tested on windows 7, running python 2.7 #grabs all images and saves them into a directory import urllib output_dir = "C:/Users/shawn/Desktop/Desktop-02-13-2013/art/art ref/daniela_uhlig/" output_file = "daniela_uhlig_" web_site = ["http://www.du-artwork.de/Gallery_1/bild/","http://www.du-artwork.de/Gallery_2/bild/","http://www.du-artwork.de/Gallery_3/bild/"] for gallery in web_site: gallery_num = web_site.index(gallery) + 1 for x in range(40): file_url = gallery + str(x + 1).zfill(2) + '.jpg' print file_url img = urllib.urlopen(file_url) output = open(output_dir + output_file + str(gallery_num) + "_" + file_url.split('/')[-1] + '.jpg', 'wb') output.write(img.read()) img.close() output.close() buying other business and incorporating them into themselves and they used to be creative team but they haven’t really made anything new, they probably have run out of things to do with that product, and are just making it slower. (a runon sentence I know) This is a generalization I’ve come to. And maybe they are just, plain lazy, or the company is now run by the originators and they all make 200,000 dollars a year. Or maybe they are working hard. Maybe thats just how things are. Maybe companies are meant to just do one thing well, and die. But that’s not how I feel, and If I ran a company, I would make something great, and move on. These huge software companies that make one product. It just blows my mind. And every year they add in a few new features that no one cares about, or works half ass, hell most of it has been made by community that uses the software. 1)Make it great 2)don’t bog it down. 3)Let me have a good bit of control over it. It’s not hard, at least it seems that way, but maybe it is. What the hell do I know. apples = 5; apples = a + 5; int main()Step Out: If we realize, "The error we're looking for isn't caused in this function", then Step out is used to exit that function and go back to the line that called the function (read more about keeping track of how other functions call functions in call stack part further down). int myApples = 5; int johnsApples = 10; int ourApples = Sum(myApples, johnsApples); int Sum(int a, int b) //no need for this, but for showing debugging, we do this. int returnSum = a; returnSum += b;
OPCFW_CODE
[Move] Reconsider Transaction::sender() and move_to_sender Take a look at RFC PR https://github.com/libra/libra/pull/3673 which: Introduces the Sender::T resource, which is just a wrapper around an account address. It exposes a function that allows reading the inner address. Eliminates every use of Transaction::sender() in the core modules by using a functional discipline: thread an explicit &Sender parameter everywhere that needs to read the sender. This is similar to ViewerContext (and related design patterns) in a codebase that is probably familiar to folks here. Similarly, adds a call to Sender::move_to(&Sender::T) (a newly introduced no-op function) before every use of move_to_sender. This doesn't actually do anything for now, but gives an idea of how the code would look if we replaced the move_to_sender<R>(R) bytecode with move_to<R>(R, &Sender). Now, the obvious next question is: how do I get a Sender::T resource given that the Sender module exposes no API for creating one? There are a few possible answers: Add a sender: Option<Sender::T> field to the LibraAccount::T struct that will be instantiated with Sender { addr } upon a call to LibraAccount::create(addr). Then, expose extract and fill APIs for the Option that allow the account owner to take their Sender resource out/put it back. This does not require any changes to the language or the Libra/Move adapter. Have the Libra/Move adapter create a Sender { addr } resource (where addr is the transaction sender) in Rust, then pass it in as a parameter of the transaction script (e.g., main(sender: &Sender::T) or main(sender: Sender::T): Sender::T). This requires a small change to the Libra/Move adapter and the bytecode verifier for transaction script signatures (for the sender param). Ways forward I am not suggesting that we commit this change as-is; it was more of an experiment to understand whether this sort of programming discipline would be reasonable. Having completed it, I feel that the result is reasonable to write, easier to read, and has some significant benefits (more on that in the next section). There are several paths we could consider from here: I. Decide we don't like this and leave everything as-is. II. Decide we like this as a design pattern, but want to keep both Transaction::sender() and move_to_sender. III. Decide we want to get rid of Transaction::sender(), but keep move_to_sender. This is still a fairly small change: delete the GetTxnSender bytecode + either (1) or (2) above. IV. Decide we want to get rid of Transaction::sender() and convert move_to_sender<T>(T) into move_to<T>(T, &Sender::T). This is more work because we will probably need to figure out how to make Sender::T a native type, and change the interpreter/verifier/compiler/prover to handle to the new move_to bytecode. V. We could also keep Transaction::sender() and ditch move_to_sender, but I don't think option makes a lot of sense. Why? This is the longest section, so I have left it for last. I think there are several unrelated, yet compelling reasons for considering some flavor of the change above. Explicit representation of sender authority Something that has bothered me for awhile is that fact that everything is explicit in Move except for the authority of the sender. If a procedure wants a coin from me, it can ask for it explicitly via (e.g.) give_me(c: Libra<LBR::T>). But, unfortunately, it can also just take it: fun take() { LibraAccount::pay_from_sender<LBR::T>(0xb0b, 1000) } In general, if you call a function like M::take() without looking at the body, you know it can't change your locals or stack. But it can change any piece of global state owned by M or its (transitive) dependencies, including resources published under the sender's address. If you want to be sure about M::take's effects, your options are: Look at M::take() and its callees carefully before you call it This is simultaneously obvious best practice and unrealistic advice (think of "look carefully at code you call for memory safety" errors in a different setting!). Move's static call graph and the small size of modules makes this somewhat reasonable for now, but better answers are possible. Use the Move prover If the Move prover required programmers to specify all effects of a function, you could determine whether a function might steal your money by looking at the postcondition. This is a better answer than trusting a human to look carefully, but still not ideal. Getting rid of Transaction::sender() and (to a lesser extent) move_to_sender provides a much better answer than the two above. If take() doesn't ask for a &Sender::T, it can't know the sender's address and thus can't[1] modify the sender's state. In addition, if we get rid of move_to_sender, we also know that take doesn't publish a resource under the sender's account. If take does ask for a &Sender::T, the situation is less nice, but no worse than today: the caller must read the code carefully and/or use the prover. Simplifying the save_account native function If we get rid of move_to_sender, we could replace save_account with create_sender(addr): Sender::T, then implement save_account in pure Move: save_account( account_resource: T, balance_resoure: Balance, ) { let sender = create_sender(addr); // create_sender would be a private native that packs/returns a Sender resource move_to(&sender, account_resource); move_to(&sender, balance_resource); // whatever other resources we want to put in a default account destroy_sender(sender); // this would also be a private native } Of course, variants are possible: let create_sender be public native. This is the "malloc" design we have discussed before; it is very powerful, but correspondingly scary. Other designs could also work. This feels more general than move_to_sender + save_account. Finally answering the question: are move_to_sender, GetTxnSender, and save_account fundamental to Move, or Libra-specific? I have wondered about this for a long time and feel like the answer is "not fundamental" to all (although some variant of move_to is fundamental). The combination of Sender::T and move_to<T>(T, &Sender::T) can implement the current Libra scheme for publishing resources , the new scheme described in this issue, and many other schemes as well (such as the "malloc" scheme above). Multi-sender transactions aka "atomic scripts" This is a bigger project for future work, but worth mentioning. Getting rid of both move_to_sender and Transaction::sender opens the door to an easy representation of a transaction with multiple senders (e.g. main(sender1:&Sender::T, sender2: &Sender::T)). Multi-sender transactions are useful for atomic swaps or (more generally) "atomic scripts"; any sequence of actions that requires >1 authority, but should only happen if they all happen together. I believe that a lot of the work on meta-transactions in Eth could be captured by multi-sender atomic scripts. TODO: lots more to say here about how this would work + the use-cases. [1] This isn't strictly true; for example, the code might modify resources under a hardcoded address that happens to be the same as the sender. But the caller can be sure that code with a safe API like withdraw_from_sender(...,&Sender) cannot be called Now, the obvious next question is: how do I get a Sender::T resource given that the Sender module exposes no API for creating one? Another option, you could set it and unset it in the prologue/epilogue. And block out calls to it with capabilities defined in the Account module I'll write up some longer thoughts later but initial brain dump I would like this a lot more with aliasing, so you can do &Sender instead of &Sender::T. So you could have something like module Transaction { native resource struct Sender; native public fun address(s: &Sender) -> address; } and then with aliasing just use Sender Alternatively. I think it makes a lot of sense to have some native bytecode setup for move_to. So you could make sender a new native primitive resource. With a native function/op code for getting the address. So Sender is a primitive type address(&Sender) -> address could be a native op code move_to<T: nominal resource defined in the current module>(&Sender, address) is nice create_sender in libra account is fancy I think the example with multiple senders is really sexy I have always really liked passing through the transaction info. And now that its just the sender, thats very neat and clean. I don't think we can do this in the next few months. But it is a good long term goal +1 to the native type and those two bytecodes. I think that is the nicest approach. There are some very nice things about this, if we can ever pul it off and obviously not a chance for V1. We want to avoid all stack walks in the runtime because they are disturbing. A stack model wants you to explicitly pass your state. It has the benefit that you see and understand what the side effects can be. get_sender always had a bad taste in that. An explicit sender concept would make much easier to understand possible side effects. So that is nice! The multiple sender is a really compelling example. Really compelling!!! How all of that translates to user code I am still not cleat about and I need more thoughts. Also the whole account creation (as you talk about) is a bit of a mystery still for me. Add a sender: OptionSender::T field to the LibraAccount::T struct that will be instantiated with Sender { addr } upon a call to LibraAccount::create(addr). Then, expose extract and fill APIs for the Option that allow the account owner to take their Sender resource out/put it back. This does not require any changes to the language or the Libra/Move adapter. I am not sure I understood usage of this, if can be overwritten and general expected usage Other random ideas on bring in the Sender resource via transaction script: main(&mut Sender) + move_to<T>(T, &mut Sender) APIs that want to publish a new resource will need to ask for a &mut Sender; other APIs only need &Sender You could imagine a stronger convention (though needs to be enforceable by the prover, too hard to do in the verifier) where you ask for a &mut Sender if you want to do a borrow_global_mut, move_from, or move_to and ask for a &Sender if you only want to do a borrow_global or exists Is this ready to close now? We could either close it now or once we've deprecated Transaction::sender() and move_to_sender... Perhaps we should close and open a separate task for deprecation? I don't think we need a separate task, and you just answered my implied question about what is left to do. Let's leave this open until we have deprecated (and hopefully removed) those 2 things. This is done now
GITHUB_ARCHIVE
The economics of privacy is, like anything else, a matter of trade-offs… The problem is that people can’t make informed decisions if they don’t know exactly what the trade-offs are. And they’ve proven that they don’t.[From Protect the Willfully Ignorant | Newsweek International Edition | Newsweek.com] I couldn’t agree more. As it happens, Consult Hyperion is part of a consortium that has just been chosen by the U.K.’s Technology Strategy Board to carry out a research project in this field, trying to find better ways to describe and display privacy so that the consumers and citizens can make informed choices, can negotiate around privacy in a constructive way and can deal more effectively with both corporate and government organisations. The article goes on to make a comparison that I’m not sure is entirely valid: the comparison is between privacy and safety, and the reason I’m unsure about it is because it uses the example of cars, seat belts and accidents — all of which are things that consumers understand and can experience in a way that they cannot with privacy (at least, they cannot until our research project bears fruit!). Anyway, the article says Car manufacturers let consumers pick engine sizes, color and the fabric on the seats, but not the design of the seat belt. “Consumers lack expertise about seat-belt design and don’t want to invest time learning about it,”… Rather than let people figure out the optimal seat belt for themselves, experts pick a standard.[From Protect the Willfully Ignorant | Newsweek International Edition | Newsweek.com] Ok, so let’s pick a standard. I vote for… er… hmmm… wait, I’ll get back to you on this. It may not even be possible to reduce privacy to a simple seat-belt issue, so that people need only fasten their seat belt (ie, run some piece of software or whatever) before browsing. Seat belts are there to mitigate against death or serious injury, either of which is a bad thing. Privacy is not so neat an issue and it is bound with the concept of digital identity, which embraces a spectrum of relationships embodied in different virtual identities with different degrees of disclosure, not a simple black/white, on/off (or alive/dead, for that matter). Perhaps the nearest equivalent might be some kind of “default to minimal disclosure” scheme which, in my mind, is the equivalent of a “default to a secure virtual identity anchored by two-factor authentication” implementation. In other words, make users actively select to do more than minimal disclosure rather than have it happen as a side-effect of merely being on the internet. These opinions are my own (I think) and are presented solely in my capacity as an interested member of the general public [posted with ecto]
OPCFW_CODE
04-15-2013 10:11 AM I have a Toshiba Satellite L875-S7110 with Windows 8 that is a few months old now, and had been working fantastic until last night. Windows did an update, and now I have a red X over my sound icon that states "No Audio Output Device is installed." Things I have tried so far to try to fix this: I went to the Device Manager > Sound, video and game controllers. I have "Intel(R) Display Audio" and "Realtek High Definition Audio" both listed with the little yellow caution icons on them. I have tried to disable, then enable each of them seperately and both together, which had no effect. I have tried to unistall both of them, restart the laptop, and let the drivers reinstall themselves, to no effect. I have tried downloading the latest drivers from both the Intel and Realtek product sites. The one from Intel seemed to install correctly and asked to restart the computer. There was no positive change to the audio though, stil the same message. The Realtek driver won't finish installing and throws the error: "Install Realtek HD Audio Driver Failure!! [Error Code: 0x000000FF]" If I go to Control Panel > Sound, under both the Playback and Recording tabs, it says "No audio devices are installed." If I go to Recovery, to try to roll back my updates with System Restore, I get errors on ALL of my restore points with the following message, "System Restore did not complete successfully. Your computer's system files and settings were not changed. Details: The specified restore point is missing or corrupt. Try again using another restore point. (0x81000201)" I am usually pretty savvy in being able to research fixes and get around my computers, but thus far I've spent hours trying to get my sound back with no success. I'll be so appreciative of some help. Thank you! 04-16-2013 12:33 AM Try the Toshiba Application Installer. There should be a feature that will check all that is missing. Go to Desktop Assist >Support Recovery > Toshiba Application Installer > then it will automatically scan your computer and replace what's missing. Assuming you are running Windows 8. 04-16-2013 12:38 AM It is not a good idea to download those apps individually. You could get a virus or sometype of add on that you won't be able to get rid of. 04-16-2013 07:23 AM Thank you for the suggestion cacoon3442. I didn't know about the Toshiba Application Installer. I didn't see a feature for checking all that is missing, however there was a list of applications on one side of the installer, and the option to install drivers for each on the other side. Here are my results for trying three sound related driver updates: Realtek Audio Driver - 188.8.131.5287 Runs most of the way until I once again get: "Install Realtek HD Audio Driver Failure!! [Error Code: 0x000000FF]" SRS Premium Sound HD - 1.12.4600 Starts running, but then gets a window stating: "The specified resource file is invalid." Intel Display Driver - 184.108.40.20628 Installs completely, computer restarts. There is still the red X over the sound icon. Running the Playing Audio troubleshooting still results in "Problems found: Intel(R) Display Audio has a driver problem. Not fixed." The report says issues found: "Intel(R) Display Audio has a driver problem. Not fixed. (red X icon)" ; "Reinstall device driver. Completed." ; and "Check audio device. Detected. (caution icon)". Any other suggestions? I just can't believe the audio output device would get so corrupted and stop working after a "critical" Windows update. 04-16-2013 07:32 AM By the way, I have this under the General tab of the Intel(R) Display Audio Properties window (from Device Manager): "Windows cannot start this hardware device because its configuration information (in the registry) is incomplete or damaged. (Code 19)" This same message is on the General tab of the Realtek High Definition Audio Properties window. 11-12-2013 11:23 AM This worked for me like a miracle! 11-12-2013 11:25 AM This worked for me right away: 11-12-2013 10:36 PM - edited 11-12-2013 10:41 PM If the other suggestions fail try the following: 1. Reboot in safe mode 2. Open device manager 3. Go under system devices 4. Disable (DO NOT UNINSTALL) the Microsoft UAA driver. 5. Reboot in safe mode (This is so that driver will not load) 6. Open device manager 7. Go under system devices 8. Now... uninstall the Microsoft UAA driver. This will also remove the Realtek. 9. Reboot to Windows. 10. Install the Realtek drivers without letting windows install it. If this doesn't di the trick try this: Disable audio controller in normal windows mode 2. Reboot in safe mode 3. Go to device manager 5. Install now Realtek driver and reboot in safe mode again 6. Go to device manager and see driver's version at this time you should have the updated driver. Copyright © Toshiba America Information Systems, Inc. All rights reserved.
OPCFW_CODE
Newbie to Amazon AWS, setup question Background: I'm an ASP.NET developer with not much of experience when it comes to server administration. I researched high and low on Amazon AWS and I think we are going to go with the "Reserved Small Instance". My questions are the following Due to pricing of MS SQL server is too expensive, we are going to use MySQL. Now, do you install Mysql yourself, on the same instance or a different one, on EBS or not EBS? Is there a windows AMI with mysql for free? I can't seem to find any. It seems, if you install MySQL yourself, you will have to handle all the backup, load balancing yourlsef correct? Any tutorials out there that teach you that? and what's your experience with Amazon RDS for a fee? How is the price of RDS working out for you? Given I have very little experiences when it comes to server administration, dumb question. Is there a need for me to order load balancing even if I just buy "Small Instance"? You would need at least two machines to load balance as I understand it right? Or "Small Instance", buy one plan, you can create as many "machine" as you need? A bit confused. Could someone give me an estimate of how much 1GB bandwidth is in terms of traffic, mostly text, non graphic intensive (like Google plus ish)? Each page have a max of 100kb (with compressed javascript and all that) Lastly, I have search functions using Lucene.NET, which stores search indexes as text on hard drive. From what I've read so far, if the instance is gone, your files are gone, so should I store that on EBS? or S3? Thanks a lot for having the patience to read through the load of dumb questions. I really appreciate you taking the time to answer them. 1 . Yes you can set it up yourself or go with their Amazon RDS setup like you mentioned. The downside of setting it up yourself is that you have to manage it all by yourself. There's a bunch of tutorials out there on how to use Amazon RDS. But really all you need to do is go through the console and you can use the website to build the database. You also will need to define the correct security groups. Checkout the amazon developer site for instructions. The price is pretty expensive if your bootstrapping. You would need to set up load balancing and trigger points yourself. However if you use the new ElasticBeanStalk service a lot of this is done for you and you can just add new trigger points on when to scale up and down and the instances will be added and removed from the load balancer accordingly. hmmm don't know. Not sure just based on that but ya you can store files on S3. Thanks a lot imrank1. I didn't know anything about beanstalk service. I'll definitely have to check it out and see how it does. Greatly appreciate it. 1) typically you'd want your db installed on a different server, especially if you're going with a small instance. you may see performance problems running both web and db servers on the same instance. but it depends on your type of app and traffic. 2) http://www.amazon.com/Definitive-Guide-MySQL-5/dp/1590595351/ref=sr_1_1?ie=UTF8&qid=1322325409&sr=8-1 but i'd check out Xeround. i tested it. was dead simple to use. just make sue your ec2 instance is in the same zone as your xeround instance. 3) you'd need load balancing only if you if you're running more than 1 instance. 4) Not sure 5) i don't think you'd be able to read the index files using the s3 api the way you would need to for lucene. you might want to check out these guys: http://websolr.com/ There's Xeround: http://xeround.com/cloud-database-comparison/amazon-rds-feature-comparison/ I read about them on mysqlperformanceblog.com I'm thinking about moving there because I'm not willing to pay $80 a month either.
STACK_EXCHANGE
I adapted an existing code for the use of a I/O expander and it works. As far as I know, the only way to interact with a (this) driver is using the ioctl() command. Trying to use a own function like doSomething() did not work because it could not find this function, also not if I declared it in the header file… (does it matter it is a function inside a class?) - And therefore, why does the ioctl command work, even it is not defined in a head and it is a in-class function in my drivers .cpp code? The ioctl command has three arguments to give over: The first one is the file-pointer (struct file *filp) to my own driver location which I just created, right? Thus the question is, are there any restrictions for the values of the second argument(int cmd). I mean, can I just use any number? I heard there are/can be restrictions, which could cause a bad interaction with other drivers. Is that possible? I would highly appreciate a answer and hope I stated myself clearly enough. Thank you ahead! Hi @Tachsche, I think you’ll need to provide more detail. In general you should try to interact via uORB messages instead of IOCTL, although it’s not clear what’s appropriate in your case without more detail. So I don’t know anything about uORB messages… regarding my ioctl understanding problem: I edited/ enhanced a driver and the only way i knew how to was to use the existing ioctl method. I changed the argument(3rd parameter) and the address (second parameter) of the ioctl command. But the guy who created the driver originally had an adress offset for the second argument. He started like at 0x2800 and restriced the usage up to e.g. 0x28FF. And that was confusing me. Thats why I asked: Do i need to care where my number of the second argument starts? Like if I use another number, could it effect another component/driver? I´m not sure, but I think rather it does not effect it, because my first argument defines the driver (my driver) to use. Only if the ioctl query cannot be processed in my code and it is forwarded to the main ioctl (e.g. linux’s), then it could/will effect other drivers. So once more. For my own driver as 1. parameter, could the number of the second parameter effect any other driver? Does that make it clearer? (PS: I made it run, but I would like to know that to ensure it won’t cause an error/ for future changes) Are there any restrictions about the ioctl function? The programmer doesn’t specify the hardware address,because it’s decided by the cpu.
OPCFW_CODE
is there an sql statement to insert data to one table and getting the data from another table Ok I have a table that have general categories with just name and CatNo. Now what I want to know is when I create a client to move all the data in general categories to another table called categories that have the clientid to associate it with the client. I would appreciate for any help what is your question here??Shravan Addaypally MCP...how to create a new sql table from xml table through coding I have a xml tableProgramatically I have to convert the data to sql table and show the data in the grid view from the sql tableall I have to do by coding onlycan it be possible As far as I know Torque can generate SQL source from an XML schema describing a database structure: http://db.apache.org/torque/releases/torque-3.1.1/generator/ However you need to do that throught coding which is not as simple as it sounds. You may have to transform xml schema to sql schema manually or use xslt. Or you can load xml schema into dataset and create database schema from dataset. Hop...how to create a new table based on the data from a old table if I am doing normalization, I want to delete some rows from an existing table then create a new table base on these data. For exaple I have a talbe like this my statement is select name,id from tp group by name, id; then the output is this is the contents of my new table I want. How can I create this new table with a new name? Try SELECT Distinct <gretp> wrote in message > if I am doing normalization, I want to delete some rows...creating a table column that only takes data from another table. I am trying to create a table that holds info about a user; with the usual columns for firstName, lastName, etc.... no problem creating the table or it's columns, but how can I "restrict" the values of my State column in the 'users' table so that it only accepts values from the 'states' table?ScottSEADont forget to click "Mark as Answer" on the post that helped you!This credits that member, earns you a point and mark your thread as Resolved for the sake of Future Readers. You could create a trigger on the table or a rule?***********************Dinakar NethiLife is short. En...Updating a Table data from Another table using Sql Server 2000 I have a Problem while updating one table data from another table's data using sql server 2000. I have 2 tables named TableA(PID,SID,MinForms) , TableB(PID,SID,MinForms) I need to update TableA with TableB's data using a single query that i have including in a stored procedure. use this type of query update myTable2 set SID= (select top 1 SID from myTable1)Give a man a fish and you feed him for a day. Teach a man to fish and you feed him forever. u can use SELECT column_name(s) INTO newtable [IN externaldatabase] FROM sourcee.g.SELECT * INTO Persons_backup...Creating a Table from another Table. I need to be able to create a temporary table that is exactly like an existing table. In Oracle, the syntax to do this was as follows: CREATE TABLE NEWTABLE AS (SELECT FROM EXISTINGTABLE); This effectively copies the table structure and creates a table called newtable that is a "clone" of the existing table. How can this be accomplished in SQLA, short of writing a stored procedure or a function? Thanks for your response. Consolidated Services Corp SELECT * INTO NewTable FROM OldTable Jaguar Product Team Owen Hebert wrote: > I...How to create SQL Tables from Random XML files and Upload its data to those tables Hi,I was wondring if there is a way to automatically create SQL Tables when an XML file is uploaded and load its data to that SQL table? So here is what I was thinking:1- Create a xsd file from that XML2- Create the SQL Table from that xsd3- Upload the data to that table. Is this possible? I have done something like this before, but the colum data type is Variant which I don't want. using Microsoft.SqlServer.Management.Common; using M...Create new table Hi,I am creating a new table called 'tableA'TableA has 2 columns - columna and columnb..I would like the values of tableA's columna to be the same as tableBs' (a current table) columna...Does that make sense?Can I use a computed column? Will it still work if table b's columna has new data added all the time (will tablea keep up to date?) Thanks,Jon In the sense you want to and fro values to be same from both the tables.I think its a one way possibility but not the two way here is the query take the wizard and follow the steps in the sense 1)in ur BLL def...How to move data from one table to another table w/o affecting the table definition? I have a production server and a development server. On the production server, I have a table (thin a database) that already contains data. On the development server, I created a table (still within another database) and I want that table of the development server to contain the data from the table of the production server. But, both tables don't have the same table definition, meaning in the table of the development server, I have added a new column named ID that I want to use as a primary key column, but I don't have tha...How to select data from a table and insert the selected data in the same table as new row I have a table called Version and its attributes are Version_ID, Project_ID , Hospital_ ID , Date_Created and comments. I want to select the data by Version_ID,Project_ID and Hospital_ID and the selected data is inserted in the same table(Version) as new row . Table: Version (Version_ID(Primary_key), Project_ID(Foreign_Key),Hospital_ID(Foreign_Key),Date_Created,Comments). Iam using Visual Web Developer Express 2005 and SQL Server 2005. Iam doing on asp.net 2.0. Could anyone please send me the code in asp.net 2.0 for the above problem. ...How do you copy table data to another table? I need to copy existing row data from one table into a new row in a different table, both in the same database. Can this be done in a stored procedure where the selected row is passed in as parameter value? There's plenty of ways you can do this.Try INSERT INTO YourTable (.column list) ***********************Dinakar NethiLife is short. Enjoy it.*********************** that easy? thank you. I am getting that 'can't see the forest for the trees' feeling....How to create a table with datarow from another table? Hi, I am trying to display a table with values obtained from a datarow of another table. The purpose of this is because I want to create a table showing a list of records that were updated. (meaning records not updated would not show in table)Usually, I would fill a dataset and just add a new row. However, there is no dataset I would like to fill. I want a blank dataset/ datagrid, and just add rows with fields in it. I figure either:1) fill a blank dataset and add rows into it, or2)create a table and add rows into it. I am not sure 1 can be done, so.....I am guess 2 is more likely. Can...How to create a temporary table from another table Please tell me how to create a temporary table from another table ? give me a example,any advise is appreciate! Please post this question in the ASE.General newsgroup. This group is for EAServer issues. Dave Fish [TeamSybase] On Sat, 2 Mar 2002 23:12:16 -0500, yfz wrote: > Please tell me how to create a temporary table from another table ? >give me a example,any advise is appreciate! ...Create new Table from existing table I'm trying to Create a new Table from existing table in Q/Analyzer. I figured it would be something like this: CREATE TABLE newTable AS(SELECT * FROM OldTable); but i keep getting Server: Msg 156, Level 15, State 1, Line 1Incorrect syntax near the keyword 'AS'. also.. is there another method of doing this, something like INSERT INTO newTable(SELECT * FROM OldTable); and it creates the table ( newTable ) for u if it doenst already exist?? im using sql2000 You can try this one: SELECT * INTO newTable FROM OldTableLimno The easiest method to copy ...
OPCFW_CODE
htaccess | RewriteRule I am migrating a web site from a Windows Server into a Apache and I like to redict some 404 URLs to correct/equivalent URL in my new server. What I like to do is to make the following sorten by using RexEx: Redirect 301 /username/search.asp /newdir/username/ Redirect 301 /username/contact.asp /newdir/username/ Redirect 301 /username/legalmarketing.asp /newdir/username/ Redirect 301 /username/home.asp /newdir/username/ I have try to use the RewriteRule with something like that: RewriteRule ^/username/(search|contact|legalmarketing|home)\.asp$ /newdir/username/ [R=301,L] Bat this is not working. Any better idea please ? Am I doing something wrong ? UPDATE #1 I also have try the RewriteRule ^/username/search.asp$ /newdir/useranme/ [R=301,L] #as well the following RewriteRule ^username/search.asp$ /newdir/useranme/ [R=301,L] but still no chance UPDATE #2 I also have try this tutorial : http://www.askapache.com/htaccess/crazy-advanced-mod_rewrite-tutorial.html and the problem seems to be more complex, due to null values that I am getting from the server. This is the code produced by the tutorial code: Missed These Variables: Array ( [INFO_API_VERSION] => (null) [INFO_AUTH_TYPE] => (null) [INFO_CONTENT_LENGTH] => (null) [INFO_CONTENT_TYPE] => (null) [INFO_DOCUMENT_ROOT] => (null) [INFO_GATEWAY_INTERFACE] => (null) [INFO_HTTPS] => (null) [INFO_HTTP_ACCEPT] => (null) [INFO_HTTP_ACCEPT_CHARSET] => (null) [INFO_HTTP_ACCEPT_ENCODING] => (null) [INFO_HTTP_ACCEPT_LANGUAGE] => (null) [INFO_HTTP_CACHE_CONTROL] => (null) [INFO_HTTP_CONNECTION] => (null) [INFO_HTTP_COOKIE] => (null) [INFO_HTTP_FORWARDED] => (null) [INFO_HTTP_HOST] => (null) [INFO_HTTP_KEEP_ALIVE] => (null) [INFO_HTTP_MOD_SECURITY_MESSAGE] => (null) [INFO_HTTP_PROXY_CONNECTION] => (null) [INFO_HTTP_REFERER] => (null) [INFO_HTTP_USER_AGENT] => (null) [INFO_IS_SUBREQ] => (null) [INFO_ORIG_PATH_INFO] => (null) [INFO_ORIG_PATH_TRANSLATED] => (null) [INFO_ORIG_SCRIPT_FILENAME] => (null) [INFO_ORIG_SCRIPT_NAME] => (null) [INFO_PATH] => (null) [INFO_PATH_INFO] => (null) [INFO_PHP_SELF] => (null) [INFO_QUERY_STRING] => (null) [INFO_REDIRECT_QUERY_STRING] => (null) [INFO_REDIRECT_REMOTE_USER] => (null) [INFO_REDIRECT_STATUS] => (null) [INFO_REDIRECT_URL] => (null) [INFO_REMOTE_ADDR] => (null) [INFO_REMOTE_HOST] => (null) [INFO_REMOTE_IDENT] => (null) [INFO_REMOTE_PORT] => (null) [INFO_REMOTE_USER] => (null) [INFO_REQUEST_FILENAME] => (null) [INFO_REQUEST_METHOD] => (null) [INFO_REQUEST_TIME] => (null) [INFO_REQUEST_URI] => (null) [INFO_SCRIPT_FILENAME] => (null) [INFO_SCRIPT_GROUP] => (null) [INFO_SCRIPT_NAME] => (null) [INFO_SCRIPT_URI] => (null) [INFO_SCRIPT_URL] => (null) [INFO_SCRIPT_USER] => (null) [INFO_SERVER_ADDR] => (null) [INFO_SERVER_ADMIN] => (null) [INFO_SERVER_NAME] => (null) [INFO_SERVER_PORT] => (null) [INFO_SERVER_PROTOCOL] => (null) [INFO_SERVER_SIGNATURE] => (null) [INFO_SERVER_SOFTWARE] => (null) [INFO_THE_REQUEST] => (null) [INFO_TIME] => (null) [INFO_TIME_DAY] => (null) [INFO_TIME_HOUR] => (null) [INFO_TIME_MIN] => (null) [INFO_TIME_MON] => (null) [INFO_TIME_SEC] => (null) [INFO_TIME_WDAY] => (null) [INFO_TIME_YEAR] => (null) [INFO_TZ] => (null) [INFO_UNIQUE_ID] => (null) ) Any further idea please ? Depending on where this code is located on your server, you might have to drop the leading slash / from your search pattern, or even set the RewriteBase. The .htaccess file is located in the root directory of my web site, as well the RewriteBase is set to / And removing the leading slash does nothing? Yes, still does nothing Try something like below RewriteRule ^username/search.asp$ http://www.yourdomainname.com/newdir/useranme/ [R=301,L] This is not what I am looking for as well not solving my issue. What I like to do, is to use an or operator in regex in order to get any of the matched file names as I describe on the example All i wanted to tell you is add your domain name to the new URL lilke RewriteRule ^username/(search|contact|legalmarketing|home)\.asp$ http://www.yourdomainname.com/newdir/username/ [R=301,L]
STACK_EXCHANGE
(Ok, I admit I’ve always wanted to write a post with a title like “Design and Evolution of …” or “Structure and Implementation of …”. :-)) In the last two weeks, I’ve been writing (yet) another Dependency Injection framework for Ruby dubbed Dissident. Based on my experience with Needle (which I used for writing Nukumi2) and quite a deal of inspiration by Java frameworks, especially PicoContainer, I think I made one of the most “rubyish” frameworks available. What is the deal with Dependency Injection? DI tries to solve one of the oldest problems of object-oriented programming: Decoupling objects. Probably every half-serious OO programmer got to know that often you need to pass a fair amount of objects to the classes you instantiate, just because they need them; the class that instantiates doesn’t. All this passing-around is, in the end, unneeded code that wastes time and attracts bugs. Also, what happens if you suddenly need to replace the actual implementations? (Admittedly, it’s not that bad with dynamic languages and duck-typing, but in a static language, I hope you have your interfaces handy. Nevertheless, concentrating instantiation makes your application easier to maintain. Tomorrow your PHB tells you he wants to use that “new logging library” everywhere. It’s really nice if you can do that by changing a single line. ) Currently, there are two popular (and generally considered being good) approaches to this problem: Setter Injection and Constructor Injection. Setter Injection creates new instances and injects the dependencies of them by using setters. Without a DI framework, you would use the class like this in Ruby: class Application attr_writer :logger def do_stuff logger.log "doing stuff" end end a = Application.new a.logger = Logger.new a.do_stuff As you can see, this is a straight-forward way to do DI, and also the default one used in my framework. (Well, almost; see below.) Constructor Injection was popularized by PicoContainer, and is considered more “clean” by many people. class Application def initialize(logger) @logger = logger end def do_stuff @logger.log "doing stuff" end end a = Application.new(Logger.new) a.do_stuff Constructor Injection is available for Dissident too, but not the default mechanism because unlike Java and (at least to an extent) Python, Ruby does not provide mechanisms to access the parameter names and types of methods. Constructor Injection is actually more work to code for in a dynamic language like Ruby, at least in comparison to Setter Injection (count the occurrences of loggerin above classes). As mentioned, Dissident does not implement mere Setter Injection, but an extension of it that is only possible in dynamic languages; I call it Method Injection: Dissident extends the classes that use DI to provide methods that return the requested services. Therefore, the Dissident code to make use of the first example would look like this: class Application inject :logger def do_stuff logger.log "doing stuff" end end class MyContainer < Dissident::Container provide :logger, Logger end Dissident.with MyContainer do a = Application.new a.do_stuff end You can stop goggling now. :-) You probably expected something like that: container = MyContainer.new # mumble mumble a = container.application Not so in Dissident! Dissident tries to completely stay out of your code. Rubyists duck-type class instantiation on #new, and there is no reason to change that when using a DI framework. So, what happens now exactly? The Dissident.with block makes an MyContainer (a plain old Ruby class) the current container. Now, all “dissidentized” classes ( Application here) can access it. The inject line in the class definition of Application defines a “getter” for the logger, which is provided by the container as an instance of In fact, instead of that provide line, you could as well write something along this: class MyContainer < Dissident::Container def logger Logger.new end end …which does in above case exactly the same, but provide allows for some more subtleties. The code as seen is not independent of Dissident, but that can be fixed in three easy ways: class Class; alias_method :inject, :attr_writer; end), and you can use standard Setter Injection, rescue calls to inject(possibly falling back to use Constructor Injection. To make use of Constructor Injection in Dissident, just tell the services you want to pass; in above case you now must let Dissident actually construct the application itself, therefore we need to register it too: class MyContainer < Dissident::Container provide :logger, Logger provide :application, Application, :logger end Dissident.with MyContainer do |container| a = container.application a.do_stuff end As you can see, Constructor Injection is the more “pure” approach, but a bit more work and not as transparent as Method Injection. You can mix both freely when using Dissident, though. Another nice thing Dissident provides are parametrized services, simply define a method in your container, and it will get a multiton with a life-time of the container. This only works with Method Injection. class MyContainer < Dissident::Container def logger(level=:debug) SuperFancyLogger.new(:level => level) end end class Application inject :logger def do_stuff logger.log "This is just for debugging." logger(:alert).log "Core meltdown." end end What happens when you need more than one container? You may, for example, want to use a library that makes use of Dissident while independently using it in your own application too. Dissident solves this by having the classes declare their association with class AGreatLibrary library AGreatLibrary end class AGreatLibraryHelper library AGreatLibrary end Dissident.with MyContainer, AGreatLibrary => AGreatContainer do Application.new # uses MyContainer AGreatLibrary.new # uses AGreatContainer AGreatLibraryHelper.new # uses AGreatContainer, too end If it is likely that AGreatLibrary always uses you can declare this too, then the user doesn’t need to care about it (but still can override manually, of course): class AGreatLibrary library AGreatLibrary default_container AGreatContainer end Now, I showed you the most of the things Dissident can do. And these are probably 90% of the things you’ll ever need when using Dissident. Additional features are included as separate files, I wrote a basic lifecycle management and support for multi-methods, that allow even easier parametrization of services; more about that will follow in a later post. So much about the design of Dissident, now a bit more about the evolution. I don’t think any library or application I ever wrote changed so much without a rewrite. When I started to write Dissident, it was a tiny library that would only do Setter Injection with instance variables. Then, I noticed this approach was too inflexible as it didn’t support parametrized services. First, I used define_method on the singletons the container instantiated, but that’s inefficient, and far too invasive. The next step was to extend them with Modules, first dynamically generated, then named for marshaling purposes. I have to admit it took me a fair time to recognize that I could define the getters directly on the classes. After some more playing and reading about PicoContainer, I decided to add Constructor Injection too; that was fortunately rather easy. But why write a new DI framework at all? There are some prejudices in the Ruby community with respect to that. People say “they make things complicated” and “there are more frameworks than users”. Of course, that may be true—but it shouldn’t be for all of them. Therefore, I decided to make one that’s not complicated, because you barely notice using it (It’s true that use of DI frameworks often significantly changes the code), one that’s easy to pickup, because you can learn it in an afternoon and only need to write a few additional lines of Ruby—no XML or YAML needed, one that actually helps coding, because else it’s a hobble and therefore no fun, one that eases testing, because you can mock the services easily (don’t use a container at all, or simply inject mocks), one that feels like Ruby, because you should never tangle in bad ports of Java libraries; in the end, I decided to make one that I personally like and want to use, because there is no point in making libraries you don’t use on your own. Still, Dissident is no silver bullet—there is no panacea. If your design is broken, the best libraries can’t change that. But I think that when you use Dissident, and use it as it was meant to be, it can help you show the rough edges of your design earlier than when you sit over half a dozen napkins, desperately trying to untangle your class relationships. (You’ll quickly notice when your container definitions get ugly.) Another thing among the reasons I wrote my own Dependency Injection framework was the size of the existing ones. Needle with its 1200 LOC doesn’t count as “lightweight” in my opinion anymore—it already is in the mid-size non-invasive team. (It is very good, though. Use it if you think Dissident is not enough or too extreme/weird/fancy/magic/inflexible for you.) Dissident on the other side is one 200 LOC file for the core right now, and maybe 100 LOC for the additional features. The core is unlikely to grow much in future (one thing that probably needs a bunch of code is makeing everything thread-safe), it does basically everything that is needed. Therefore, it does no harm to just include Dissident in your package, that’s one dependency less and you have everything you need. And that’s the crux with DI, you only know you would have needed it when it’s too late. It’s good if you have a very easy framework then; especially if it’s one that you actually want to use. NP: Dan Bern—Crossroads
OPCFW_CODE
implicit noneThis statement enforces type checking. Otherwise implicit types are assumed, i.e., undefined variables whose names begin with i, j, k, l, m, and n are assumed integers, whereas all the other undefined variables are assumed real. We don't want that, because it can easily lead to programming errors. node. A data of type nodewill be a node in a linked list. In this case our derived type nodehas 4 components, of which 3 are reals and one is a pointer that will point to the next occurrence, somewhere in the memory of the computer system, of a data of this type - thus forming a list of elements linked with pointers. nodeis recursive - this is always the case with linked lists and trees. type(node), pointer :: list, firstWe must always keep the address of the first element of a linked list stored safely somewhere, because if we lose that, the whole data set will become dereferenced, i.e., lost. 'old': this means that the file must already exist - it is possible to open a new file for reading, although it is hard to tell what one can read from such a file) we allocate the first object of type nodeand set its pointer component to point nowhere with the firstpoint to the same object: first => list; count = 0Observe the new symbol =>, which means that firsthas been assigned an address of object that is now referenced by list. Once we have allocated listis no longer a pointer. It is now an object, which may become a target for another pointer, in this case, first = listInstead you have to do: first => listThe meaning of this operation is a pointer points to ( ) an object list%x, list%y, list%sigmathe three floating point number slots of the object that is currently referred to by the name end=100condition fires up and the program jumps to label 100. There is no other portable way to process end of file condition in Fortran. list%nextso that we can locate it by marching along the list. At the same time we redirect the listpointer itself so that now it points to list%next, and within this newly created object we nullify the pointer slot. We need to do that, so that when the list ends we have some means of ascertaining that. allocate(x(count)); allocate(y(count)); allocate(sigma(count)); listto point back at the first object, and march along the list. This time hitting a pointer that is no associated with any object terminates the march. That's why we've been always careful to nullify the pointer in a newly allocated object.
OPCFW_CODE
How to configure sendmail (or postfix) to send confirmation emails using webmin? have a centos 5.5 64 bit xen vps. I have a php script that automatically sends confirmation emails for people who sign up, it's not sending it right now, I've been told to install webmin, and then install sendmail or postfix and configure it to send emails I installed webmin, installed sendmail, and now what? if you know how to configure postfix then I'll unintall sendmail and install postfix. I just want to send emails automatically, the confirmation mail, the welcome email and the goodbye email, and the reset password email, the email that i want to use is <EMAIL_ADDRESS>I do not want to have an inbox i can use google apps email service to do so, just want to send automated emails NOTE I can do it via ssh, without webmin, just wanna know how, any tutorial or explanation would be so appreciated. if you know how to configure another software similar to posftix and sendmail, I have no problem using it rather than using those 2. Basically I don't care what email server I use, as long as the job gets done You can go to Servers > Sendmail Mail Server. (If you don't see it, click Refresh Modules toward the bottom.) In most cases you shouldn't need much configuration. PHP's mail() function should work once sendmail is installed and running. If it still doesn't work, could you: describe how your application sends mail (and/or post the code) describe what you see on Webmin's sendmail page esp. errors if there are any send a message to yourself from the command line using the mail command and describe the result It's nothing easier than installing Postfix from command line. Just install it with your favorite package manager yum install postfix. After that you configure it as described in the basic configuration readme. If you think that this is too hard and not easy enough you should not install a mail server. Not knowing what you do will probably exposes the mail server to the public and will harm other innocent people (sending Spam). On the other hand I do not understand is why people don't use a search engine for these basics? First search hit reveals this complete HOWTO: http://wiki.centos.org/HowTos/postfix (This is 10 seconds work instead of 10 minutes writing this question). i can install easily (i have webmin remember? and I can google), that's not my question, my question is similar to this one http://bit.ly/r86vQT but didn't want you to read hundreds of line of code, so didn't talk about it,so i opened a ticket on clip-bucket, lets see what they say @Eli so have you read this? This is a 5 line configuration to do it. thank you sir, I will remove sendmail, install postfix and follow the document :D
STACK_EXCHANGE
I am literally swimming in tech stuff, domain management and site development right now, and yesterday’s WordPress upgrade couldn’t have come at a worse time for me. Anyhow one thing I thought I’d share with you are the changes I’ve recently made with how I manage all the emails across dozens of domains. I think I’m just avoiding my disaster pile of work that’s waiting for me ;). But I had to hunker down and nuke my past email setup (email client was wonky) because it wasn’t as efficient as it could be. First here’s a bit of backstory: - I don’t allow my domain emails to pass through a third party (like gmail, yahoo, whatevs). - I’m a Microsoft fan, but not Outlook. To me–it’s a magnet for virus disasters IMO. - I’ve used a few different email clients through the years, namely Pegasus and Foxmail (can’t find the non-chinese link). Too limited though when things got complicated (ie. bunch of email addies per domain) and some features just aren’t there that I’d like. - I have a couple (or a few–depending on the domain) email addies per domain. Such as one for site contact form, one to use when making comments on other blogs, one for article/directory/whatever submissions, etc. The method to my madness is that when one gets picked up in the spam train (and when using them ‘out there’ for whatever reason–they will), I can just delete it and start with a fresh one. This keeps my main email addresses mostly out of the spam loop. Since I am working with several live domains, with each domain having at least two email addresses, and download all the emails locally to my computer every few minutes rather than have an online third party handle them–things got a bit heavy and clunky. I have been using Foxmail for a few years now, but I seemed to have broke it with all my accounts and the megs of stored emails (it constantly drops the stored passwords). I recently made the switch to Thunderbird and although I still need to figure out a few tweaks for personal preference, overall this email client is “The One” for me! First, a new trick I figured out for domain emails: And this one may make your eyes roll since it’s *always* been available to do, I just never picked up on it. In the past I always created a new account in cPanel for each email address. Hello! Just create one email address and then go into “Forwarders” and create all the aliases you want to send and receive mail for and then forward to the one main email address. You can forward all emails from one domain to another domain’s email account too–I haven’t looked into that at all though since it’s not something I’m interested in doing atm. What Happens: It saves your email client from logging in and checking dozens (maybe hundreds) of email accounts. It just has to log into the main account for the domain and grab all the emails including ones for the aliases. Here’s where things get interesting with Thunderbird: You can create an account for the main email, then add a bunch of Identities to that account. What happens then is that you can create emails or respond with the “email alias” you are working with instead of your main email account showing up in the headers as “to” and “from”. I won’t get into the details of how to do that, it’s laid out very well here: Web Worker DailyThunderbird
OPCFW_CODE
package core import ( "fmt" "github.com/golang/protobuf/ptypes" mh "github.com/multiformats/go-multihash" "github.com/textileio/go-textile/pb" ) // announce creates an outgoing announce block func (t *Thread) annouce(msg *pb.ThreadAnnounce) (mh.Multihash, error) { t.mux.Lock() defer t.mux.Unlock() if !t.readable(t.config.Account.Address) { return nil, ErrNotReadable } if msg == nil { msg = &pb.ThreadAnnounce{} } if msg.Peer == nil { peer := t.datastore.Peers().Get(t.node().Identity.Pretty()) if peer == nil { return nil, fmt.Errorf("unable to announce, no peer for self") } msg.Peer = peer } // do not annouce for other account peers if msg.Peer.Address == t.account.Address() && msg.Peer.Id != t.node().Identity.Pretty() { return nil, nil } res, err := t.commitBlock(msg, pb.Block_ANNOUNCE, true, nil) if err != nil { return nil, err } err = t.indexBlock(&pb.Block{ Id: res.hash.B58String(), Thread: t.Id, Author: res.header.Author, Type: pb.Block_ANNOUNCE, Date: res.header.Date, Status: pb.Block_QUEUED, }, false) if err != nil { return nil, err } log.Debugf("added ANNOUNCE to %s: %s", t.Id, res.hash.B58String()) return res.hash, nil } // handleAnnounceBlock handles an incoming announce block func (t *Thread) handleAnnounceBlock(block *pb.ThreadBlock) (handleResult, error) { var res handleResult msg := new(pb.ThreadAnnounce) err := ptypes.UnmarshalAny(block.Payload, msg) if err != nil { return res, err } if !t.readable(t.config.Account.Address) { return res, ErrNotReadable } if !t.readable(block.Header.Address) { return res, ErrNotReadable } // unless this is our account thread, announce's peer _must_ match the sender if msg.Peer != nil { if t.Id != t.config.Account.Thread && msg.Peer.Id != block.Header.Author { return res, ErrInvalidThreadBlock } } // only initiators can change a thread's name if msg.Name != "" { if t.initiator != block.Header.Address { return res, ErrInvalidThreadBlock } } // update author info if msg.Peer != nil && msg.Peer.Id != t.node().Identity.Pretty() { if t.Id == t.config.Account.Thread && msg.Peer.Id != block.Header.Author { err = t.addPeer(msg.Peer) } else { err = t.addOrUpdatePeer(msg.Peer, false) } if err != nil { return res, err } } // update thread name if msg.Name != "" { t.Name = msg.Name err = t.datastore.Threads().UpdateName(t.Id, msg.Name) if err != nil { return res, err } } return res, nil }
STACK_EDU
Filtering and Stochastic Control Contributed by Ljubo Vlacic. A comprehensive survey of linear filtering theory can be found in - H. W. Bode and C. E. Shannon proposed the solution to the problem of prediction and smoothing (). A modern account of the solution can be found in and more detailed treatment of the ideas are presented in - R. E. Kalman (,,) made explicit that an effective solution to the Wiener-Hopf equation using method of spectral factorization () could be obtained when the continuous process had a rational spectral density. - Stratanovich derived the conditional density equation using the so-called Stratanovich calculus (). - The theory of optimal stochastic control in the fully observable case is quite similar to that of non-linear filtering in connection with the linear quadratic stochastic control problem (). Early works in this area are due to Howard (), Florentin (), and Fleming (); See also . - Inspired by the development of Dynamic Programming by Bellman () and the ideas of Caratheodory () related to Hamilton-Jacobi Theory, the development of optimal control of nonlinear dynamical systems took place (, ), see , , for further details of the ideas. - The solution to quadratic cost optimal control for linear stochastic dynamical systems was provided by Florentin (, ), by Joseph in discrete-time (), and by Kushner (). The definitive treatment of the problem was proposed by Wonham (), see also . - The partially observable stochastic control problem treated by Florentin (), Davis and Varayia () and Fleming and Pardouz (). Detailed discussions can be found in and the - For a good discussion on the distinction between open-loop stochastic control and feedback control see . - Non-linear filters are almost always infinite dimensional and there are only a few known examples where the filter is known to be a finite dimension. The Kalman filter is an example and the other finite-state cases are first discussed in and also . - A difficulty is that one of the fundamental equations of non-linear filtering turns out to be a non-linear stochastic partial differential equation (). Zakai (), Duncan (), and Mortensen () proposed alternative solutions to the above difficulty which involves a linear stochastic differential equation. - Giransov introduced the idea of measure transformation in stochastic differential equation, see , , and the references therein for details. - The earlier ideas of nonlinear filtering were developed and introduced by Forest and Kailath (), and in definitive form by Fujisaki, Kallianpur, Kunita (). - Bobrovsky and Zakai proposed a method for obtaining lower bounds on the mean-squared error (). - As an attempt to address some of the issues with non-linear filtering, pathwise non-linear filtering was considered where the filter depends continuously on the output (, ). - The Linear Quadratic Gaussian methodology and optimal non-linear stochastic control have found a wide variety of applications in aerospace, multi-variable control design systems, finance, etc. (, ).
OPCFW_CODE
How do I compare two voice samples on iOS? First of all I'd like to state that my question is not per say about the "classic" definition of voice recognition. What we are trying to do is somewhat different, in the sense of: User records his command Later, when the user will speak pre-recorded command, a certain action will occur. For example, I record a voice command for calling my mom, so I click on her and say "Mom". Then when I use the program and say "Mom", it will automatically call her. How would I perform the comparison of a spoken command to a saved voice sample? EDIT: We have no need for any "text-to-speech" abilities, solely a comparison of sound signals. Obviously we're looking for some sort of a off-the-shelf product or framework. Like I said, how is it possible to achieve what I've asked :) Just to clear this issue up, we have no need for any sort of "Text to speech" or anything of the sort, we're looking for a relatively simple framework that can compare 2 sound signals and see if they are "the same". This way even non English speaking people can use this program. Have you found an valid answer for this question? One way this is done for music recognition is to take a time sequence of frequency spectrums (time windowed STFT FFTs) for the two sounds in question, map the locations of the frequency peaks over the time axis, and cross-correlate the two 2D time-frequency peak mappings for a match. This is far more robust than just cross-correlating the 2 sound samples, as the peaks change far less than all the spectral "cruft" between the spectral peaks. This method will work better if the rate of the two utterances and their pitch haven't changed too much. In iOS 4.x, you can use the Accelerate framework for the FFTs and maybe the 2D cross correlations as well. I think you'd have to perform some sort of cross correlation to determine how similar these two signals are. (Assuming it'll be the same user that is speaking ofcourse). I'm just typing this answer out to see if it helps, but I'd wait for a better answer from someone else though. My signal processing skills are close to zero. Cross correlation seems like what we need for the project, as we want it to be universal (and not just for English speaking customers) Try using a third-party library, like OpenEars for iOS applications. You could have users record a voice sample and save it as translated text, or just let them enter text for recognition. I don't even need to translate said voice command into text, I simply want to store said command, and later compare it. No, you really need voice recognition. Comparing sounds for "equality" does not take into account any of the many ways the second recorded sample could differ from the first. Car drives by in the background? User pauses slightly longer between words? Or stutters? Be forgiving to your users - they're human, and not capable of producing the exact same sound twice. I'm not sure if your question is about the DSP or how to do it on the iPhone. If it is the latter I would start with the Speak Here project that Apple provides. That way you already have the interface to record the voice to a file done. It will save you a lot of trouble. I'm using Visqol for this purpose. The docs say it works best with a short sample, ideally 5-10 sec.You also need to prepare the files in terms of sample rate and they need to be .wav files. You can easily convert your files to the desired format with ffmpeg library. https://github.com/google/visqol
STACK_EXCHANGE
feat(host,provider-sdk): support provider config updates Feature or Problem This PR introduces a scheme for supporting provider config updates by using the CONFIGDATA_<lattice> KV bucket. When a provider makes a config bundle, it picks an ID (UUID v7, could add host ID) When the config bundle the host is maintaining updates, we push the value to the key in the KV bucket The host passes the key relevant to the config bundle to the provider at startup When the provider starts, provider-sdk listens independently on NATS for updates to the relevant key, and observes config updates that match it's bundle. The provider can implement on_config_update() to respond to config updates. Resolves https://github.com/wasmCloud/wasmCloud/issues/1648 Related Issues Release Information Consumer Impact Testing Unit Test(s) Acceptance or Integration Manual Verification Hey @brooksmtownsend I definitely get those concerns! We currently have Providers doing a bit of connecting to NATS themselves (and there's nothing to stop them from doing so), so I'm not sure I quite see the difference (maybe Jetstream is a much more fundamental functionality than regular NATS connections?) but I definitely think that we could formalize it a different way. I think that in the future WASI providers will either: Get limited access to sockets Have to use an explicit interface that is filled in/triggered by the host upon updates/etc In both cases, the changes to provider-sdk to move from reading Jetstream directly to either allowing access to jetstream (i.e. the provider is still doing the network access), to doing the listening on behalf of the WASI provider isn't too disrupted by this current implementation. That said, I do see your concern! I think the problem here is that we've done our best to sort of... skip formalizing a host <-> provider communication mechanism, we've left it to NATS (which we've benefitted from)... What do you think would be a good communication process between hosts and providers that could work for this use case? I was avoiding regular NATS listen/send because chunking may be required, and we don't really have any other mechanisms AFAICT. Hey @vados-cosmonic I think I get where you're coming from, but I also think some context might be helpful here along with some commentary on some of the reasoning. So @brooksmtownsend brought up ConfigBundle because I explicitly designed the bundle.changed().await method to enable the host to watch for config changes and then send it over a new NATS topic to a provider. This matches how we deliver everything else to a provider so we are following the same pattern. With that in mind, a couple of notes: I think the problem here is that we've done our best to sort of... skip formalizing a host <-> provider communication mechanism, we've left it to NATS (which we've benefitted from)... The formalization has been the provider SDK, which by rule has used a NATS topic API. However, this goes back to the same rationale about why we removed putting links directly into the bucket from the ctl interface and changed it back to a topic. Acknowledging that, yes, NATS KV is just a stream of messages under the hood, you would never have something that uses an API elsewhere connect directly to a database and receive values from it. Case in point of this: if something besides the host was relying on reading link definitions from the KV store, that is now making your database an API. We're going to have to change how those are structured soon because they cause a race condition. If something was relying on those structures, then they would now be broken. This would greatly expand the work needed for backwards compatibility with the host. What do you think would be a good communication process between hosts and providers that could work for this use case? The last point leads me to the answer to this question: It should be over NATS as part of the "API contract" (which as you noted, we could probably formalize better) that let's a provider listen for incoming config updates that it can then do something with. I think this is a much cleaner API for two reasons: First is that where the data is coming from and how it is shaped in storage is decoupled from what the API looks like. Second, if we have a bug in how things are being transmitted (or changes in storage) we don't have to recompile all providers with a new SDK version. The host can change it and since it is an API, the SDK just goes on consuming it. I was avoiding regular NATS listen/send because chunking may be required, and we don't really have any other mechanisms AFAICT. This was an interesting point I hadn't thought of before, but I don't think it is a problem right now. This would mean that your config is over 1MB. We probably should be setting a max size on the config bucket when we set it up (likely 1MB), so we should open an issue for that. However, I don't think this is a problem in the short term Thanks @thomastaylor312 for the deeper explanation -- I think I see what ya'll were expecting the solution to look like with the current API conventions. As far as the other stuff, that all sounds reasonable -- I'll refactor this PR to use a new NATS topic. This was an interesting point I hadn't thought of before, but I don't think it is a problem right now. This would mean that your config is over 1MB. We probably should be setting a max size on the config bucket when we set it up (likely 1MB), so we should open an issue for that. However, I don't think this is a problem in the short term Well this certainly solves my reason to use KV directly -- will do this, and we can cross that bridge when someone tries to send too much config data.
GITHUB_ARCHIVE
The world of extended reality is evolving at an incredible pace. In the last couple of years, we’ve witnessed an acceleration of all things immersive, particularly since the pandemic of 2020. Microsoft Mesh is just one example of how our XR world is changing.Created on the Microsoft Azure platform, Microsoft Mesh is a new environment where developers can build multiuser, immersive, and cross-platform apps for mixed reality. The Mesh landscape allows users to collaborate and innovate in an immersive way, regardless of where they are. It’s also an incredible environment for event building.So, what can Microsoft Mesh actually do? That’s what we’re here to discover with this introductory review of the Microsoft Mesh ecosystem. You may also be interested in Microsoft Mesh is a platform enabling shared presence and unique experiences. Built for the future of MR, the solution allows developers to build environments where users can engage with eye contact, facial expressions, and gestures, regardless of where they are. There’s support for 3D content and virtual experiences mixed with real-world interactions. Mesh is also available on a host of devices, from VR headsets and PCs to smartphones and the HoloLens2According to Microsoft, the Mesh space will allow companies bringing teams together in a distributed environment to rediscover the importance of presence. Users can create avatars or project themselves into an environment with “holoportation.” Features include: Microsoft Mesh is a truly unique experience in the XR environment right now. Promising a new kind of user experience where users can collaborate through futuristic holograms with spatial sound and latency-free rendering, Mesh is like nothing else on the market. The applications you can build with the environment can cover everything from spatial maps to photorealistic holoportation, without the need to compromise on performance and security. Microsoft notes the Mesh environment is suitable for a range of use cases, from training team members with unique experiences wherever they are, to connecting people around the world. Specialists can share perspectives and documents in real-time. Teams can design projects together using all kinds of 3D maps and technology. Whether physically present or available as a hologram, colleagues will have access to content they can interact with in real-time.For extra protection during crucial meeting moments, Microsoft builds state-of-the-art security and privacy features into all Mesh experiences and applications. There’s also a handy app for HoloLens where you can connect with colleagues and co-create, annotating content which you can save between sessions. Some of the biggest benefits of Microsoft Mesh include: A range of mesh-enabled apps builds on top of the development platform for Mesh too. Options like the HoloLens 2 Mesh app, and the AltspaceVR solution with new enterprise features are excellent ways for companies to further enhance their investment into a newly extended reality. Microsoft has thoroughly established itself as one of the major market leaders in the extended reality space. Offering a unique insight into the future of collaboration, the Microsoft Mesh ecosystem allows companies and developers to create spaces like never before. With Mesh, companies can easily connect their colleagues from across the globe with immersive environments, holographic presence, and 3D models.Whether you’re looking for a better way for your team member to work together on creative ideas, or you just want to reduce the need for cross-country and global travel, Microsoft Mesh has a lot to offer. The Mesh space is constantly evolving too, with countless new apps and features to explore all the time. Subscribe now to our YouTube channel Subscribe now to our Facebook Page Subscribe now to our twitter page Love the realm of virtual reality and augmented reality? Are you over 18? Want to make money right now from your PC or smartphone from virtual and augmented reality? Sign up and you will receive an offer from us you can not refuse.
OPCFW_CODE
On Tuesday, November 15, the “Optimization and neural networks” workshop of the DSAIDIS chair was held. Permanent members and PhD students presented their research work. [EN] I will present the ADAM algorithm, which is a famous stochastic gradient method with adaptive learning rate. It is based on exponential moving averages of the stochastic gradients and their squares in order to estimate the first and second moments. Then I will explain the main ideas of its convergence proof in the case of a convex objective function. The challenges are the following: 1) the estimation of the first moment is biased; 2) the learning rate is a random variable. They are solved by finding terms that telescope almost surely and by using the fact that learning rate is small when the gradient estimate is noisy. [EN] In this talk, we revisit the tuning of the spectrogram window length, making the window length a continuous parameter optimizable by gradient descent instead of an empirically tuned integer-valued hyperparameter.We first define two differentiable versions of the STFT w.r.t. the window length, in the case where local bins centers are fixed and independent of the window length parameter, and in the more difficult case where the window length affects the position and number of bins. We then present the smooth optimization of the window length with any standard loss function. We show that this optimization can be of interest not only for any neural network-based inference system, but also for any STFT-based signal processing algorithm. We also show that the window length can not only be fixed and learned offline, but also be adaptive and optimized on the fly. The contribution is mainly theoretical for the moment but the approach is very general and will have a large-scale application in several fields. [EN] Recent advances in deep learning optimization showed that, with some a-posteriori information on fully-trained models, it is possible to match the same performance by simply training a subset of their parameters which, it is said, “had won at the lottery of initialization”.Such a discovery has a potentially-high impact, from theory to applications in deep learning, especially from power consumption perspectives. However, all the “efficient” methods proposed do not match the state-of-the-art performance with high sparsity enforced, and rely on unstructured sparsely connected models, which notoriously introduce overheads when using the most common deep learning libraries. In this talk, a background on the lottery ticket hypothesis will be provided with some of the approaches attempting to tackle the problem of efficiently identifying the parameters winning at the lottery of initialization, and two recent works will be presented. The first, which has been presented at ICIP 2022 as oral, investigates the reasons for which an efficient algorithm in this context is hard to design, suggesting possible research directions towards efficient training. The second, which will be presented at NeurIPS 2022, implements an automatic method to assess, at training time, which sub-graph in the neural networks does not need further training (hence, no back-propagation and gradient computation is necessary for it, saving computation). The day ended with a discussion on big data and frugal AI.
OPCFW_CODE
Should "among" in John 1:14 really be translated "within"? I’ve heard that in John 1:14 the word “among” was wrongly translated from its Greek origin and that it truly meant “within”. Can you shed some light on that please? And the Word was made flesh, and dwelt among us, (and we beheld his glory, the glory as of the only begotten of the Father,) full of grace and truth. [John 1:14 KJV] What you have heard is just an opinion, the Q lacks any basic search and study, and is a poor quality for here. See biblehub interlinear for the lexicon/dictionary of words you want to check. “and dwelt among us” is the correct translation of «καὶ ἐσκήνωσεν ἐν ἡμῖν» for the simple fact that the author states, “and we beheld his glory” («καὶ ἐθεασάμεθα τὴν δόξαν αὐτοῦ») which would not be possible if the Word dwelt within (rather than among) the author and his companions. The verb ἐθεασάμεθα (lemma θεάομαι) is referring to seeing something with the eyes. The author is alluding to Zech. 2, in particular 2:10, where Yahveh states, “I come, and I will dwell in your midst”2 and 2:11, “And I will dwell in your midst, and you shall know that Yahveh of hosts sent me to you.”3, 4 Footnotes         1 Thayer, p. 284         2 LXX: «ἐγὼ ἔρχομαι καὶ κατασκηνώσω ἐν μέσῳ σου»         3 LXX: «καὶ κατασκηνώσουσιν ἐν μέσῳ σου»         4 also, cf. Eze. 37:27 References Wilke, Christian Gottlob. A Greek-English Lexicon of the New Testament: Being Grimm Wilke’s Clavis Novi Testamenti. Trans. Thayer, Joseph Henry. Ed. Grimm, Carl Ludwig Wilibald. Rev. ed. New York: American Book, 1889. Very good answer. +1. Also, "within" is a lot harder than "among" to reconcile with "was made flesh". And of course, "was made flesh" is quite consistent with beholding his glory. In addition to Der Ubermensch's argument, John 1:14 refers to the incarnation of Christ. The indwelling of the Holy Spirit is in the future. The verb ἐσκήνωσεν is aorist, not perfect or present tense. Thus, it implies something that is no longer. Christ is no longer here in the flesh. The meaning that Christ was in us, but no longer is doesn't fit the context. And I will ask the Father, and he will give [δώσει] you another Helper, to be with you forever, 17 even the Spirit of truth, whom the world cannot receive, because it neither sees him nor knows him. You know him, for he dwells with you and will be [ἔσται] in you. 18 “I will not leave you as orphans; I will come to you. 19 Yet a little while and the world will see me no more, but you will see me. Because I live, you also will live. 20 In that day you will know that I am in my Father, and you in me, and I in you. (John 14:16–20, ESV) Nevertheless, I tell you the truth: it is to your advantage that I go away, for if I do not go away, the Helper will not come [ἐλεύσεται] to you. But if I go, I will send [πέμψω] him to you. (John 16:7, ESV) Summary From a New Testament perspective there are three possible conditions John could be describing: The Incarnation: Jesus living among the disciples before His death and resurrection. The Resurrected Christ: Jesus living among the disciples after His death and resurrection. The Church: The body of Christ after His ascension. The are positives and negative arguments to each. The Incarnation The vast majority of scholars understand John to be describing the earthly existence of Jesus before His death and resurrection. Hence, the Incarnation. Jesus taking on human form. The main argument in favor of this understanding, is it agrees with the historical reality and agrees with the term, σάρξ, flesh. One factor arguing against this understanding is the use of δόξα, glory, which is repeated and is usually used to describe the resurrected Christ: John 7:37-39 (ESV): 37 On the last day of the feast, the great day, Jesus stood up and cried out, “If anyone thirsts, let him come to me and drink. 38 Whoever believes in me, as the Scripture has said, ‘Out of his heart will flow rivers of living water.’” 39 Now this he said about the Spirit, whom those who believed in him were to receive, for as yet the Spirit had not been given, because Jesus was not yet glorified. It is true John speaks of the disciples seeing Jesus' glory in Cana after turning water to wine, but Jesus Himself speaks of what would have to be considered as greater glory as a result of His crucifixion: John 17:4-5: 4 I glorified you on earth, having accomplished the work that you gave me to do. 5 And now, Father, glorify me in your own presence with the glory that I had with you before the world existed. The glory at Cana would pale in comparison to that of the resurrected Christ. The restoration of glory could explain why glory is repeated: John 1:14: And the Word became flesh and dwelt among us, and we have seen his glory, glory as of the only Son from the Father, full of grace and truth. John saw His earthly glory, His earthly existence, and the glory of the only begotten Son from the Father, that is the glory of the resurrected Christ, with the glory He had from before the world existed. Another factor arguing against understanding the Incarnation is the chiasmus in the literary structure of the Prologue which Marie-Émile Boismard calls construction by envelopment.1 The Prologue seems thus to describe a parabola, the base of which touches the earth and the two sides of which are lost in God's infinity. In the course of this double movement, descending and ascending, we meet the same symmetrical landmarks, the most noticeable being the mention of the testimony the Baptist bears to Christ (vv.6-8, 15).2 Boismard diagrams the chiasmus as a parabola:3 The Word With God is Sent | The Word Returns To The Father -------------------------------------------------------------- (a) The Word 1-2 ● | ● 18 The Son in (a') with God. | the Father (b) His role of 3 ● | ● 17 Role of re- (b') creation | creation (c) Gift to men 4-5 ● | ● 16 Gift to men (c') (d) Witness of J-B 6-8 ● | ● 15 Witness of J-B (d') (e) The coming of the 9-11 ● | ● 14 The Incarnation (e') Word into the World ● (12-13) (f) By the Incarnate Word we become children of God If the individual points are to be understood chronologically, the incarnation is found on the left side of the structure. The right side is composed of points following the incarnation: the resurrected Christ. Resurrected Christ The reason for understanding John as referring to the resurrected Christ is the use of glory as described above. This also hints at John 21, a detailed encounter with the resurrected Christ including a miraculous catch and meal and the restoration of Peter. It also agrees with the historical reality. John certainly did see His glory before and after His crucifixion and resurrection. In addition, this is in agreement with the literary structure of the Prologue, if it is understood in the manner Boismard diagrams. The negative to this understanding is the traditional point of view as the Incarnation, and the reference to the disciples seeing His glory before the crucifixion. The Church If John is describing something other than the Incarnation, then the glory of the resurrected Christ can also be referring to children of God which is the Church: Romans 8:21: that the creation itself will be set free from its bondage to corruption and obtain the freedom of the glory of the children of God. 2 Thessalonians 2:14: To this he called you through our gospel, so that you may obtain the glory of our Lord Jesus Christ. An argument against this is the Church is called the body of Christ, not His flesh. Conclusion John often makes statements or uses words with more than one meaning. When considered fully, often more than one meaning is correct and likely intentional. The Prologue is known for two such terms, καταλαμβάνω (verse 5), overcome or comprehend and ἐξηγέομαι (verse 18) to be a leader or to make known. In each both meanings are correct. Darkness does not comprehend or overcome the light. The Son makes known and leads all to the Father. Therefore, verse 14 is probably intended to describe everything John saw. Specifically it refers to the resurrected Christ which must include what is described before His death. It must also include the Church, the glory of Christ bringing Gentile and Jew together. If this is so, then what was initial, the Word which dwelt among us later became the Word which dwelt within and in us. In his answer, Perry Webb says: The indwelling of the Holy Spirit is in the future. The verb ἐσκήνωσεν is aorist, not perfect or present tense. Thus, it implies something that is no longer. Christ is no longer here in the flesh. This correctly describes the Church. Christ in the flesh is no longer; but His body, the Church remains. So at the present, the Word who became flesh is within us, if in fact it is in us. Among us is not wrong, but it is incomplete. After John experienced the Word which dwelt among the disciples, he experienced the Word more personally as dwelling in him and then within those who believed Jesus is the Christ, the Son of God and had life in His name. 1. Marie-Émile Boismard, O.P. St. John's Prologue, translated by Carisbrooke Dominicans, Newman Press, 1957, p. 79 2. Ibid., p. 73 3. Ibid., p. 80
STACK_EXCHANGE
You'll have an opportunity to apply your practical expertise as Section of a very successful industrial placement scheme. Global students can implement × We love our instructors, and so will you. We search for proven experience in addition to a humorousness, and that's In advance of we put them through forty several hours of coaching! "This helped quite a bit. I'd missed university and failed to understand what was going on, so I'm happy I had been directed here!" —Kristen In your closing 12 months You furthermore may undertake a personal project which integrates much of your get the job done you may have analyzed in prior several years. The way you are assessed You show up at a combination of lectures and functional sessions for each module. Lectures focus on educating the rules while functional sessions help you set these principles into follow in reason crafted labs. Furthermore, it enabled Superior analyze with the thoughts, and mapping from the human genome became doable With all the Human Genome Project. Dispersed computing projects like Folding@property examine protein folding. The relationship in between computer science and software package engineering is really a contentious concern, which is even further muddied by disputes in excess of what the phrase "program engineering" usually means, And just how computer science is outlined. David Parnas, getting a cue from the connection amongst other engineering and science disciplines, has claimed that the principal emphasis of computer science is learning the Qualities of computation in general, though the principal emphasis of application engineering is the look of distinct computations to realize sensible targets, making The 2 separate but complementary disciplines. Keep in mind I went overboard and have playing cards covering everything from assembly language and Python trivia to equipment Discovering and data. It truly is way an excessive amount of for what's essential. Examinations are accustomed to assess your instant reaction to a set of tiny or medium unseen problems ; Research what sort of facial expressions babies can mimic and how young they start to imitate them. (A different baby brother or sister would be helpful) During this module you can initially be released to many of An important information buildings Utilized in the look and implementation of computer program his response and demonstrated how these are generally applied making use of Java. You'll then discover how to analyse the necessities of algorithm resources to allow you to provide a audio basis for goal choice when dealing with competing algorithms. Realistic modules incorporate supervised laboratories To place into apply rules covered in supporting lectures; Benefits Do you need to broaden your tutoring business enterprise throughout the country? and even throughout the world? Homeworkhelp.com helps you Develop your own On the web Tutoring Heart their explanation without technological hassles. This module aims to deliver college students with business enterprise and company ideas to permit them to analyse and Consider business procedures, principles, theories and frameworks and their marriage for the strategic and operational management of an company or simply a project.
OPCFW_CODE
Knowledge Structures and Constructivism Yesterday I "read" a psychology book about memes (Thought Contagion by Aaron Lynch, 1996). Total number of pages 192. Time spent: about an hour. I did not read every word. First, I had a look at the table of contents. The book contained 7 chapters plus a short epilogue chapter at the end. Each chapter had a number of phrases corresponding to a series of sub-headings (between 10 and 20). I began reading the first chapter and part of the second when I first sensed the actual structure of the book. I then opened a software package called MindGenius that I use for creating mind maps of content that I am playing with. I quickly had a series of branches for the first chapter and part of the second. At that point I realized that for my purposes, I did not need to read all of the book. Chapters 2 through 6 each consisted of a specific example within different disciplines. The general ideas were encapsulated within the first chapter, chapter 7, and the eplogue. I quickly completed a chart for the entire book that reflected this structure. Thus the chart that I created is different than the one implicit in the book. Fair enough. A book is not something to be memorized but a resource for the reader to think about. We each have our own personal knowledge structures (this is the core tenant of constuctivism), which are also usually implicit. We rarely try to explicitly construct a map of a topic that reflects our personal understanding of the topic. Yet by constructing such a map one can often be much more efficient in Learning the material. In this case I was able to skim read most of the chapters, focusing on the first and last chapters. Reading is not "barking at print". Rather it is interpreting print based on one's current knowledge and understanding. By creating explicit diagrams that identify the main ideas and concepts one can continually reflect on the structure and modify it to accommodate new ideas. This is where software packages can make a real difference. It is relatively easy to modify the diagram: adding, deleting, and moving items around until one is satisfied with the result. This process of continually creating and modifying a knowledge structure requires that the reader be an active agent. One must genuinely engage with the material. I have provided a number of examples of using a structure (e.g. a table) to monitor my Learning of calculus and to facilitate time management. Such tables become a part of my Learning framework and permit me to continually reflect on how well I am doing. Sometimes this acts as a goad to focus on certain activities, and sometimes the table needs modification to reflect changing circumstances. Thus I use two different types of structural diagrams (tables and node-maps) to guide my Learning and understanding of topics that are of interest to me. By using software packages to create such structures I have an environment that is both easy to use and m perhaps more importantly, easy to modify. Such diagrams are also an excellent way to communicate one's understanding.
OPCFW_CODE
LatchBio launches the first of many evolutionary bioinformatics workflows to come LatchBio is implementing innovative approaches to streamline bioinformatic workflows through an easy-to-use browser-based platform. The bioinformatics workflows available on the LatchBio platform facilitate diverse analyses across many disciplines in the biological sciences including Gene Therapy and Editing, Cell Therapy, SARS-CoV-2 research, and Next Generation Sequence analysis. “The LatchBio platform makes trusted academic tools accessible directly to biologists with best-in-class user interfaces and a flexible cloud-native data infrastructure accessible from any browser,” says Kenny Workman co-founder and CTO. Recently, LatchBio has expanded the workflows available on our platform to include analyses for the field of evolutionary biology through a collaboration with Dr. Jacob L. Steenwyk, a Howard Hughes Medical Institute Gilliam Fellow working in the laboratory of Dr. Antonis Rokas at Vanderbilt University. As the debut workflow, LatchBio and Steenwyk release ClipKIT, an algorithm for trimming multiple sequence alignments prior to phylogenetic inference (Steenwyk et al., 2020). See and use the workflow here! Across 140,000 multiple sequence alignments that span a broad diversity of evolutionary histories, phylogenies inferred from multiple sequence alignments trimmed with ClipKIT were shown to be accurate and robust. Multiple sequence alignments trimmed using ClipKIT may help address a major goal in evolutionary biology — elucidating the evolutionary history of genes, genomes, and species. LatchBio and Steenwyk plan to continue collaborating and build more evolutionary genomic bioinformatic workflows on the LatchBio platform. “Working with the team at LatchBio is incredibly exciting,” Steenwyk said. LatchBio is solving real problems in the world of bioinformatics and simultaneously democratizing analyses. Now, with the click of a few buttons, scientists with varying degrees of bioinformatic experience can easily conduct high throughput and reproducible bioinformatic analyses — that is really powerful.” Steenwyk was one of the first user's of LatchBio's recently released Software Development Kit (SDK), an open-source bioinformatics workflow development kit. The LatchBio SDK improves upon predecessors such as WDL and nextflow by allowing Python-native development, serverless cloud resource definition (e.g., memory or GPU requirements), and dynamically generated and customizable front-end interfaces. Using the LatchBio SDK, academic python scripts become versioned, containerized, and accessible pieces of software that can be cited and shared within research communities. Shafer, M. E. R., Sawh, A. N., and Schier, A. F. (2022). Steenwyk, J. L., Buida, T. J., Li, Y., Shen, X.-X., and Rokas, A. (2020). ClipKIT: A multiple sequence alignment trimming software for accurate phylogenomic inference. PLOS Biol. 18, e3001007. doi:10.1371/journal.pbio.3001007.
OPCFW_CODE
get severity fails for 1.1.12 splunk images we are scanning these images (had no issue with older versions): splunk/fluentd-hec: (Docker Hub ) 1.2.12 (fails scan) splunk/k8s-metrics: (Docker Hub ) 1.1.12 (fails scan) splunk/kube-objects: (Docker Hub ) 1.1.12 (fails scan) when getting report, it fails with: INFO Authenticating with CrowdStrike Falcon API INFO Downloading Image Scan Report INFO Searching for vulnerabilities in scan report... WARNING MEDIUM CVE-2022-1586 Vulnerability detected affecting pcre2-10.32-2.el8.src.rpm WARNING MEDIUM CVE-2022-25313 Vulnerability detected affecting expat-2.2.5-4.el8_5.3.src.rpm ERROR Unknown error Traceback (most recent call last): File "/home/vmadmin/agent/_work/228/blueprints/templates/steps/script/cs_scanimage.py", line 368, in main f_vuln_score = int(scan_report.get_alerts_vuln()) File "/home/vmadmin/agent/_work/228/blueprints/templates/steps/script/cs_scanimage.py", line 181, in get_alerts_vuln cvss_v3 = details.get('cvss_v3_score', {}) AttributeError: 'NoneType' object has no attribute 'get' ##[error]Bash exited with code '10'. seems details returned as None, was able to get working by adding condition: λ git diff -r main diff --git a/cs_scanimage.py b/cs_scanimage.py index d97017e..c7a7172 100644 --- a/cs_scanimage.py +++ b/cs_scanimage.py @@ -181,11 +181,12 @@ class ScanReport(dict): vuln = vulnerability['Vulnerability'] cve = vuln.get('CVEID', 'CVE-unknown') details = vuln.get('Details', {}) - cvss_v3 = details.get('cvss_v3_score', {}) - severity = cvss_v3.get('severity') - if severity is None: - cvss_v2 = details.get('cvss_v2_score', {}) - severity = cvss_v2.get('severity') + if details is not None: + cvss_v3 = details.get('cvss_v3_score', {}) + severity = cvss_v3.get('severity') + if severity is None: + cvss_v2 = details.get('cvss_v2_score', {}) + severity = cvss_v2.get('severity') if severity is None: severity = details.get('severity', 'UNKNOWN') product = vuln.get('Product', {}) was able to get get pass and get severity: WARNING MEDIUM CVE-2020-26137 Vulnerability detected affecting urllib3 1.24.2 WARNING MEDIUM CVE-2021-33503 Vulnerability detected affecting urllib3 1.24.2 INFO Searching for leaked secrets in scan report... INFO Searching for malware in scan report... INFO Searching for misconfigurations in scan report... WARNING Alert: Misconfiguration found ERROR Exiting: Vulnerability score threshold exceeded: '18700' out of '500' ##[error]Bash exited with code '1'. Finishing: Crowdstrike image scanning @jhuan4 thanks for reporting this it looks like CVE-2022-2153 is returning a hash with a details key, but the value of the key is None https://github.com/CrowdStrike/container-image-scan/blob/5258ea8b242173e47954bd723d6f7d1b71bda949/cs_scanimage.py#L183 Is checking if the Details key exists, but doesn't check if the value is None I'll create a pr soon to address this issue awesome, thanks
GITHUB_ARCHIVE
You should avoid configuring a network as a transit network between autonomous systems while using BGP. In a multi-homed BGP design, connecting your company network to two ISPs might result in a transit network. Route filtering is used to prevent the advertisement of private addresses and addresses that are outside the scope of the domain when BGP routes are exchanged with multiple Internet service providers (ISPs). Only the enterprise prefixes should be advertised to ISPs. For example, let’s look at the diagram below. If the link between ISP-A and ISP-B goes down, you will become the TRANSIT for all the traffic. Keep in mind this is not just for Internet access, but also if you are running MPLS internally. There are four ways to prevent a Transit AS: 1. Distribute-list Filtering 2. Filter-list with AS PATH access-list. 3. No-Export Community. 4. Prefix-list Filtering If you wish to filter the BGP routes you send or receive depending on AS Path information, this is the procedure. Inbound and outbound AS Path filters can be used to filter the routes you transmit and receive accordingly. These filters must be applied to each peer separately. Reg-ex is used to customize this, which can be challenging in more complicated scenarios. I hardly see this being used because I think it’s too complex, and it’s best to keep things simple. Matches ONLY AS1234. So if traffic passed through any other AS, this will not work. This is only good if you only flow though one AS which is 2222. So it basically locks you down to routing through one AS. ip as-path access-list 1 permit ^1234$ route-map AS_PATH_FILTER permit 10 match as-path 1 router bgp 1 neighbor x.x.x.x remote-as 1234 neighbor x.x.x.x route-map AS_PATH_FILTER in This one opens it up more. So now we can pass through other AS’s as long as there’s AS 1234 in the PATH. ip as-path access-list 1 permit _1234_ route-map AS-PATH_FILTER permit 10 match as-path 1 router bgp 1 neighbor x.x.x.x remote-as 1234 neighbor x.x.x.x route-map AS-PATH_FILTER in The no-export community can be applied to incoming prefixes. This community informs BGP that the prefix can only be published within the AS, not to outside ASs. This is a straightforward solution that requires little setting and upkeep. Incoming prefixes can be tagged with the no-export community. This is a simple solution with little configuration and maintenance. This community must be specified on incoming prefixes to set up this. Each peer must have the send-community command enabled. ip bgp-community new-format route-map NO-EXPORT set community no-export neighbor x.x.x.x route-map NO-EXPORT in neighbor x.x.x.x send-community This one is commonly used and easy to implement. Prefix-lists are commonly used to filter inbound and outgoing traffic. It may also be used to avoid transit regions, but it must first match all prefixes learned from internal sources before filtering any other prefixes outbound. As a result, it is highly detailed but not scalable. Keep in mind that the configuration may need to be adjusted when prefixes are added, withdrawn, or altered on the network. If it’s an internet setup, you want to advertise your /24 Subnet (Assuming you have a /24). If it’s MPLS internally, you typically want to advertise out all your local LAN subnets or summarized local LAN routes if you are summarizing. ip prefix-list NO-TRANSIT_FILTER permit x.x.x.x/x neighbor x.x.x.x prefix-list NO-TRANSIT_FILTER out
OPCFW_CODE
Don’t crash and burn! This is part 2 of a 3-part series on my first glance at the professional world. Part 1 of this series can be found here. Over the last 4 weeks, I have been working on refactoring the UI of our main application. I have worked with the codebase before, but I only skimmed the surface. This project requires me to look deep into the application to understand how all the views are populated. A little background on the application. It’s a Rails 4.2 app, the front-end is handled by Bootstrap, Bourbon framework (an amazing open-source project from Thoughtbot), and a wide variety of Jquery plugins. Due to the large amount of dependencies, the serving time of the application is not as fast as we would like it to be, as well as limited support for legacy browsers ( IE6,7,8). Therefore, for this redesign, we will significantly reduce the amount of dependencies by “rolling our own” front-end. Except for the text editor and the date picker plugins, everything is written from scratch in JQuery and SASS. - How to come up with a strategy to tackle large projects. - How to not crash and burn ( a.k.a avoid burning out, and maintain productivity from beginning to end ) - Expectation management, and how to give accurate, realistic deadline. Planning, planning, planning Initially, when working on this project, I found myself trying to dive in as quickly as possible. I basically picked a random section of the application, and jumped straight in. As anyone could expect, that decision ended up coming back and biting me in the rear. Let me explain what happened, and why it’s so bad to rush in like that. There are multiple sections of the application that need to be redesigned (more than 15 different sections varying in both size and complexity). It’s logical to isolate each section into its own feature branch, then after each individual branch is fully tested, they can all be merged into a staging branch for a final integration test. At this point, it still make sense for me to randomly choose a section and work one at a time. However, my system fell apart when I realized even though the sections are separated, they share resources (assets, stylesheets, application-wide JS functions). I did not realize this until I’ve already finished 2 sections. At that point, I have to face 2 big problems: - The shared resources have already diverged slightly between the 2 branches. - How to move forward with the rest of the sections. 3 First mistake: Lack of planning and understanding of the workload on a macro level At this point, I still did not realize that I am in big trouble. Instead, I settled for the “quick & dirty” solution. I picked one of the 2 branches as the baseline, on which the rest of the sections will branch out. I rebased the baseline branch onto the second branch. Thus, at this point, I have successfully solved the problems that I have. The divergent of the shared resources has been solved. The rest of the feature branches will be based on the baseline branch. I happily moved on with the project. Lesson learnt: Know your enemy, identify and understand your problems first Second mistake: Settle for sub-optimal solution resulting in technical debt down the line Now that all of the sections are all based on the first baseline branch, the technical debt finally appears. Remember the part where I said each section need to be in its own feature branch ? It’s no longer case. Every feature branch contains 2 sections, the section from the baseline branch and the section of the feature branch. At this point, I’m in huge trouble. Any changes made to the first section has to be applied across all of my branches. That creates merging problems down the line, as well as creating noise in the commit history of the branches. Lesson learnt: Avoid the quick and easy solutions at all costs. They may save you time now, but they cost a lot more to fix later on. Third mistake: Dirty commits!!!! As mentioned in the previous mistake, a byproduct of the multiple features on one branch is the noise in the commit history. Since I was working under the pressure of time, I tend to make one big commit that modifies multiple aspects of the feature. There tend to be one big commit that adds the assets, modifies the views template, and adds new methods to the controller. It’s convenient for me to do it that way, but it is almost impossible to do any kind of code reviews on my branches. The list of file changed for each commit would be filled to the brim with assets files (images, stylesheets, JS files…) making important changes to the controller extremely difficult to find. Instead, I should have separate the commits into 3 categories: supporting commits ( adding assets), non-breaking changes (add/edit/remove views templates), and breaking changes (changes made to the controllers/models/routes). This way, anyone could quickly identify and prioritize the breaking changes to review. Lesson learnt: Move irrelevant information of the commits. Separate breaking and non-breaking changes into separate commits. Keep the commit messages concise and meaningful. As a recap, here are the lessons that I learnt: - Know your enemy, identify and understand your problems first - Avoid the quick and easy solutions at all costs. They may save you time now, but they cost a lot more to fix later on. - Move irrelevant information of the commits. Separate breaking and non-breaking changes into separate commits. Keep the commit messages concise and meaningful. This blog post has grown to be long to for a quick read, so I decided to move the rest of the concepts: avoiding burning out, and expectation management to the another post.
OPCFW_CODE
Run pip in (conda) virtualenv without activating it first As in the title, I've created some env in conda like so conda create -n myenv python=2.7 I'd like a command to run pip install inside the env without doing source activate myenv first. Is this possible? I believe whatever method I use would have to work out the PATH etc. By default, environments are created at /path/to/anaconda/envs/env-name, you can look there. Why do you need to do this? @darthbith Useful if you have a python script to create multiple environments to set up a new user. I've gotten it to work with conda run -n [env_name] pip install [dependency] or something similar. But, it's not working right now on Windows. Any idea? @illan If you're creating multiple environments to set up a new user, I'd suggest using environment.yml files, which can already encode pip dependencies. Then you don't need to run pip for anything. Assuming you are using Anaconda in a Bash shell environment, one option would be to add the Anaconda bin path for your created virtual environment to your PATH variable in such an order that the pip binary in your virtual environment comes before the system pip. If your Anaconda virtual environment is located in the default .conda directory in your home directory you could do this as follows: export PATH=~/.conda/envs/myenv/bin:$PATH If you wanted this to be the default behavior for your shell environment you could add the above command to your ~/.bashrc file. This method is similar to the approach reccomended for setting the PATH variable to enable you to use the Anaconda binaries for the "root" environment, detailed in the Anaconda documentation. link is outdated now? Thanks @Matifou. I've just fixed the broken link. Yes, this is possible! Conda run does the same as conda activate / command / conda deactivate. command_install = f'conda run -n {env_name} python -m pip install {shared_dep}' So, for example: conda run -n nlp python -m pip install transformers Note that you must first install pip in the environment. Otherwise, the environment's pip is a link to the system pip and everything will install to the base python/conda interpreter. conda install -n nlp pip -y Another solution that may work for you (it worked for me to script pip installs from command line) is to use & to tunnel commands. For instance: conda create -n myenv python=2.7 & conda activate myenv & pip install {some packages} & conda deactivate That works for me on Windows machines and it should work the same way on Linux as well. Information regarding the & command However due to the way that pip is set up I don't think there's a way around activating the environment first. Hope this at least helps!
STACK_EXCHANGE
THE 2022 UN BIG DATA HACKATHON: APPLY NOW! Apply for the 2022 UN Big data Hackathon Application Deadline: 15 September About the UN Big Data Hackathon Following the AIS Big Data hackathon of 2020 hosted by the United Nations Statistics Division (UNSD) , UNCTAD, UN Global Pulse, Marine Traffic, and CCRi, and the UN Youth Hackathon of 2021 hosted by the UNSD, MGCY, and UNGP regional hubs, this year, The United Nations Committee of Experts on Big Data and Data Science for Official Statistics (UNCEBD), Major Group for Children and Youth (MGCY) and Statistics Indonesia (BPS), is hosting the “2022 UN Big Data Hackathon”. It is jointly organized by the United Nations Statistics Division (UNSD), Global Platform Regional Hubs (Rwanda, UAE, Brazil & China), UN Global Pulse, Asian Development Bank (ADB), Islamic Development Bank (IsDB), UK ONS Data Science Campus, Statistics Canada, and United Nations Conference on Trade and Development (UNCTAD), in consultation with the members of the Task Teams of UN Committee of Experts on Big Data and Data Science. The 2022 UN Big Data Hackathon will happen over the course of 4 days (November 8-9-10-11 2022). It will consist of two tracks; the ‘Big Data Expert’ track and the ‘Youth’ track. The ‘Big Data expert’ track will bring together teams of Big Data experts, whereas the ‘Youth’ track will be targeting teams of students or young professionals (under the age of 32). Both tracks can either apply to attend the event in person in Indonesia, UAE, Rwanda, and Brazil (with other on-sites being considered) or virtually. Limited funding is available for in person attendance. The Hackathon aims at developing ideas and solutions to help achieve the Sustainable Development Goals and assist in resolving Global challenges, in line with a theme that will be revealed closer to the event dates. Registrations should preferably be done as a team of 3 to 5 people. Individual registrations will be sorted and assigned into teams based on the application form and skill levels. Please fill out this survey ONCE for the whole team before September 15, 2022. The selected participants will be announced on September 24, 2022. Follow the link below to register for the UN Big Data Hackathon: Register here To join the webinars and receive all updates regarding the Hackathon, please click here to join the mailing list. For any questions or concerns, please reach out to us on: email@example.com ALSO CHECK: APPLICATIONS ARE NOW OPEN FOR THE 7TH EDITION OF THE INTERNATIONAL YOUTH TO YOUTH SUMMIT 2022 To receive more opportunities: Join our WhatsApp group and Telegram group. Visit our social media pages:
OPCFW_CODE
HP MINI BROADCOM WIRELESS DRIVER DETAILS: |File Size:||5.2 MB| |Supported systems:||All Windows 32x/64x| |Price:||Free* (*Free Registration Required)| HP MINI BROADCOM WIRELESS DRIVER (hp_mini_3848.zip) Bluetooth Driver Software. Virtually every new release of kb leads to wireless problems with my hp mini 110 and its broadcom 4312 802.11b/g/n wlan/bluetooth module. Does anyone know the best wireless adapter. This is the latest broadcom bcm43xx wireless adapter driver for your computer. I have one more problem that i don't have with w7. Choose to select the location of the new driver manually and browse to the folder where you downloaded the driver about wireless lan driver, windows oses usually apply a generic driver that allows. However, this wireless technology is recommended to be used for transferring data between compatible devices within short ranges. Broadcom Virtual Wireless Adapter. Hp compaq 6715b notebook broadcom wireless lan driver 6.10 a windows xp was collected from hp official site for hp notebook. He says, it may be my broadcom wireless network adopter's problem. 9.20 for windows 7 x32/x64, vista x32/x64, xp x32/x64 broadcom 2070 bluetooth driver and software ver. I have tried several approaches found here and it looks like the interface is recognized, but still see no wireless networks in networkmanager, nor can i activate wireless in systemsettings. I don't know which wifi card i have, label's rubbed of, but google says its a wlan 802.11a/b/g/n broadcom 4322agn . This version does not list support for windows 10. Right now i'm using win7 64bits ultimate , but i've tried windows 7 32bits too, and everything you and some other people other forums posted, including the older version of the broadcom driver you posted. This driver supports 802.11i/ wpa2 for wlan cards that are capable of 802.11i. This package contains the files needed for installing the broadcom wireless 802.11b/g adapter driver. I am having a hp pavilion g4, microsoft windows 7 professional 64 bit laptop with the currently installed broadcom 802.11n network adapter 126.96.36.199 driver dated 07 oct 15 with details of the driver as under , driver files , c, \\windows\\system32\\ c, \\windows\\system32\\. Hp 2133 mini-note pc broadcom wireless lan driver 7.20 windows xp was collected from hp official site for hp notebook. This driver below is for the broadcom 802.11n network adapter using a windows 10 operating system. And this is as far as i go with my knowledge and i don't know how to follow. This package contains drivers for the supported broadcom wireless lan adapters installed in the supported notebook/laptop models running the supported. These chipsets are not natively supported by centos. I'm currently trying to get wireless working on a hp-mini which has a broadcomm chip. This package provides broadcom wireless-dw 1560 is supported on xps 9343 running the following operating systems, windows 8.1. Broadcom wireless lan hp pavilion g6. AXIS PAD GGE900 WINDOWS 10 DRIVER. This package provides the driver for broadcom bcm4352 wireless bluetooth 4.0 driver and is supported on alienware that are running following windows operating systems, windows 8. My first review on youtube hope you all appreciate budget bluetooth speakers from hp. Re, dw1501 wireless-n wlan half-mini card i have purchased 5 latitude e5520's over the last year for our company, and all these have the 1501 card in them, and apparently will top out at 74mbs. Broadcom 802.11n network adapter driver is an important driver package that can enable your pc to gain full access to features and services offered by the networking hardware created by this case, a wireless 802.11n wi-fi module that can be used to create wlan networks of all sizes, where your home pc or laptop can get in contact with other network objects such as home or work pcs. Solved, i am looking for the correct driver for linux for the broadcom limited bcm43142 802.11b/g/n 14e4, 4365 rev 01 in my hp envy 15-k167cl. This package does not install broadcom ihv extensions on hp notebooks that do not support cisco compatible extensions. Download Monster Pusat V4 Gaming Mouse Drivers For Windows, Mac And Linux. However, in order to use all available features of this hardware, you must install the appropriate. Broadcom bcm430n driver download and update for windows. This download installs base drivers, intel proset for windows device manager*, and intel proset adapter configuration utility for intel network adapters with windows 10. Try to set a system restore point before installing a device driver. The driver was brought out by dell and is version 188.8.131.5261 of the driver. Intel wireless bluetooth driver for windows 10 64-bit for intel nuc version, 21.50.0 latest date. I used to run a debian build, sparky linux, which required me to run modprobe b43 to get wifi working, however when i do this in mint nothing seems to happen, still only ethernet connections listed not plugged in . Using warez version or not proper hp mini 1000 broadcom wlan driver driver install is risk. The wireless card can be found in the vostro a90 and inspiron mini 9 910 dell computers. Download latest drivers for broadcom network on windows 10, 8, 7 32-64 bit . The broadcom driver must be installed first before any others. Broadcom doesn t support open-source much at all. Note, a wireless network must be set up in order to establish a wireless connection. Check your hardware and software before you install the driver. Can Creative. Broadcom bluetooth on 32-bit and 64-bit pcs. I want to install the latest wlan driver maybe from another laptop model for my wireless card to see. DOWNLOAD Need. I cannot find any better driver. We would also be happy to hear any ideas you have on how to improve our website. Hi, i have a dell xps8300 with a dw1501 wireless-n wlan half-mini card. Package does not install broadcom ihv extensions on hp notebooks that do not support cisco compatible extensions. I want to install the latest wlan driver maybe from another laptop model for my wireless card to see if it will resolve my problem or not. When i uninstalled the ati driver then installed the broadcom driver and reboot. Access broadcom's customer support portal to obtain warranty information, find documentation and downloads, and answer other questions you might have. Broadcom 802.11 linux sta wireless driver source. Ubuntu linux doesn't automatically support hp mini 210 broadcom b43 wireless card. Broadcom virtual wireless driver issue with windows 10. This driver was last updated in 2011 and is still the current driver if you have a broadcom bluetooth device in your computer. The package provides the installation files for broadcom 802.11ac network adapter wireless driver version 7.35.338.0. These highly compact socs integrate all functions such as mac, phy and rf, and deliver industry s leading connectivity experence to end users, while lowering implementation costs for oems. How to install 802.11n usb wireless driver step by step - duration. M.com will be undergoing maintenance and will not be accessible from 2am pst to 5am pst on saturday february 15th, 2020. The choice of which driver your card uses. Find wireless, wifi, bluetooth driver and optimize your system with drivers and updates. Red Hat Product Security. This package contains drivers for the supported broadcom wireless lan adapters in the supported notebook/laptop models and operating systems. Thank you for posting your query in windows 10 insider forums. Installs the intel proset/wireless software for bluetooth driver for the intel bluetooth adapter installed in the intel nuc. Broadcom wlan hp pavilion dv6 5.60.350.6. Latest downloads from broadcom in network card. In internet explorer, click tools, and then click internet options. Drivers are designed to work with chips series bcm43xx standards 802.11 a/b/g/n on supported operating systems windows 10 - 32/64bit.
OPCFW_CODE
I'm not sure if you really have any problem here. Actually here is common mistake made by a lot user that are not into programming. GPU rendering is NOT faster than a CPU rendering, GPU rendering is SOMETIMES faster than CPU rendering, depending on the rendering algorithm it's used and your graphic card your OS the pipeline etc... Blender don't use the classic graphic pipeline that the graphic card are design for (they use raytracing instead of rasteriser) so from this point all is about how fast and efficient are every computation. A little technical explanation: GPU programming is one of my favourite topic in computer science so I will try to keep it simple. Actually CPU is A LOT faster than a GPU for one computation (it's like comparing a smartphone with a desktop CPU, really) and GPU love making all at same time (that why you can use Shift+Z in cycle). Basically if computations are not dependant between them GPU win. -> So usually the problem is more you have dependency between computations slower it will be on the GPU. What I think for your benchmark: You use some shader/material/effect that make dependency between computation or make the GPU wait for some synchronisation and make you GPU render less efficient... So blender has some optimisation problem ? No, if GPUs were just more efficient than CPU what is the point of keeping CPU ? No it's just a trade, you just have to know what you want to do with your scene (render optimisation you know...). Btw that's why blender keep CPU rendring... The GPU is really better when you want real time using a lot a tricks that use parallelism a lot. that's also why a PS4 has 1.6 GHz but with 8 cores (for parallelism again) but that's an other story -> You really have to keep in mind I just THINK this is the problem I don't have your scene to deeply test it or trying track which effect/shader/material take time. Hope that help. =) Even if what I said is true I said also "I just THINK this is the problem" and in fact I've just tested your scene and checked this thread: On my AMD AMD A8-5550M APU with Radeon(tm) HD Graphics Processors (AMD Radeon HD 8550G) with my gentoo kernel 3.17.8-r1 and proprietary driver fglrx 14.12-r3, I have: - GPU: 5:54:54 - CPU: 17:09:73 which is a pretty huge gap... So now look at the thread, for OSX you have 2 type or person: - GPU-> not supported - GPU-> slower So the scene is obviously GPU optimised and what I said before is not the real problem here. The thing you have to know is OSX has a lot of problem with his graphics drivers (for exemple if you try openGL you are stuck with openGL 3.3). From here what I think is your render time difference is due to Apple's drivers. I don't work for blender or Apple so I am not 100% sure but from I see here the problem seems obviously coming from Apple. To be really sure I will make some test on the Mac of my room-mate soon. PS: I'm really surprised the graphics driver from Apple was that bad, I thought it was just "bad" not "really bad"
OPCFW_CODE
How come I don't built a great deal of strength quickly, yet I see countless other lifters who do? I have outlined potential reasons as to why some get stronger faster and easier: 1.Testosterone - while it's not 100% correlated with strength, more promotes more muscle mass, which increases potential for strength. Perhaps I have low testosterone? 2.Genetics - while they are not a 100% make or break, some people have very poor genetics when it comes to proper CNS use, muscle size, efficiency, etc. I have poor balance as well, and lose breath running barely 50 feet, being at a healthy weight. Maybe I have bad genes for powerlifting/fitness, and should just be a lazy, weak nerd? 3.Biomechanics - while they again do not fully dictate things, and genetics also can cover this, some people have larger limbs, digits, hands, thickness, skeletal frame, leverage, muscle placement, recovery speeds faster, etc. Maybe I have poor recovery speed, and bad biomechanics as well? 4.Stress - I have constant anxiety and sleep poorly, and suffer from stress a lot due to social anxiety. Maybe that explains why I so poorly can improve? So my question is, and being truthfully, aside from taking a buttload of roids, and working out, are some people just destined to fail easily at weight training? Paradox notes: I notice that after one week off or so I get dramatically weaker, but if I workout more than once a week I fatigue and overwork, and suffer from pain. I have worked out for years, but fatigue after several reps, and suffer pain for days if I do too many reps. My focus is only strength, so I do tend to stay with low reps, but I never get stronger. I even tried 5x5, but nothing. Perhaps I've learned something in these years? Should I just throw in the towel and agree that aside from roids, I am a low-T, poor genetic mold of a person who works out years properly, with proper diet, and still can't compare to the average man, and admit that weight lifting is not for me? Or is there an alternative? And do not say working hard because I have done that all my life, used proper technique, proper diet, and have not made any gains in years. How are we to know what is not working in what you do if we don't know what it is that you do? Tell us the specifics of your workout schedule and diet, including sets reps weights exercises and how often and what you eat and how much...and we might be able to debug what the issue is. Without that info, this is way too broad. Instead of considering what you are doing right you should be focusing on what the people you are describing are doing right. Do these people actually exist? If so, what are they doing? It sounds like you've done what a lot of beginners do: Jump around with different things until you find something that makes you magically strong. You should instead be focusing on 3 things: Diet, your program, sleep. In that order. You can't get stronger if you don't eat enough, you can't get stronger if you don't stick to a program, and you can't get stronger if you don't give yourself time to recover. Granted I don't know who the people you are comparing yourself to are, but I can tell you that I've countless people (myself included) throughout the years get stronger just by setting those three things. Do you have a real diet? By real I mean having calculated caloric intake and measuring it. Do you have a program that works via progressive overload? Have you stuck to a program for more than just a few months? I'd focus on settling into a real diet and program rather than thinking about whether you have low testosterone or need steroids.
STACK_EXCHANGE
The architecture of an Oracle 10g database After completing this topic, you should be able to distinguish between the basic components of Oracle database architecture. In this exercise, you're required to identify and distinguish between the files, memory structures, processes, and tools in Oracle Database 10g. This involves the following tasks: - identifying database core files - distinguishing between storage units - recognizing memory areas - distinguishing between processes - recognizing elements of the data dictionary Task 1: Identifying database core files You are a database administrator and your company is changing over to Oracle Database 10g. You want to refresh your knowledge of the architecture and framework of Oracle Database 10g. First, you have to make sure you know what the constituent files of the database are. Step 1 of 5 Which of these files make up the core files of an Oracle database? - Archive log files - Control files - Data files - Redo log files The Oracle database is made up of control files, data files, and redo log files. Option 1 is incorrect. While the archive log files are part of the database, and contain an ongoing history of the redo logs, they are not database core files. Option 2 is correct. Control files contain data about the status of the physical data files stored in the database. Option 3 is correct. Data is stored in the database in data files. The database is made up of one or more tablespaces, each of which can contain one or more data files. Option 4 is correct. Redo log files are used to help recover database instances if the original data is lost due to a system failure, such as a power outage or a computer fault. Task 2: Distinguishing between storage units Now, you want to refamiliarize yourself with how data is stored in Oracle Database 10g. Step 2 of 5 Match the storage units to the appropriate descriptions. - Data blocks - Used to group related logical structures together - The smallest unit of data used by a database - Used to contain database objects, such as tables and indexes - Made up of contiguous data blocks Tablespaces are used to group related logical structures together. Data blocks are the smallest unit of data in a database. Segments are used to contain database objects, such as tables and indexes, and extents are made up of contiguous data blocks. When a set of data blocks is requested by the database, the OS aligns them with real OS blocks on the storage device. As extents are made up of contiguous data blocks, each extent can only exist in one data file. A segment is made up of one or more extents. Tablespaces contain one or more data files, which physically store the data of all the logical structures in the database. Task 3: Recognizing memory areas You need to understand how memory areas perform various functions in Oracle Database 10g. Step 3 of 5 Match the memory areas with their characteristics. - Where the Oracle instance holds data in buffers and memory caches - Used to store data and control information for each server process - Contains a shared pool, a large pool, a Java pool, and a streams pool - Private to its server process, and is read and written by Oracle code acting on its behalf The SGA is where the Oracle instance holds data in buffers and memory caches. It contains a shared pool, a large pool, a Java pool, and a streams pool. The PGA is used to store data and control information for each server process,is private to its server process, and is read and written by Oracle code acting on its behalf. Task 4: Distinguishing between processes It's also essential that you know the difference between a background process and a server process. Step 4 of 5 Which of these are background processes? - Database writer - Enterprise Manager - Process monitor Checkpoint, database writer, and process monitor are all background processes. Option 1 is correct. The checkpoint tells the database writer about data changes. It also informs the database's data files and control files about the most recent checkpoint. Option 2 is correct. The database writer is used to write the changed data from the database buffer cache to the long-term store on the hard disk. Option 3 is incorrect. The Enterprise Manager is not a background process, it is an Oracle tool. When invoked, it creates a server request. Option 4 is correct. When a user process fails, the process cleanup is performed by the process monitor. Task 5: Recognizing elements of the data dictionary Finally, you should understand how the data dictionary functions in Oracle Database 10g. Step 5 of 5 Which of these are characteristic elements of the data dictionary? - It's created at the same time as the database - It's updated when the structure of the database is updated - It performs crash recovery - It is used by Enterprise Manager to show information about tables and views The data dictionary is created when the database is created, and it's updated whenever the database is modified. It's also used by the Enterprise Manager to show information about tables and views. Option 1 is correct. The data dictionary is a read-only reference containing information about the sets of tables and views in a particular database. Option 2 is correct. The data dictionary also contains the allocated space for a schema object and the amount currently in use. Option 3 is incorrect. It is the system monitor, a background process in the PGA, that performs crash recovery. Option 4 is correct. The DICTIONARY view in the Enterprise Manager provides useful descriptions of the data dictionary tables and views.
OPCFW_CODE
I would like to move the beans_headerinto a pre-built div in a template - not wrapping in a new div (with echo beans_open_markup) but just call it/move it to another div in the html markup. I have tried the following: <div class="myDiv"> <?php echo beans_open_markup( 'beans_head', 'head' ); do_action( 'beans_head' ); wp_head(); echo beans_close_markup( 'beans_head', 'head' ); ?> </div> but no luck... beans_head are not the same. beans_head is the head par of the page which contain all meta tags scripts etc. while beans_header is where the logo and nav are. I am not sure I understand 100% where you want to move the header, do you have a markup id which you want to move the header to? If you want to manually call it on your HTML what you can do is removed the action which attach the header to beans_site_prepend_markup and then call the function manually in your HTML. So like this: beans_remove_action( 'beans_header_partial_template' ); Then you can call the function in your HTML as such: <div class="myDiv"> <?php beans_header_partial_template(); ?> </div> If instead you wanted to move it, then you would change the hook to what ever you want. Let's assume you want to move it above the footer for the sake of the example, you would simply have to do so: beans_modify_action_hook( 'beans_header_partial_template', 'beans_footer_before_markup' ); Hope that makes sense, Thanks the examples above kinda work however i then get two menus? Essentially i was trying to move the menu to within a div that has a background image(hero image)...the menu would have a transparent background so that the image from the containing div would show through and float above the hero image. I have however since given the header section a position of absolute which then alows the menu to sit above the hero image section now. Weird that you have two menus, some of your code might varie or you perhaps added all snippet above. I am not sure if you already know but If you want to add a div around any markup you can use the beans_wrap_markup() function (see its code reference here). So if you wanted to add a div around beans_header, you could do as follow: beans_wrap_markup( 'beans_header', 'example_header_wrap', 'div', array( 'class' => 'example-class' ) ); If you wanted to add a div inside a div wrapping all children, then you would use beans_wrap_inner_markup() (see its code reference here) in the same fashon. Hope that helps, Cool thanks Thierry. All working now, I was heading on the wrong tangent with what i was trying to achieve.
OPCFW_CODE
Senior Technical Writer (API) - Part-Time Remote or Durham, NC Orbis Technologies, Inc. provides award-winning products, solutions, and services powering enterprise software for hundreds of clients across four continents and fourteen countries. Orbis software and services support mission-critical Enterprise Content Management Software, Solutions, Services, and Analytical Software Solutions. We make semantic technologies perform on a global scale. Every day, thousands of users at our prestigious commercial, military, and federal customers depend on our software and solutions. When other partners miss the mark, Orbis gets the job done. Orbis is seeking a Senior Technical Writer with experience writing software documentation and an interest in getting into developer-oriented (API/SDK) and end user documentation. This is a position for a technical writer who has worked on multiple kinds of projects previously and enjoys applying their expertise and intellect to learning new things. It is a client-facing role, involving supporting various high-tech companies with their documentation production. Remote work is available for this position. Essential Duties & Responsibilities: + Develop content to clarify the users' understanding of the REST APIs using the software developer notes and your own understanding of the code: reference documentation, conceptual information, user and integration guides. + Annotate comment fields in JSON, YAML, HTML, XML, and other related file types. + Author content in MadCap Flare or related tools (e.g. Document! X). + Store content and manage version control using content repositories such as GitHub. + Collaborate with a team of writers, solutions architects, developers, project managers, subject matter experts, and other various stakeholders. + Ensure that the documentation you work on is accurate and follows clearly defined standards identified for a given project. + U.S. citizenship + 5+ years experience in technical writing with technical writing being your primary job function. + Bachelor's degree in technical writing, journalism, communications, or related field. + Aptitude for understanding software products and code. + Experience with researching and gathering relevant technical data from subject matter experts such as software developers and programmers, product managers, customer service managers, and as well as client resources, such as solutions design documents, JSON files, Jira tickets, and/or messaging tool comments. + Experience with: Visual Studio Code, NotePad++ or related editor; HTML, JSON, YAML, XML (preferred) + Some experience or at a minimum a conceptual grasp, interest, and aptitude for writing developer documentation, including knowledge of API reference and integration guides. + Possess excellent interpersonal and general business communication skills. + Experience working independently as well as in a team-oriented, collaborative environment. + Demonstrated success working in an environment driven by budget, scope, and deadlines across multiple clients/projects. Preferred Skills & Qualifications: + Visual Studio Code, NotePad++ or related editor + REST API's + MadCap Flare Optional Experience Preferences: + Oxygen XML Editor + Microsoft Visio + Adobe Illustrator + Adobe InDesign + Adobe FrameMaker + Microsoft Word + Adobe Acrobat Professional + Articulate 360 Please note, this job description does not cover or contain a comprehensive listing of activities, duties, or responsibilities required of the employee for this job. Duties, responsibilities, and activities may change at any time, with or without notice. Work Hours & Location: Part-time, remote position with hours of operations from Monday to Friday, between 8 am to 5 pm. Must be located in the United States. Keyword: HTML, XML, SDLC, MadCap Flare, developer documentation, API, Visual Studio From: Orbis Technologies
OPCFW_CODE
On the etymology of "conundrum" The word conundrum "sounds" very Latin (or at least, it does not sound English enough to me). Yet, it seems its origin is unclear. Wiktionary states: A word of unknown origin with several variants, gaining popularity for its burlesque imitation of scholastic Latin, as "hocus-pocus" or "panjandrum". If there is more to its origin than a nonce coinage, Anatoly Liberman suggests the best theory is that connecting it with the Conimbricenses, 16C scholastic commentaries on Aristotle by the Jesuits of Coimbra which indulge heavily in arguments relying on multiple significations of words. In effect, this blog, sponsored by the Oxford University Press, proposes seven alternative origins. Three are directly related to Latin (quoting in extenso from above): Since conundrum means “pun” and presupposes an imaginary or fanciful agreement between some two objects, the etymon may be Greek koinon duoin (Latin commune duorum); substitute Latin duorum for Greek duoin, and you will get a good approximation of conundrum. Perhaps conundrum is a modified and disguised form of Latin conventum “agreement.” For v the letter u often turns up in books. Conuentum could have been misunderstood and mispronounced as conundrum. One of the citations in the OED runs as follows: “These conimbrums, whether Reall or Nominall, went down with Erasmus like chopt hay.” (1651; the first citation of conundrum goes back to 1586.) Here is the etymology, published in The Nation 57, 1893, No. 1481, p. 370 and signed by the initials C.S.P.: “There surely can be no doubt what this word [that is, conimbrums] is. The reference to realists and nominalists shows that something in the scholastic philosophy is referred to; and ‘conimbrum’ is easily recognized as meaning argumentum Conimbrienum. The doctors of Coimbra, in their celebrated commentaries published in the sixteenth century, have in all cases a great deal to say of the ‘multiplex significatio’ of one word and another. Indeed, such remarks are their great weapon. They used it for all it was worth, and a little more. Accordingly, a dealer in verbal quibbles might naturally have been called by Oxford students a Conimbricus, and his quillet Conimbrienum argumentum. The original c, which this hypothesis requires, is preserved in another old form of the word ‘conuncrum’. Conimbrica was in the sixteenth century the most usual Latin form of the name Coimbra, though Conimbria is also common. Certainly fascinating stuff. Now, as experienced Latinists, you might be able to give some lights on this issue. For instance, another instance of word transformation going through one of the above channels? Or perhaps evidence that commune duorum is a rare phrase in (mediaeval?) Latin? I am curious on your thoughts on this. Surely a definitive answer will not be given, but any light on the issue would, imo, be a good answer. Sadly, I think you are right, and myself doubt very much that any useful insights can be provided. Indeed, I strongly suspect that the writer of your quoted OED citation may well, in frustration, have been cunningly turning the question into a conundrum of his own. No matter how many etymologies may be put forward, I think that we should accept that there's never going to be one that's really satisfactory. [On a technical ppoint :a conundrum is not a pun precisely, but a riddle or puzzle needing a bit of lateral thinking to solve it. It is often answerable by punning. Punning itself is exemplified by the (good, but untrue) story that General Sir Charles Napier sent the one-word dispatch ‘Peccavi’ after conquering the Indian province of Sind in 1843. ‘Peccavi,’ I have sinned, is spoken just as I have Sind. Any Americans reading may recognize another, in a Presidential electioneering effort from the nineteenth century in 'We Polked You in ’44, We Shall Pierce You in ‘52'. Neither is the answer to a conundrum!]
STACK_EXCHANGE
/*====================================================================* * * regexp *regexmake(char const *string); * * regex.h * * return a pointer to a structure that represents the regular expression * pattern described by the string argument; * *. Motley Tools by Charles Maier *: Published 1982-2005 by Charles Maier for personal use *; Licensed under the Internet Software Consortium License * *--------------------------------------------------------------------*/ #ifndef REGEXMAKE_SOURCE #define REGEXMAKE_SOURCE #include <limits.h> #include <string.h> #include <ctype.h> #include "../regex/regex.h" #include "../tools/memory.h" #include "../chrlib/chrlib.h" regexp *regexmake (char const *string) { char buffer [UCHAR_MAX + 1] = { 0 }; regexp * pattern = (regexp *) (0); regexp * current = (regexp *) (0); if (string) while (*string) { if (pattern) { current = current->next = NEW (regexp); } else { current = pattern = NEW (regexp); } memset (current, 0, sizeof (regexp)); switch (*string) { case REGEX_C_ESC: if (*++string) { current->exclude = false; buffer [0] = (char) (chruesc (*string)); buffer [1] = (char) (0); string++; } else { current->exclude = false; buffer [0] = (char) REGEX_C_ESC; buffer [1] = (char) (0); } break; case REGEX_C_SRT: if (*++string == REGEX_C_NOT) { current->exclude = true; string++; } else { current->exclude = false; } string = charset (string, REGEX_C_END, buffer, sizeof (buffer)); if (*string) { string++; } break; case REGEX_C_ANY: current->exclude = true; buffer [0] = (char) (0); buffer [1] = (char) (0); string++; break; default: current->exclude = false; buffer [0] = *string; buffer [1] = (char) (0); string++; break; } current->charset = strdup (buffer); switch (*string) { case REGEX_C_KLEENE_ONCE: current->minimum = 0; current->maximum = 1; string++; break; case REGEX_C_KLEENE_STAR: current->minimum = 0; current->maximum = REGEX_COUNT_MAX; string++; break; case REGEX_C_KLEENE_PLUS: current->minimum = 1; current->maximum = REGEX_COUNT_MAX; string++; break; default: current->minimum = 1; current->maximum = 1; break; } current->next = (regexp *) (0); } return (pattern); } #endif
STACK_EDU
Choosing clock buffering circuit The modular 5V-powered design's module is having two output clocks (21 MHz and 3.5 MHz) to other modules in the system. I am looking for the best way for buffering these signals so that they would be enough fanout / strength and least distortion for several distant wires attached within the system. Length can be considerable, up to 60 cm (25 inches). The choices are: use LVC1GU04 unbuffered inverter (the same type used in the Pierce oscillation circuit), use buffer like LVC1G126, or LVC1G14 Schmitt trigger. During investigation I found out several pieces of information making me a little stuck with making the conclusions. The functional difference between LVC1GU04 and LVC1G126. Comparing the datasheets I see no much difference. Historically, I would expect 1G126 to be more load-capable, but here it looks like just buffer with enable. Thus is there any rationale on using 1G126? This document called Use of the CMOS Unbuffered Inverter in Oscillator Circuits says: An unbuffered inverter itself may not have enough drive for a high-capacitive load. As a result, the output voltage swing may not be rail to rail. This also will slow down the edge rate of the output signal. To solve these problems, a buffer or inverter with a Schmitt-trigger input can be used at the output of the oscillator. But I do not see proof of this in the datasheets of 1GU04 and 1G126, both circuits are being tested by the 30 pF and 50 pF loads. The starting, general clock buffering circuit I obtained looks like the follwing: It uses standard DIP packaged 74LS04, series resistor to match impedance (and I guess limit output current), and pull up to ensure high level is maximally close to 5V as LS04 is TTL and is not rail-to-rail. Is there any better buffering circuit for the LVC1G(U)04 or other related small logic 1G chips in this family? What would be the frequency limit of the input clock? I am looking for guidance. I can not change design and length of the traces in other modules, thus the effort is only about designing proper buffering/redriver circuit. For that distance, I would be looking at something designed to be a backplane driver (Iol and Ioh of at least 20mA and preferably more). TI (and others) have such things. Take a look at: https://www.ti.com/logic-circuit/buffer-driver/non-inverting-buffer-driver/products.html Thank you for the link! According to the list, all three chips LVC1G04/LVC1G126/LVC1g14 with +-32mA would satisfy these requirements. It's not clear what can and can't change in your design. For the 21MHz clock my first concern would be cabling to make sure you have a relatively controlled transmission line. Also keep in mind that you can connect gates in parallel to increase the driving strength.
STACK_EXCHANGE
easier way to create arrays While the idea of using a function to create a nested namespace is neat, it makes creating arrays harder: import marray import numpy as np xp = marray.masked_array(np) a = xp.asarray(np.arange(10)) a.mask[...] = np.arange(10) % 2 == 0 What I'd like to have (but don't quite know how easy it is to support, nor if it actually is a good idea) is something like this: import marray import numpy as np a = marray.MaskedArray(data=np.arange(10), mask=np.arange(10) % 2 == 0) xp = a.__array_namespace__() # nested namespace Alternatively, this could also work (since the Array API doesn't forbid adding non-standard things to the namespace): xp = marray.masked_array(np) a = xp.MaskedArray(data=np.arange(10), mask=np.arange(10) % 2 == 0) However, I would imagine that this makes creating / composing arrays a bit harder. And since we don't subclass arrays classes anymore, maybe we don't even need the dynamic namespace (and thus the meta-programming)? Your second example should work. And asarray also accepts the mask. Have you tried import marray import numpy as np xp = marray.masked_array(np) a = xp.asarray(np.arange(10), mask=(np.arange(10) % 2 == 0)) ? Currently mask doesn't appear in the documentation, because it's just copied from NumPy, but that can come later. That's interesting, I totally missed that. I was under the (quite possibly mistaken?) assumption that the Array API didn't allow adding additional arguments, so I didn't even bother to check. I'm not certain, but NumPy 2.1 asarray has an order argument that is not in the standard. @keewis this is now: import marray import numpy as np mxp = marray.get_namespace(np) a = mxp.asarray(np.arange(10), mask=(np.arange(10) % 2 == 0)) # MArray( # array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), # array([ True, False, True, False, True, False, True, False, True, # False]) # ) which is no longer than the suggestions in https://github.com/mdhaber/marray/issues/6#issue-2677340080. marray does not currently expose anything other than get_namespace. It is possible to expose a class MArray that infers the namespace from the arguments: # draft just to illustrate # subclasses might not work correctly as-written class MArray: def __new__(self, data, mask=None): xp = data.__array_namespace__() mxp = get_namespace(xp) return mxp.MArray(data=data, mask=mask) But my impression from your second suggestion is that four lines is short enough. Is this issue resolved? to make it clear, the requirement to create the namespace first feels inconvenient It is possible to get around this for any function that accepts an array as input, but it is not without downsides. The documentation would be independent of the underlying array's documentation. Array creation functions will not return arrays of the desired type unless we add an argument. You have to think a bit more about what to do when the user passes arrays of multiple types. Moreover, it requires a fair amount of internal restructuring. This is all quite a price to pay for users adding a single line in addition to the import. Maybe a MArray 2.0 thing, since almost nothing would change about user code? I'd rather get something working within the existing structure for now. other than feeling somewhat uneasy about modifying the signature of asarray, which I had believed should not be modified We do not currently follow the positional-only, keyword-only conventions perfectly, so I think there are bigger fish to fry w.r.t. to the functions accepting more arguments than they should. So that aside, with mask unspecified, we are (I think) exactly as array API compatible as the underlying library. With some elements masked, the behavior doesn't follow the array API; it is adapted for the presence of the mask. Why in that case does accepting a mask stand out as a problem? closing now that we can use asarray, right? I think so. I think the only possible alternative was that we could expose MArray as a class, and then it could be: import numpy as np import marray x = marray.MArray([1, 2, 3]) So in theory you could instantiate an MArray and use its methods in three lines instead of four. But as soon as you need to get functions from the associated namespace, you'd need: mxp = x.__array_namespace__() adding an ugly fourth line. I don't think this is worth it, considering the alternative is also four line long, looks cleaner, and avoids having to make the MArray class public. I was thinking that this pattern: import marray x = marray.MArray(np.array([1, 2, 3])) xp = x.__array_namespace__() xp.exp(x) was what you'd always get with the array API, unless you use the namespace directly (or at least, that was my impression when working on xarray). Under that assumption the closest to ideal would be this: import marray as ma x = ma.MArray(np.array([1, 2, 3])) ma.exp(x) # equivalent to xp = x.__array_namespace__() xp.exp(x) but I guess since adding additional args / kwargs is explicitly allowed, the current state is fine. I guess now the only issues I have is the registering of mxp in sys.modules (since calling get_namespace multiple times would overwrite the previously registered module), and that we don't have a way of doing isinstance checks for MArray if the class is not public. The latter can easily be resolved by a class that implements __instancecheck__, so nothing major (if you prefer I can open new issues for those, though) if you prefer I can open new issues for those, though Pull requests would be even better! strict doesn't expose Array, right? Under that assumption the closest to ideal would be this: import marray as ma x = ma.MArray(np.array([1, 2, 3])) ma.exp(x) I think I disagree - this goes back to our discussion at https://github.com/data-apis/array-api/discussions/843#discussioncomment-10714668. Here's my reasoning: marray in itself cannot be a standard-compatible namespace. While it is clear what ma.exp(x) should do, since x has an underlying array namespace, it is not clear what ma.asarray([1]) should do, unless we introduce the concept of a default array backend. And like xarray (https://github.com/data-apis/array-api/issues/698#issuecomment-1800078933), I think it would be confusing to provide a namespace which has lots of functions which are in the standard, but misses some like asarray. For xarray, a dispatching module like xarray.ufuncs is needed, since NamedArrays are not compatible with the standard. But for libraries like marray which are compatible with the standard, and hence can provide __array_namespace__, I think using xp = __array_namespace__ is the idiomatic way to interface. Something that arises from this: in https://github.com/data-apis/array-api/discussions/843#discussioncomment-10714668 I discuss the idea of a pint(dask(marray(sparse))) array. I think __array_namespace__().__name__ should be based on the name of the namespace of the array backend. So something like marray(numpy) rather than just mxp. Perhaps there should be some further discussion of this on the standard-side though - I'm not sure how this should interplay with things like is_numpy_namespace from array-api-compat. To be clear about the problem I see with the current __array_namespace__().__name__: In [1]: import numpy as np In [2]: import array_api_strict as xp In [3]: import marray In [4]: mnp = marray.get_namespace(np) In [5]: mxp = marray.get_namespace(xp) In [6]: xp1 = mnp.arange(2).__array_namespace__() In [7]: xp1.__name__ Out[7]: 'mxp' In [8]: xp1.arange(2).data.__array_namespace__().__name__ Out[8]: 'numpy' In [9]: xp2 = mxp.arange(3).__array_namespace__() In [10]: xp2.__name__ Out[10]: 'mxp' In [11]: xp2.arange(3).data.__array_namespace__().__name__ Out[11]: 'array_api_strict' The modules behave differently since they have different array backends, but have the same name.
GITHUB_ARCHIVE
Is Whole Genome Testing in any way useful for Genealogy? There are several DNA testing companies out there that are excellent for genealogists. These include Ancestry DNA, Family Tree DNA, MyHeritage DNA and 23andMe. These companies provide match lists of possible relatives, links and hints to the family trees on their sites, tools to compare DNA segments, and ability to download your matches or raw data so that you can work with them offline or upload them to other sites (like GEDmatch). From a DNA sample, they can provide autosomal, X-DNA, Y-DNA and mt-DNA information, each of which is useful for genealogists in their own way. These tests sample about 700,000 SNPs out of the approximately 10 million SNPs we have. They were selected because they are the SNPs that vary most between humans. There are almost 3 billion more locations that don't vary among humans. A whole genome test will test all 3 billion positions. It has come down in price and may be obtained from some companies for less than $1000. For example, the company YSEQ does a Whole Genome Sequence, where they say: … but they don't say what it is that makes this "Specifically for Genealogy Researchers". So my question is if there is anything useful to a genealogist, for the purposes of helping to find their ancestors, that a whole genome test can provide that would provide enough added value over and above the standard autosomal, Y-DNA and mtDNA tests, that will make it worth the purchase? i.e. Should I get a whole genome test? If so, how will it help me advance my genealogical research? Would the same hold true for a whole Exome sequencing test? Short answer: Not yet. I don't know how a whole genome test would improve matching. It's not like Y or mtDNA where a single allele can make or break things. I'm not saying it won't change matching, just that I'm not sure how. I can hope that testing areas not currently tested (and types of SNPs and non-SNPs not currently tested) would give us information we didn't know was even possible. We'll have to wait and see. Mostly though, what we need is databases. Even with something like Y-DNA, if you do a big-Y test and no one in the database has gone over 111, then it is as if you did 111 too. So sure, if you have raw data with billions of SNPs, it won't change anything genealogically until there is a database of others having done the same test. And that won't happen until the prices really come down. Then there is ancestry composition. I'd think whole genome would improve that but we still need the databases to get the population data.
STACK_EXCHANGE
Embarked units can defend themselves. Special Ability: River Warlord: Triple gold from Barbarian encampments and pillaging Cities. Embarked units can defend themselves. What is the defensive rating given? Does it depend on the unit? Does it effect bombardment, ships Can anybody provide me nudge / hint to solve D:2 "the haystack" in the stone? ( http://www.scarecrowsfield.com/index.php?task=show&level=puzzle&group=1&left=D&right=2) ? I've looked at it for a long time but I haven't been able to come any closer to finding a solution. So, I gave my autistic cousin an XBox Live card ... I had made sure to go in and change his user profile so there was no correct information in it, but what I didn't know is that he's had a couple of incidents with telling people online where he lives (and in one case, someone came to 'visit'... Oh wow. I thought they added an item store to the game. I had no idea they turned it into Farmville with guns... This is what the tutorial plays like: "This is a gun. It uses bullets. You fire that gun. Not just randomly, you fire it at someone. If you fire a gun at someone, it hurts them. If you fire it enough times, they die." It's as if they think people buy the game expecting merely a hat simulator. I'm looking for old football (soccer) game from around 1990. It was sort-of-championship type of game where you played around 10 matches in a row against harder and harder opponents - it started with Japan, went through USA, Russia, to Argentina and Brazil. When you entered penalty area, it chang... Have had a quick look at Battle of Wesnoth, but not sure if it would be suitable for a light operational/strategic level WW2 wargame I'd like to develop over time as a hobby (think "Panzer General" or "Commander: Europe at War"). Programming language does not matter nor does the platform (I have... I've experienced that killing people in the own team becomes more and more common. Sometimes it's because they want to have the vehicle you're taking, sometimes I guess just for fun (I've even seen someone named "Teamkiller...") or as a revenge for being (possibly unintentionally) team-killed. oh man I just got the best idea for SC2 training. I'll make a document app that treats each word I type as if I'm making a drone, and then I'll have to type OVERLORD one word before getting supply blocked or risk having my speakers blare "SPAWN MORE OVERLORDZZZZZ". I'll never forget about it ever again. I am in the middle of a game where I am the Chinese, so I am obviously going to play aggressively. I have declared war on two city states, and in turn, I have had about 6 more declare war on me. Are city states a threat when they Do city states even leave their What are t... I was just writing an answer to a programming question. It happened to be related to game development, but it was a fairly generic Java-question. Once I've spent 5 minutes or so carefully typing in an answer and clicked "save", I was surprised to see that it had been migrated to "gamedev.stackex... Was wondering what the most powerful beam spell combo is in Magicka. I know of a few strong beam combos but it's hard to tell which is the most powerful in terms of raw damage. My personal favourite is Arcane+Steam+Steam+Lightning+Lightning. It seems to be pretty effective. Has anyone done some... Trying to go to the moderator tool privilege page: Causes an error: Also, while you're fixing it, notice that that page is bugged anyway. I have read that when "Play as Guest" mode is selected, Starcraft II campaigns can be played without achievements being recorded to your profile. Every time I click to the "Play as Guest" button, it tells me to authorize my game client. Then I login. Because game does not allow me to switch to g... I still can't believe they actually rebranded a strip club with Duke Nukem Forever stuff. @badp No. Well, maybe for consoles, but there's going to be a PC version. Ars Technica's preview of the game is... a bit of a let down, actually. I guess I'm not surprised that they're saying the game doesn't look that good, considering how many years it spent in production. Oh right, they complained about sluggish frame rates, but apparently they were playing the Xbox 360 version. Not to knock the consoles, but both the 360 and PS3 are slow compared to a modern PC. I guess MS and Sony forgot the reason why they release new consoles every 5-6 years in their lame attempt to lengthen the console lifecycle with motion control toys. With the M72 Law, when I look through the scope and lock onto an attack helicopter and fire, the helicopter makes an invasive maneuver and I never hit it. It seems like the best way to take down a helicopter is just to line it up without the scope and fire. Any other tips to taking a ch... I'm trying to run the following two sql queries on data.stackexchange.com/gaming : And in both cases ...
OPCFW_CODE
[Beowulf] AMD performance (was 500GB systems) Joshua mora acosta joshua_mora at usa.net Fri Jan 11 04:01:29 PST 2013 AMD should pay you for these wise comments ;) But since this list is about providing feedback, and sharing knowledge, I would like to add something to your comments, and somewhat HW agnostic. When you are running stream benchmark it is an easy way to find out what the memory controllers are capable. More down to the usage of that, it translates for a wide variety of applications in terms of data processing throughput , and therefore into the real application's performance, because data is stored in RAM , fetched into caches, processed by cores and then returned to caches to be finally evicted back to RAM while bringing new chunks of data into cache, until the whole data set is processed. Stream does minimal computation, at most the triad but it really exposes the bottleneck (in negative terms) or the throughput (in positive terms) of the processor and platform (when accounting multiple processors connected by some type of fabric: cHT, QPI, network) when looking at the aggregated memory The main comment I would like to add is with respect to your stream bandwidth results. Looking at your log2 chart, it says that AMD delivers about ~100GB/s on 4P system and on Intel it delivers ~30GB/s on 2P systems. I may be reading wrong in the chart but it should be about 140GB/s with AMD (Interlagos/Abudhabi) with 1600MHz DDR3 memory and about 40GB/s with INTEL (Nehalem/Westmere) with memory at 1333MHz DDR3 and about 75GB/s with Sandybridge with memory at 1600MHz DDR3. In order to achieve such significantly higher memory bandwidth for this specific benchmark and there is where I want people to realize is that the data is used only once. There is a loop to repeat the experiment and average timings but in terms of processing , the data is only used once and then you bring a new chunk of data. In other words, there is no reusage of the data in the "near term". Therefore, you do want to boost the processing by getting rid of the data already processed by evicting it directly from the levels of cache closer to the core directly into RAM and bringing new fresh data from RAM into the caches rather than evicting the data recently processed into caches, wasting precious space to store data you dont need "for the time being". If you bypass the normal mechanism you are improving the amount of new data fetched into caches while storing quickly the crunched data into RAM. In order to do so, you want to use non temporal stores, which bypass the regular process of cache coherence. Many applications behave that way since you have to do a pass through the data and you may access it again (eg. in the next iteration) but after you have processed a bunch of more data (eg. on current iteration), hence preventing the cache to keep that data close. Better to get rid of it and bring it again when needed. If you do so, on those applications that are not cache friendly, which is the opposite to what I just described, you will improve greatly the performance of your applications. Finally, I have done a chart of performance/dollar for a wide range of processor variants, taking as performance both FLOPs and memory bandwidth and assuming equal cost of chassis and amount of memory, dividing the performance by the cost of the processor. I am attaching it to this email. I took the cost of the processors from publicly available information on both AMD and INTEL processors. I know that price varies for each deal but as a fair as possible estimate, I get that Perf/$ is 2X on AMD than on INTEL, regardless of looking at FLOP/s or GB/s, and comparing similar processor models (ie. 8c INTEL vs 16c AMD). You can make the chart by yourself if you know how to compute real FLOPs and real bandwidth. I also did the funny exercise to halve the price of the Intel processors (eg. 50% discount) and then the lines of Perf/USD of Intel went to match the lines of AMD, ie. to become Perf/USD competitive or on par without having to discount on AMD. Best regards, Joshua Mora More information about the Beowulf
OPCFW_CODE
Why does importmulti not support zpub and ypub? As far as I can tell importmulti does not work with zpub/ypub. Why not? FWIW I prefer only to deal with xpubs, but I am curious. The "xpub" format was defined by BIP32. It's a standard that specifies how to derive public keys from master public keys and seeds. Parts of it are widely adopted, some parts aren't. However, it does not say anything about how the keys it generate should be turned into addresses, only the keys themselves. Now, at the time, there was only really one obvious way of turning a key into an addresses: by using its hash in a P2PKH (1...) address. This was implemented by numerous pieces of software, which more often than not treated "importing an xpub" as "importing an xpub and watching all P2PKH addresses for the resulting keys". This made sense, because it was how everyone wanted to use them anyway. Then came along Segwit, which introduced two new common ways of paying to single-key outputs. Wallet software needed a way to "mark" an xpub as being intended to be used for P2WPKH (bc1...) or P2SH-P2WPKH (3...), instead of the traditional P2PKH. As the xpub standard had become interpreted as P2PKH only (rather than an address-agnostic way of describing public keys), something other than xpubs were needed. This is why some people adopted ypub/zpub for this purpose. I believe this is confusing, as it is unclear now what an xpub means, and it is not scalable: we can't keep inventing new xpub-like formats for all types of addresses that may be invented. Especially with the introduction of multisig and more complex constructions, which simply don't fit into a single xpub-like thing (because you'll need to combine multiple of them). For this reason, Bitcoin Core is using (and further developing) an approach called Output Descriptors. These are strings that specify exactly and unambiguously what scripts/addresses are desired, based on the involved public keys. These expressions support xpubs, but only in the original address-neutral meaning - the rest is conveyed using functions on top of them. For example: pkh(xpub.../44'/0'/0'/0/*) would describe the BIP44 addresses derived from a particular xpub (P2PKH). sh(wpkh(xpub.../49'/0'/0/*)) would describe the BIP49 addresses derived from a particular xpub (P2SH-P2WPKH). wsh(multi(2,xpub1.../*,xpub2.../*,xpub3.../*)) represents a 2-of-3 P2WSH-embedded multisig. There are many more features in descriptors, and there is ongoing development. Disclaimer: I'm the author of BIP32. Your work is very much appreciated. I am using the descriptor approach for importing into Bitcoin Core and it is very cool. If anyone wants more info here is a useful link (also written by Pieter) on descriptors: https://github.com/bitcoin/bitcoin/blob/master/doc/descriptors.md Is the extended key itself or its derived keys actually imported into the wallet of Bitcoin Core? It seems to be the later. In current versions, the descriptors' keys are expanded at import time. This means you need to provide an upper bound for the range of indices used. In a future version you'll be able to import descriptors directly, allowing them to be expanded on the fly as addresses get used. ypub and zpub are not things that are specified in BIPs. They are things that people have decided to use and specify outside of the BIPs process. Furthermore, they are a layer violation. They specify what kind of addresses a public key should be used to create, but key generation and the address type to create from a key are entirely separate things that shouldn't be mixed together. Lastly, Bitcoin Core does not currently support having a public key be for a specific address type. Any public key in Bitcoin Core can be used for all 3 address types and there is no separation of derivation paths or master keys for different address types. zpub seems to be defined on BIP-84, currently with status draft.
STACK_EXCHANGE
iSCSI vs. Fibre Channel We’ve had several posts here about storage virtualization (a.k.a. SANs) and the role that storage virtualization plays in both server and desktop virtualization. We made the decision some time ago to promote iSCSI SAN products rather than fibre channel, primarily because iSCSI uses technologies that most IT professionals are already familiar with – namely Gigabit Ethernet and TCP/IP – whereas fibre channel introduces a whole new fiber optic switching infrastructure into your computing environment, together with the new skills required to manage it. But there are many who maintain that, although a fibre channel SAN infrastructure may be more expensive, and may require a different set of skills to manage, it offers superior performance. So I was particularly interested to run across an article by Greg Shields on techtarget.com entitled “Fibre Channel vs. iSCSI SANs: Who Cares?” I would encourage you to click through and read this article in its entirety, although you may have to register and give up your email address to do so. But here are a couple of tidbits from the article to whet your appetite: iSCSI vs. Fibre Channel: Who cares? The answer: Statistics suggest that it doesn’t really matter…In most real-world scenarios, the performance difference between Fibre Channel and iSCSI SANs is negligible. Partisans will extol the raw performance statistics of their favorite SAN type, but it’s fantastically difficult to translate raw performance specifications into real-world user experience… Performance alone may not be a decisive factor, but a SAN’s ease of administration can be. The management tools and techniques for Fibre Channel and iSCSI storage infrastructures are substantially different…the skills and experience required to run a Fibre Channel storage infrastructure are difficult to come by – often requiring additional consulting support for most implementations to start correctly. On the other hand, iSCSI SANs lean heavily on the existing TCP/IP protocol. If you have network engineers in your environment, they probably possess most of the necessary skills to successfully manage an iSCSI storage infrastructure. So, while I would once again encourage you to read Greg’s post in its entirety (so you can assure yourself that I’m not quoting him out of context), I must say that I find his comments gratifying, because they tend to reinforce our own conclusions: unless you already have a fibre channel SAN infrastructure, there’s no compelling reason not to go with an iSCSI solution, and several reasons in favor of doing so, including cost and simplicity of management. Anybody out there disagree? And, if so, can you tell me why, exactly, you feel that fibre channel is superior? I recently discovered your site publish and have been reading along. There are many regarding weird responses, however in most cases I simply trust the other commenters tend to be composing. With so many nicegreat reviews of this blog, I thought which i would likely additionally connect as well as explain how I really loved scanning this post. And so i consider this may be our first comment: “I can see you’ve built a few genuinely insightful points. Not more and more people would in fact look at this the method that you only do. I am just really amazed there’s much relating to this matter that is exposed so you made it happen therefore properly, with so much class!” There are a few key advantages that come to mind with VMWare ESX. Because it’s able to aggressively allocate hardware resources (CPU, RAM, disk, network), ESX can take advantage of additional bandwidth in three key areas: 1. Increase VM Guest network traffic capacity while consolidating cabling for VM-dedicated NIC’s. In my previous environment, we were using 4 to 6 GigE interfaces for VM Guest traffic, which 2. In a multi-VM host environment, 10GigE will increase throughput for VMotion server-to-server VM migration (in which VMWare copies the active RAM to a file on a second server), which will allow faster re-allocation of resources. This will improve VM host uptime, since VMmotion guest migration is gated on this memory-state data transfer over the network. 3. iSCSI storage, with 10GigE interfaces, will be a powerful, flexible alternative to Fibre Channel storage. We have a great blog on this -please see link: http://blog.dnfcorp.com/ @glihtco – the real question is whether the difference is perceivable in the real world. As Greg Shields says in the quote above, “…it’s fantastically difficult to translate raw performance specifications into real-world user experience…” Isn’t it so that 1G ethernet with iSCSI, with overhead of SCSI in TCP is slower that 1Gb FC ? Compared to 2Gb? 4Gb? – and price of 10Gb Eth vs 8Gb FC (Switches & HBA/NIC ) is now in favour of FC? iSCSI HW initiator 10Gb NIC is quite expensive…..
OPCFW_CODE
Bacula: Unbeatable in HPC and Super Computing Environments. HPC centers are broadly working to modernize their IT infrastructure, embrace the need to correctly back up their data, and meet the quickly arriving needs of tomorrow. Their IT centers face an ongoing challenge to adapt and improve their IT operations to remain flexible and offer the latest performance capabilities. New and different approaches to security, efficiency and performance are needed to achieve these improvements. Bacula offers especially high levels of security and performance in HPC environments when compared to other vendors. “Of those evaluated, Bacula Enterprise was the only product that worked with HPSS out-of-the-box without vendor development, provided multi-user access, had encryption compliant with Federal Information Processing Standards, did not have a capacity-based licensing model, and was available within budget” NASA In accordance with its Open Source pedigree, Bacula Enterprise perfectly supports Posix compliant filesystems, helping you to avoid vendor lock-in. In addition, every filesystem that can be mounted to a Linux or Windows host can be used, including parallel and clustered filesystems such as Lustre or Quobyte. POSIX file systems are the most common storage system in use today, providing a wide range of IO functions for applications to use, including byte-level access. However, with the large number of IO functions comes complexity, both for the application and the file system. Bacula helps HPC users to significantly reduce complexity by being file system-agnostic. Here is a non-exhaustive list of filesystems that Bacula Systems customers use: *requires a Bacula Enterprise module (plugin) As a proven HPC backup and recovery solution of especially high performance, Bacula can handle vast volumes of data with ease. With the increasing need for HPC solutions and further improvements in technology, organizations are turning their attention to areas such as hybrid HPC solutions. Bacula anticipates that technology and innovation improvements in HPC space will increase, especially in specific areas such as Hybrid Cloud, edge compute, container technologies and security approaches. As IT teams work hard to create a balance between on-premises HPC solutions and cloud, Bacula provides a way to protect and recover these entire environments from a single platform. Read the Bacula whitepaper that covers Backup and Recovery considerations for this sector: Top 10 Whitepaper Highlights - IT environment complexity in the research sector - Technical & demanding IT environments - Meeting RPO’s and RTO’s - The need for especially high levels of security - Bare metal recovery - The need to de-risk implementation - Hybrid cloud in the research sector - Stand-alone capabilities and “air-gapping” - Container technologies in the research sector - How NASA benefits from Bacula - Avoiding vendor lock-in
OPCFW_CODE
The biggest topic around the <video> tag is of course the question of baseline codec: which codec can and should become the required codec for anyone implementing <video> tag support. Fortunately, this discussion was held during the panel just ahead of ours. Thus, our panel was able to focus on the achievements of the HTML5 video tag and implementations of it, as well as the challenges still ahead. Unfortunately, the panel was cut short at the conference to only 30 min, so we ended up doing mostly demos of HTML5 video working in different browsers and doing cool things such as working with SVG. The challenges that we identified and that are still ahead to solve are: - annotation support: closed captions, subtitles, time-aligned metadata, and their DOM exposure - track selection: how to select between alternate audio tracks, alternate annotation tracks, based on e.g. language, or accessibility requirements; what would the content negotiation protocol look like - how to support live streaming - how to support in-browser a/v capture - how to support live video communication (skype-style) - how to support video playlists - how to support basic video editing functionality - what would a decent media server for html5 video look like; what capabilities would it have Here are the slides we made for the working group. Download PDF: Open Video Conference: HML5 and video Panel 10 thoughts on “Open Video Conference Working Group: HTML5 and” Oh irony, the slides require proprietary Flash to be viewed… Hub, you’re totally right. I have added a PDF for download. Unfortunately there is no slideshare with a non-flash solution at this point afaik. “This meant we had three browser vendors and their tag developers present” Who was the third browser vendor and video tag developer? I see Opera and Apple mentioned. Jason, Firefox of course. 🙂 S9 (S6 under the skin) is awesome: or the old S5 if you prefer: See here for more info: Technically there were no Firefox video element developers present – we were there in spirit and supporting though 🙂 Chris Double was supposed to be there, but couldn’t make it in the end because he had to make sure the <video> tag support in Firefox 3.5 worked. Congratulations to Chris on this awesome achievement!!! Since he wasn’t available, we had a combination of Xiph and Annodex developers that know roughly how Ogg Theora support works in Firefox. I counted that as covering the Firefox side of things together with the moral support that we received in spirit. 🙂 I’m delighted to see such a fantastic, comprehensive list! Hopefully there’ll be DOM interfaces for all these features some day; that would be superb! While the code itself is somewhat above my head, the discussion of this specification recalls my recent videoblog/blogpost in which I propose… A Rubric for Open Source Cinema (beta) 1. Identification of Objects in the Frame 2. Universal Editing Timeline Metadata 3. Timecoded Text Transcription I found your blog because rektide actually linked me here once he read/watched the full post: There’s obviously some criteria missing — these were the ones I came up with in my initial reaction to the http://www.opensourcecinema.org project after watching the film “RiP: a remix manifesto” Would be thrilled to have your insights on the ideas I am discussing. Up until now my perspective is admittedly coming from a more artistic angle. Comments are closed.
OPCFW_CODE
DOC: updates readme for scipy conference Some of these are a bit old. I'd like to at least get pandas, dask, and NumPy on their latest versions. I'll run through the materials now with the latest. Do we have a preference for jupyterlab vs. notebook? No worries! If we're modifying the dependencies, it might be nice to trim them down a bit, and make them less strict (e.g. numpy >= 1.9.0) where we can. I haven't used Jupyterlab yet, but it looks fun and I'd be down to give it a whirl The main issue I've had with J-lab is that it doesn't provide jquery like the notebook, so the formatted cells with the exercise headers will have to be modified. I have a slight preference for creating a new environment and pinning exact dependencies, just to minimize cross-version incompatibilities. @jorisvandenbossche do you have thoughts? On Sun, Jun 17, 2018 at 10:00 PM, Dillon Niederhut<EMAIL_ADDRESS> wrote: No worries! If we're modifying the dependencies, it might be nice to trim them down a bit, and make them less strict (e.g. numpy >= 1.9.0) where we can. I haven't used Jupyterlab yet, but it looks fun and I'd be down to give it a whirl — You are receiving this because you commented. Reply to this email directly, view it on GitHub https://github.com/deniederhut/Pandas-Tutorial-SciPyConf-2018/pull/1#issuecomment-397930475, or mute the thread https://github.com/notifications/unsubscribe-auth/ABQHIvTaT5rkHgZG1OwsKASqHB6plfutks5t9xfWgaJpZM4Uqjqz . The main issue I've had with J-lab is that it doesn't provide jquery like the notebook, so the formatted cells with the exercise headers will have to be modified. For that reason I am for now still using the notebook (and not jupyterlab) for tutorials. There are also some other packages that do not yet fully work in jupyterlab (like matplotlib interactive notebook) I have a slight preference for creating a new environment and pinning exact dependencies No strong preference here, having minimum versions (even when it is the latest) is fine for me as well. But if we go for new environment, I think we should more strongly recommend conda. Of course, I haven't found time to actually update things, so happy to merge whatever is required for the organizers :) On Mon, Jun 18, 2018 at 6:42 AM, Joris Van den Bossche < <EMAIL_ADDRESS>wrote: The main issue I've had with J-lab is that it doesn't provide jquery like the notebook, so the formatted cells with the exercise headers will have to be modified. For that reason I am for now still using the notebook (and not jupyterlab) for tutorials. There are also some other packages that do not yet fully work in jupyterlab (like matplotlib interactive notebook) I have a slight preference for creating a new environment and pinning exact dependencies No strong preference here, having minimum versions (even when it is the latest) is fine for me as well. But if we go for new environment, I think we should more strongly recommend conda. — You are receiving this because you commented. Reply to this email directly, view it on GitHub https://github.com/deniederhut/Pandas-Tutorial-SciPyConf-2018/pull/1#issuecomment-398026916, or mute the thread https://github.com/notifications/unsubscribe-auth/ABQHIqBTxZAcDnUzjj98I_z_P_s-XYneks5t95I4gaJpZM4Uqjqz . Haha okay, we'll merge this now, but I would keep in mind that the odds anyone does their installation for the tutorial three weeks in advance is essentially 0%. There is still time to tinker with the instructions and dependencies.
GITHUB_ARCHIVE
How to restrict API endpoint access to certain clients? I'm building an API using the Django Rest Framework. I've looked at a whole bunch of documentation, however I can't seem to answer this: How can I restrict my API such that only my iOS client can register users / log them in? I understand that I can use OAuth2 or Token Authentication for additional endpoints. But for unauthenticated requests, is there any way of restricting them? There's no truly secure way to guarantee requests are coming from a specific device. Checking headers seems like the best way, as mentioned by @dukebody, but should be considered as a "good enough" solution for most users. I'd also question why you want to do this. APIs generally shouldn't be restricted to certain devices because it makes them less extensible. Moreover, REST/HTTP services should return the same result regardless of the client device; otherwise, you will cause headaches when dealing with caches and proxies between clients and your service. If you are trying to format content specifically for iOS, you'd be better off adding a specific parameter like ?format=ios without checking headers, then just make sure your iOS client uses that param. That would be more in the spirit of REST and make things easier to cache as well as test. The original idea was to ensure, requests from out of walled garden sources are ignored. However, the nature of REST and the service itself shouldn't restrict as such. You're right here. Thanks for the thoughtful reply. I also encounter this issue.I would like to provide some of my thought. My team would need to support some APIs with heavy operation and it would be open to unauthenticated users which is design by business logic. That's why we need to restrict api requests to our app clients. The API call is stateless and irrelative with caching and proxies. In the other hand, some malicious attack like CSRF, you should also provide some additional protection on you API to prevent request sending from untrusted way. There are several mechanism we considered. Using HTTP header This is untrusted and very easy to crack. Use one static random generated API Key Very common and easy-implementation way. Server generated one static random string as key and client must carry when sending request. If you have to support web, this would be leak by web console.But if you only support app client and restrict your API connection with HTTPs. This should be safe enough. Dynamic change API key with AES crypto algorithm To prevent MITM or static API key is leak, I proposed to use AES crypto algorithm and encrypt current timestamp. When server receive, decrypt and check whether the request is valid or not. You can also append some string as salt to make the mechanism harder to brute force attack. You can do as much effort to make it harder to crack, but it would never be absolutely 100% safe. Hackers can still reverse engineer your app to see how the encryption works. All you can do is making it harder. This is my propose and hope it could inspire you. If you have any other better solutions or find some bug in my proposal, please let me know. Restrict the views to the user agent of the iOS client, checking the headers. See https://stackoverflow.com/a/4617648/356729
STACK_EXCHANGE
Add fallback syntax highlighter using Pygments autodetection From<EMAIL_ADDRESS>on 2014-09-13T07:25:15Z Spyder supports syntax highlighting of some files through Pygments. However, each file type must be manually added to Spyder as seen here: https://bitbucket.org/spyder-ide/spyderlib/src/4954d59d388e09b136b324141d595a4104266a1c/spyderlib/widgets/sourcecode/syntaxhighlighters.py?at=default#cl-837 This means that Spyder does not make use of all of the syntax highlighters provided by Pygments (and there are a lot: http://pygments.org/docs/lexers/ ). Adding support for any additional language requires changes to Spyder's source. This also means that, if I create a Pygments plugin ( http://pygments.org/docs/plugins/ ), there is no way to make Spyder use it without manually editing Spyder's code. Instead, I would like to request that Spyder have a fallback highlighter. If no other highlighters can be used, maybe Spyder should use Pygments' autodetection functions ( http://pygments.org/docs/api/; I think get_lexer_for_filename would be enough; guess_lexer would probably be too slow). This would add immediate support for all file types supported by Pygments, including lexer plugins. To be completely useful, I would also suggest an option to disable syntax highlighting. That way, if the autodetection guesses the wrong file type, highlighting can be disabled on that file, and then you're no worse off than the current Spyder, which would not have highlighted the file at all. The ability to select a syntax highlighting scheme would be even better, but not strictly necessary. And finally, a GUI for making a simple lexer plugin through Spyder would round out the features, but that's much farther in the future. FYI, my current setup: Spyder Version: 2.3.0 Python Version: 3.4.1 Qt Version : 4.8.5, PyQt4 (API v2) 4.10.4 on Linux pyflakes >=0.6.0: None (NOK) pep8 >=0.6 : None (NOK) IPython >=0.13 : 2.2.0 (OK) pygments >=1.6 : 1.6 (OK) sphinx >=0.6.6 : None (NOK) psutil >=0.3 : None (NOK) rope >=0.9.2 : None (NOK) matplotlib >=1.0: None (NOK) sympy >=0.7.0 : None (NOK) pylint >=0.25 : None (NOK) Original issue: http://code.google.com/p/spyderlib/issues/detail?id=1966 From<EMAIL_ADDRESS>on 2014-09-13T05:26:16Z Oops; the Pygments API link should be http://pygments.org/docs/api/ From ccordoba12 on 2014-09-16T12:21:17Z This is a very well thought proposal and something I'd like to see in a future version. Do I smell a pull request coming from you? Otherwise, you'll have to wait at least until our 2.5 version because plans for 2.4 are almost settled and well under way :) Status: HelpNeeded Labels: Cat-Editor From<EMAIL_ADDRESS>on 2014-09-16T17:37:25Z I would love to, but it's not going to come quick. A few weeks away, at least. If anyone else wants to tackle it first, I would be grateful :). From ccordoba12 on 2015-01-24T14:06:09Z This issue was fixed by revision 4dc76574a0d1 Status: Fixed Labels: MS-v2.4 From ccordoba12 on 2014-09-17T03:42:50Z Don't worry, a few weeks is just fine :) Take a look at our Roadmap to see the release dates for future versions: https://bitbucket.org/spyder-ide/spyderlib/wiki/Roadmap
GITHUB_ARCHIVE
Friday 27th September 2019 Featuring the stars of Monty Python's Flying Circus and The Goodies, and pre-dating both comedies, the landmark 1967 sketch shows Do Not Adjust Your Set and At Last The 1948 Show have been fully restored for new DVD releases. We have three bundles of both titles to give away. John Cleese will make his stage writing debut with Bang Bang!, a new adaptation of hit farce Monsieur Chasse by Georges Feydeau.Alex Wood, What's On Stage, 16th October 2019 To celebrate half a century of Monty Python's Flying Circus, Genome indulges in a little light quizzing - how well do you know your Pythons?Andrew Martin, BBC, 5th October 2019 A new archive of photos and documents has been uncovered which illustrates rarely seen moments from the making of Monty Python, as the ground-breaking comedy show celebrates its 50th anniversary.BBC, 5th October 2019 In this rare glimpse inside the BBC archives, we reveal the exasperated internal memos, the furious letters from wing commanders - and David Frost's bid to bring them down.Mark Lawson, The Guardian, 4th October 2019 You've all heard this one: four Yorkshiremen sit round a restaurant table and try to outdo each other with tales of how they had it tough when they were but lads. It's one of the most famous sketches to come from the Monty Python team, and has been restaged several times, including the album Monty Python Live at Drury Lane and the Amnesty International charity show and film The Secret Policeman's Ball. But in fact it's not a Python sketch at all. It first appeared on TV on At Last the 1948 Show.Gary Couzens, The Digital Fix, 16th September 2019 Monty Python at 50: The Self-Abasement Tapes is made up of excised sketches from the television show, presented for the first time by Python member Michael Palin. Television now is a lot swearier and shoutier than it was 50 years ago, but I bet it still wouldn't start a Python tribute with the sketch that opened this one: a report from the annual conference of the Fat Ignorant Bastards Party of the USA, whose leader has just become president. "The cult is certainly booming," Eric Idle said in classic old-style Panorama manner. There followed a court sketch and a school sketch, both subjects dear to Python hearts, as well as the fine country parody song I'm So Worried, exquisitely performed by Terry Jones, with worries that ranged from the Middle East to Heathrow's baggage delivery system and the state of current TV. Palin's linking device, as if he were excavating the material from sewers beneath the Edgware Road while being ironic about that road, its shops and owners, was apt and ingenious.Gillian Reynolds, The Sunday Times, 8th September 2019
OPCFW_CODE
Markov Chain with two components I am trying to understand a question with the following Markov Chain: As can be seen, the chain consists of two components. If I start at state 1, I understand that the steady-state probability of being in state 3 for example is zero, because all states 1,2,3,4 are transient. But what I do not understand is that is it possible to consider the second component as a separate Markov chain? And would it be correct to say that the limiting probabilities of the second chain considered separately exist? For example, if I start at state 5, then can we say that the steady-state probabilities of any of the states in the right Markov chain exist and are positive? Yes, you can. Actually this Markov chain is reducible, with two communicating classes (as you have correctly observed), $C_1=\{1,2,3,4\}$, which is not closed and therefore any stationary distribution assigns zero probability to it and $C_2=\{5,6,7\}$ which is closed. As stated for example in this answer, Every stationary distribution of a Markov chain is concentrated on the closed communicating classes. In general the following holds Theorem: Every Markov Chain with a finite state space has a unique stationary distribution unless the chain has two or more closed communicating classes. Note: If there are two or more communicating classes but only one closed then the stationary distribution is unique and concentrated only on the closed class. So, here you can treat the second class as a separate chain but you do not need to. No matter where you start you can calculate the steady-state probabilities and they will be concentrated on the class $C_2$. Yeah, I see that little block in bottom-right corner of transition matrix :) $$\begin{pmatrix} 0 &\frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 & 0\ \frac{1}{2}& 0 & 0 & \frac{1}{2} & 0 & 0 & 0\ \frac{1}{3} &\frac{1}{3} & \frac{1}{3} & 0 & 0 & 0 & 0\ 0 &\frac{1}{2}& 0 & 0 & \frac{1}{2} & 0 & 0\ 0 & 0 & 0 & 0 & 0 & 1 & 0\ 0 & 0 & 0 & 0 & \frac{1}{2} & 0 & \frac{1}{2}\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix}$$ @Stef Thanks so much for the answer. One more thing that I am confused about. I read that a Markov Chain can only be considered to have a limiting distribution (and are ergodic) if "all" of its states are irreducible and non-transient, so when we obtain the steady-state probabilities by solving the simultaneous equations of the system as a whole, aren't they a limiting distribution? Or is there a difference between having a limiting distribution and being ergodic? Can we have a limiting distribution without the chain being ergodic? When studying long-run behaviors we focus only on the recurrent classes. Limiting probabilities and stationary are different (but limiting are a subset of stationary). This is for the reason that for example the sequence ${0,1,0,1,0,1,\ldots}$ does not have a limit, but spends $1/2$ of time in state $0$ and $1/2$ of the time in $1$. So, to return to your question, the class $C_2$ is ergodic etc. but not the chain as a whole. Depending on the definition you can say that the limiting distribution is $0$ for states in $C_1$ but is better to say that you focus on $C_2$.
STACK_EXCHANGE
In expert systems, a form of problem solving in which a program tries alternative solutions in an attempt to find the answer. The various alternatives can be viewed as branches on a tree: Backtracking is the program’s ability to follow one branch and, if it reached the end without finding what it seeks, to back up and try another branch. A portion (called sector) of a disk that cannot be used because of bad media. During disk formatting, the operating system identifies any bad sectors on the disk and marks them so they will not be used in future. If a sector that already contains data becomes damaged, you will need special software to recover the data. Almost all hard disks have sectors that are damaged during the manufacturing process, but these are usually replaced with spare sectors at the factory, a process that is known as low level formatting. By the time the disk is shipped, it should be free of bad sectors. However, due to aging and depending on media quality, new bad sectors may form as time goes by. An electronic circuit that passes signals that are within a certain frequency range (band) but blocks or attenuates signals both above and below the band. Refer to the maximum amount of data that can be transmitted within a channel in a given time. Depending on the transfer and I/O type, bandwidth is usually expressed in terms of bit (byte) per second or Hertz. Commonly, a reference to the speed at which a modem can transmit data. Often incorrectly assumed to indicate the number of bits per second (bps) transmitted, baud rate actually measures the number of events, or signal changes, that occur in 1 second. Because one event can actually encode more than 1 bit in high-speed digital communications, baud rate and bits per second are not always synonymous, and the later is the more accurate term to apply to modems. For example, a so-called 9600-baud modem that encodes 4 bits per event actually operates at 24—baud but transmits 9600 bits per second (2400 events times 4 bits per event) and thus should be called a 9600-bps modem. Basic input output system. In most case, it is just a built-in software that stored in a ROM (Read only Memory) or EPROM (Electrical Programmable ROM) chip. Different devices or PCs will have different BIOS, but their purposes are similar, that is to provide some low level interface instructions that allow input and output devices to communicate with the main systems or PCs. In reference to video, a verb meaning to not show or not display an image on part or the entire screen. With computers, a term sometimes used to describe the character entered by pressing the spacebar. A search of data in memory or on a storage device with no foreknowledge as to the data’s order or location. Having to do with logical (true, false) values. Many languages directly support a Boolean data type, with predefined values for true and false; others use integer data types to implement Boolean values, usually (although not always) with 0 equaling false and “not 0” equaling true. A temporary or extra storage space (RAM or hard disk space) to keep data standby and ready for use to reduce waiting time. 1 byte is equal to 8 Bits, binaries that used to represent a text character, an image pixel or any data. KB (kilobyte) = 1024 byte 1 MB (megabyte) = 1024 KB 1 GB (gigabyte) = 1024 MB 1 TB (terabyte) = 1024 GB
OPCFW_CODE
- External reference: - External reference: - External reference: - External reference: A hashtag meant to start a discussion on what we actually want to get done and how estimating are actually helping. There are two aspects of estimations that appear to be discussed here: - whether estimation help or not predicting the time the work will take. Some argue that just counting user stories is enough and that the time spent in estimation stuff is mostly waste, - provided that estimation actually bring prediction value, whether or not people tend to forget that an estimation should never be a target (estimer != s’engager). In that case, it might be better to avoid a supposedly useful technique due to the abuses that are correlated with it. Estimation should not be a final objective. If your team is doing estimations and it believes it helps it deliver value, then it is ok for the NoEstimate state of mind. If on the other hand, estimations become targets (see estimer != s’engager), then you are likely subject to measure-objective inversion and should revise your method. Estimation should follow a scientific method, using bayesian thinking. - estimation are hypotheses, - you do some works - you measure the time spent, - you learn whether the hypothesis was correct or not, Estimation is a system 1 process, it needs some experiences. One bad habit around estimates that needs to be fought against is estimation negotiation (see scope creep). It is also sometimes used (even unconsciously) to “shame” people and incentivize them to work extra time to meet their estimations. Some people argue that those are bad estimate, not actually estimation. Somehow, this is a straw man fallacy1 and those actually already have the NoEstimate state of mind, because it is not at all about not doing estimation, but rather making sure that people understood that it is a instrumental objective and in that sense should be challenged with regard to whether it helps fulfilling the final objective or not. No estimation is not a method or a set of rules and principles. It is more a way to have people put estimation into question and start a discussion toward a more ethical way of working. It is in reaction to bad management, supposedly like any other common method like agility or lean. The idea is not to say that people should stop doing estimations, but that for some guys, it did not prove itself and then they just stopped doing it and they did good afterwards. It is a way to tell this and start a discussion about that. Estimation have become a deontological value, considered good by definition. NoEstimate aims to suggest a change in the state of mind of considering estimations good by default and rather suppose that the most sensible hypothesis is rather that estimation are wrong and consider that using them by default is like reversing the burden of proof. They warmly suggest to work with the data, to make some statistical analysis and try to find out whether estimation help predicting or not2. Actually, Vasco Duarte supposedly found out after playing with data that user story number that a better work predictor than estimation and story points where in his situation. At several occasion, he indicates that #NoEstimate is based on the Build, Measure Learn supposedly from lean. It is also the realisation that we keep trying to estimate, while there is little evidence showing that it provides value. The author of the hashtag himself acknowledges that the title is a click bait and explains that it is often the case that people don’t answer to more nuanced discussions. I just need X yeah and can you give me a rough estimate for when that would be available okay I can tell you when we can start working on it and I can tell you how long similar functionality has taken in the past I don’t need to get five developers in the room do or argue with each other before I give you a number so look at the data and give it the number to you To me, vasco Duarto tells that when being asked “how long will this task take?”, he can answer how long a similar task did take in the past, but putting five developers in the room does not help much answering this question. if we remember it it’s important and if we don’t remember it it’s not important you know what backlogs are they are a mental disease that prevents us from forgetting bad ideas I partially disagree with this one3. People that work together long enough end up slicing stories of the same size, being a possible reason why the number of stories becomes a good estimator for work done4. (https://youtu.be/c1gXaAO0JRY) Notes linking here - #ModernAgileShow 25 | Interview with Vasco Duarte, #NoEstimates - YouTube - #NoEstimates interview with the Runtastic app team - agility with Allen Holub - counter intuitive names - estimation priests - estimer vs chercher un prédicteur - micro promesses vs macro promesses - misconceptions about scrum - natural software development using #NoEstimates and variable length sprints - rhétorique de la contemplation - Scrum Guide 2020 - #NoEstimate - Scrum Master Toolbox Podcast: Agile storytelling from the trenches: BONUS: The Agile Wire hosts interview Vasco Duarte on #NoEstimates - Se passer des estimations avec le #NoEstimates - step by step journey to #NoEstimates - tragedy of estimations in softwares - Vasco Duarte Most likely caused by the click bait name #NoEstimate that actually is about something much more nuanced↩︎ the data tells me more about the future than the estimates tell me about the future — a guy at runtastic (https://scrummastertoolbox.libsyn.com/bonus-noestimates-interview-with-the-runtastic-app-team) To me, this one is more about the product owner not being appropriately engaged and trying to remember all the things. If per reflected, per would naturally remove obsolete stuff. Also, using some kind of maybe list would help keep track of what is current and what is just some idea for later time. Because if you don’t capture it in a trusted system, chances are you will remember it anyway, even though it is not important. ↩︎ Saving time on estimations If the Development Team delivers 6-10 small stories during a Sprint, it is very likely that those are approximately equal in size. This means that over time the Development Team will not have to estimate each story individually, just calculate the number of them.
OPCFW_CODE
Vera - Fibaro FGRM222 blind control module not working Hello I have a Fibaro FGRM222 blind control module connected to my curtains. In the HA-Bridge in the list of Vera devices in the Category column, it does state that this Fibaro FGRM222 device is "Window covering". I pressed the Generate Bridge device button and added it to the Bridge. However the curtains do not open or close. Below are the codes generated by the HA-Bridge; On: http://<IP_ADDRESS>:3480/data_request?id=action&output_format=json&serviceId=urn:upnp-org:serviceId:SwitchPower1&action=SetTarget&newTargetValue=1&DeviceNum=75 Off: http://<IP_ADDRESS>:3480/data_request?id=action&output_format=json&serviceId=urn:upnp-org:serviceId:SwitchPower1&action=SetTarget&newTargetValue=0&DeviceNum=75 If I run these I can hear the Fibaro blind control module click. But the curtains do not actually open or close. So I think the code is not quite right ? I have also raised a support ticket with Vera to see what they say. I am using a Vera Edge UI7 with beta firmware 1.7.2348. Thanks Actually I think this is a bug in Vera. If I make sure the curtains are open first. However once the curtains have closed if I run this command to open them: http://<IP_ADDRESS>:3480/data_request?id=action&output_format=json&serviceId=urn:upnp-org:serviceId:SwitchPower1&action=SetTarget&newTargetValue=1&DeviceNum=75 Nothing happens I do not hear the relay click and the curtains do no open. I also tried this command instead: http://<IP_ADDRESS>:3480/data_request?id=action&output_format=json&serviceId=urn:upnp-org:serviceId:SwitchPower1&action=SetTarget&newLoadLevelTarget=100&DeviceNum=75 When using that command I do hear the relay click, but still the curtains do not open. I have also had problems in the past with the Fibaro blind control module and native Vera scenes, for example in the advanced editor of a scene you can set your action to be "togglestate" this does not work either and behaves in a similar way to what I am seeing with HA-Bridge. Where it will close the curtains but not toggle them properly and does not open the curtains. I did report it as a bug to Vera but they never fixed it. This may not be a vera bug. The ha-bridge is simplistic in it's url generation for vera. It only has switches and scenes. There may be a better url luup request for your blinds by it's type. Take a look here to see if there is a better one http://wiki.micasaverde.com/index.php/Main_Page. Also, read the micasaverde forums, http://forum.micasaverde.com/index.php/topic,31920.0.html, for the ha-bridge as someone may already have it. OK thanks will do. I will also wait and see what Vera support says about the correct URLs. But I know for 100% that there are some bugs in Vera for this Fibaro blind control module specifically their "togglestate" command does not work properly. I was using it to assign a single button on to an Aeon Minimote remote control to open and close the curtains it use to work in Vera UI5 but they broke it in UI7 and never fixed it. Yes, understand. If you could always take a look at the luup code for the fibaro and debug it for them... lol These command URLs work! Open: http://<IP_ADDRESS>:3480/data_request?id=action&output_format=json&DeviceNum=75&serviceId=urn:upnp-org:serviceId:Dimming1&action=SetLoadLevelTarget&newLoadlevelTarget=100 Close: http://<IP_ADDRESS>:3480/data_request?id=action&output_format=json&DeviceNum=75&serviceId=urn:upnp-org:serviceId:Dimming1&action=SetLoadLevelTarget&newLoadlevelTarget=0 Although I am not sure about "dimming" yet to say open the curtains to 50%. I have updated the HA-Bridge device with these different commands. With Alexa if I say "Turn On Curtains" they open and "Turn Off Curtains" they close. However it would be much better if I could just say "Open Curtains" or "Close Curtains" ?? Thanks "Dimming" works, your HA-Bridge automatically populated the Dim URL field with this code: http://<IP_ADDRESS>:3480/data_request?id=action&output_format=json&DeviceNum=75&serviceId=urn:upnp-org:serviceId:Dimming1&action=SetLoadLevelTarget&newLoadlevelTarget=${intensity.percent} If I say Alexa "Dim Curtains 50 percent" the curtains go to 50% opened. Again would be better to say Alexa "Open curtains 50%" Thanks Glad its working!
GITHUB_ARCHIVE
How to Graph Short-Run Average Cost? I've learnt to roughly draw graphs of various functions like isoquants of Cobb Douglas function, i.e., $k=√q/L$. Here first derivative is negative so it's downward sloping and second derivative is positive so convex to origin. Now if Short-Run cost function is $C = (w/k)q^2 + (rk)$ then average cost is $AVC= (w/k)q +(rk)/q$. First derivative is $(w/k)-(rk)/q^2$ but how do I know if it's positive or negative? Hint: solve for $q$ in $(w/k)-(rk)/q^2>0$ to get the values of $q$ for which $AVC$ is upward sloping, and solve for $q$ in $(w/k)-(rk)/q^2<0$ for values of $q$ that correspond to the downward sloping part of the $AVC$ curve. Okay so slope changes at $ q = √(r/w)*K $ . Now the original graph shows it's oblique asymptotic. How do I find that? $\newcommand{\fone}{\color{red}{f_1(q)}}$ $\newcommand{\ftwo}{\color{blue}{f_2(q)}}$ For the sake of simplicity, call $$ f(q) = \frac{w}{k}q + \frac{rk}{q} = rk\left(\underbrace{\frac{1}{q}}_{\fone} + \underbrace{\frac{w}{rk^2}q}_{\ftwo} \right) = rk (\fone + \ftwo) $$ where I have factored $rk$ out of the expression. Now you want to understand each term separately: $\fone = 1/q$ This term is drops as $q$ increases, and diverges when $q$ is small. $\ftwo = \alpha q$, with $\alpha = w/rk^2$ This is a linear term with slope $\alpha$: it is small for small $q$ and large for large $q$. Combined In this particular case the function one of the terms grows while the other shrinks. So in extreme cases only one matter. The question is where is the point in which one becomes more relevant than the other. If you notice above I always use the expressions small and large, but these are relative words. You can actually find a value $q^*$ at which these two terms are equal, and this defines in which each term dominates. So, if $q < q^*$ this is what I mean by small $q$ and therefore $\fone$ dominates. If, on the other hand $q > q^*$ $f$ will be dominated by $\ftwo$. To find $q^*$ we make \begin{eqnarray} \fone &=& \ftwo \\ \frac{1}{q} &=& \alpha q \\ q^* &=& \alpha^{-1/2} \end{eqnarray} With this in mind, below there's a graph for $\alpha = 1$
STACK_EXCHANGE
package interval import ( "sort" "time" tu "github.com/grokify/simplego/time/timeutil" ) type XoxPoint struct { Time time.Time TimeMonthAgo time.Time TimeQuarterAgo time.Time TimeYearAgo time.Time Value int64 YOldValue int64 QOldValue int64 MOldValue int64 YNowValue int64 QNowValue int64 MNowValue int64 MYAgoValue int64 MQAgoValue int64 MMAgoValue int64 AggregateValue int64 YoY float64 QoQ float64 MoM float64 YoYAggregate float64 QoQAggregate float64 MoMAggregate float64 } type YoYQoQGrowth struct { DateMap map[string]XoxPoint YTD int64 QTD int64 } func NewYoYQoQGrowth(set DataSeriesSet) (YoYQoQGrowth, error) { yoy := YoYQoQGrowth{DateMap: map[string]XoxPoint{}} seriesNames := set.SeriesNamesSorted() for _, seriesName := range seriesNames { if seriesName == set.AllSeriesName { continue } outputDataSeries, err := set.GetDataSeries(seriesName, Output) if err != nil { return yoy, err } outputItems := outputDataSeries.ItemsSorted() aggregateDataSeries, err := set.GetDataSeries(seriesName, OutputAggregate) if err != nil { return yoy, err } aggregateItems := aggregateDataSeries.ItemsSorted() for j, item := range outputItems { aggregateItem := aggregateItems[j] point := XoxPoint{ Time: item.Time, Value: item.Value, AggregateValue: aggregateItem.Value, YoY: 0.0, QoQ: 0.0, } key := item.Time.Format(time.RFC3339) if existingPoint, ok := yoy.DateMap[key]; ok { trap := false if existingPoint.Value > 0 && point.Value > 0 { trap = false } point.Value += existingPoint.Value point.AggregateValue += existingPoint.AggregateValue yoy.DateMap[key] = point if trap { panic("GOT") } } else { yoy.DateMap[key] = point } } } for key, point := range yoy.DateMap { yearAgo := tu.PrevQuarters(point.Time, 4) yearKey := yearAgo.Format(time.RFC3339) quarterAgo := tu.PrevQuarter(point.Time) quarterKey := quarterAgo.Format(time.RFC3339) if yearPoint, ok := yoy.DateMap[yearKey]; ok { if yearPoint.Value > 0 { point.YoY = (float64(point.Value) - float64(yearPoint.Value)) / float64(yearPoint.Value) point.YoYAggregate = (float64(point.AggregateValue) - float64(yearPoint.AggregateValue)) / float64(yearPoint.AggregateValue) } } if quarterPoint, ok := yoy.DateMap[quarterKey]; ok { if quarterPoint.Value > 0 { point.QoQ = (float64(point.Value) - float64(quarterPoint.Value)) / float64(quarterPoint.Value) point.QoQAggregate = (float64(point.AggregateValue) - float64(quarterPoint.AggregateValue)) / float64(quarterPoint.AggregateValue) } } yoy.DateMap[key] = point } yoy = AddYtdAndQtd(yoy) return yoy, nil } func AddYtdAndQtd(yoy YoYQoQGrowth) YoYQoQGrowth { ytd := int64(0) qtd := int64(0) now := time.Now() qt := tu.QuarterStart(now) yr := tu.YearStart(now) for _, point := range yoy.DateMap { if tu.IsGreaterThan(point.Time, qt, true) { qtd += point.Value } if tu.IsGreaterThan(point.Time, yr, true) { ytd += point.Value } } yoy.YTD = ytd yoy.QTD = qtd return yoy } func (yoy *YoYQoQGrowth) ItemsSorted() []XoxPoint { keys := []string{} for key := range yoy.DateMap { keys = append(keys, key) } sort.Strings(keys) points := []XoxPoint{} for _, key := range keys { if point, ok := yoy.DateMap[key]; ok { points = append(points, point) } } return points }
STACK_EDU
Please contact your support team if you have a question or need assistance for any Rackspace products, services, or articles. I wanted to give a basic introduction to IP's and subnet's. An example of IP’s and subnets: Example of an IP - 192.168.10.245 This is how an IP address is formed in a binary perspective. There are 4 octets in an IP address totalling one number each. That number that is totalled is one of the number between the dots and it will equal between 0 - 255. An octet is 8 bits of either 1 or 0 that’s why it’s called an octet. An octet made up of 10011010 actually means the following: If you add the totals of all the 1 columns together you get: 128+16+8+2 = 154 So if this was the first octet of 4 in an IP address it would read: How subnet masks work Anything with a 1 on the subnet mask means the corresponding digit in the IP address is fixed and cant change. Anything with a 0 can be altered on the IP address to give you different IP's. Keep in mind every device usually needs a different IP address. Often subnets can be referred to as a / and then a number after an IP address. The first subnet example above could be shown as: which means the last octet is all 0’s in the subnet mask. This is a brief intro to help understand IP's and subnet masks so please ask if you have any questions. Users might probably have heard the term subnetting but don’t know what it is all about. It is separating a large system into smaller systems. The word means subnetwork, that means a large network separated into smaller networks named subnets. There is an IP address that is quantified to classify the subnet, and another one is utilized to recognize a broadcast address inside the subnet. For those users who don’t know how to subnet their networks, here we have a solution for you.1- As first step find the option of Advanced Subnet Calculator, open it and shift to the Classful Subnet Calculator tab. 2- Add the IP address of your network to break down into further nets. 3- You can also alter the number of hosts you want per subnet and tap the option of generate subnets. 4- At this stage you will be given the number of subnets that you picked.5- Below the option of Host Range, you will see the first and last valid host address. Here you will see the broadcast address for that precise subnet.6- Now beneath the subnet will see the network ID for that subnet. 7- In case you want to copy the addresses of the subnets then press Copy Subnets. In case of importing the addresses to an Excel or any other file, you can go to the File and tap the Export icon.
OPCFW_CODE
By now, it’s no secret how much API documentation matters to the overall API development process. Given that an API lacks a very visual interface, it’s the API’s docs that serve that purpose. They shouldn’t be treated as mere user manuals for the product they accompany. They’re a chance for the doc’s consumers to engage with the API and get a working idea of how it will behave. That said, the process of creating API documentation is rarely a snooze. Those who want to release excellent docs have to overcome particular challenges first. Here’s a list of the six most common challenges in API documentation—plus some tips on how you can face them head-on as a developer. 1. Realizing Just How Important API Docs Are This may seem like common knowledge, but the fact is that some developers still don’t give enough focus to API documentation. Perhaps it’s because they don’t see it as integral to the API development process in the same way that preliminary coding is. But all the hard work spent on API design will be for naught if the docs don’t wholly reflect what the API can do. So before anything, condition yourself to think of API docs as a priority—never an afterthought. This is the mindset that will drive the creation of great docs and, consequently, faster API adoption. 2. Adapting to New API Documentation Technologies Current API technologies allow you to do so much more with your documentation than making a simple PDF. But that also means you have a bit of a learning curve to adjust to. At first, you may encounter difficulty while integrating multiple web services and while handling the different programming languages used for designing APIs. Creating hosted API documentation and using a flexible, thorough documentation toolset may be the answer to this. For sure, doing these will make the learning curve a little less steep. 3. Being Precise, Yet Thorough about the Workings of the API Making your API’s documentation will be a constant balancing act on your part. On the one hand, you’ll want to be extensive in your coverage of the API. You’ll want to cover all the details, from the endpoint to endpoint. But on the other hand, you could turn off potential users of the docs if they get nothing but information overload. Addressing this challenge will take collaboration, feedback, and constant editing from the API’s team. You’ll need to do this in tandem with your fellow developers, as well as the product’s technical writers. Your combined efforts will lead to streamlined documentation—the type that future doc users will appreciate. 4. Establishing a Readable, Navigable Flow for the API Docs Another essential quality your docs need to have is good flow. They should be organized and easy for the doc users to navigate. But often, developers struggle to achieve this optimal flow for their docs. That’s why it’s important to section your docs in a way that’s intuitive to the users. It shouldn’t be hard for them to move from section to section, and to find what they want without reading from top to bottom. Partition the info according to API calls, requests, error messages, and the like. That should help your users in resolving any issues that come up when they’re using your API. 5. Keeping the API Docs Up to Date API design is demanding work. Developers always have to move quickly, and they can implement a lot of changes at any given time. But they should always take the time to put these changes into writing. Every critical update to the API should be easily trackable by the doc’s users. Otherwise, this may affect feature development on future versions, as well as clients’ trust in the API. The solution is to be very conscientious about the API’s updates. Make it second nature to chronicle them in the API docs. 6. Appealing to Would-Be Adopters of the API The last challenge to overcome is tailoring the docs to the target users of the API. Like any marketing tool, the API docs should be more than generic. There should be something in them that calls out to your dream API adopters. There are several ways that you can spruce up your docs for your intended users. You can include sample code from the API that outside developers can try out for themselves. You can link to a support forum that the API client’s IT specialists will find useful. What’s important is to acknowledge these doc users as part of your API’s journey. Master these six challenges in the documentation, and you’ll be regarded as an ace in your API’s development. Here’s to launching superb API documentation along with a topnotch API product.
OPCFW_CODE
// ECMAScript 5 strict mode "use strict"; assert2(cr, "cr namespace not created"); assert2(cr.behaviors, "cr.behaviors not created"); ///////////////////////////////////// // Behavior class cr.behaviors.boundtolayout_plus = function(runtime) { this.runtime = runtime; }; (function () { var behaviorProto = cr.behaviors.boundtolayout_plus.prototype; ///////////////////////////////////// // Behavior type class behaviorProto.Type = function(behavior, objtype) { this.behavior = behavior; this.objtype = objtype; this.runtime = behavior.runtime; }; var behtypeProto = behaviorProto.Type.prototype; behtypeProto.onCreate = function() { }; ///////////////////////////////////// // Behavior instance class behaviorProto.Instance = function(type, inst) { this.type = type; this.behavior = type.behavior; this.inst = inst; // associated object instance to modify this.runtime = type.runtime; this.mode = 0; this.insetTop = 0; this.insetLeft = 0; this.insetBottom = 0; this.insetRight = 0; }; var behinstProto = behaviorProto.Instance.prototype; behinstProto.onCreate = function() { this.mode = this.properties[0]; // 0 = origin, 1 = edge this.insetTop = this.properties[1]; // InsetTop this.insetLeft = this.properties[2]; // InsetLeft this.insetBottom = this.properties[3]; // InsetBottom this.insetRight = this.properties[4]; // InsetRight console.log("bht_bound:" + this.insetTop + "," + this.insetLeft + "," + this.insetBottom + "," + this.insetRight); }; behinstProto.tick = function () { }; behinstProto.tick2 = function () { this.inst.update_bbox(); var bbox = this.inst.bbox; var layout = this.inst.layer.layout; var changed = false; if (this.mode === 0) // origin { if (this.inst.x < 0 + this.insetLeft) { this.inst.x = 0 + this.insetLeft; changed = true; } if (this.inst.y < 0 + this.insetTop) { this.inst.y = 0 + this.insetTop; changed = true; } if (this.inst.x > layout.width - this.insetRight) { this.inst.x = layout.width - this.insetRight; changed = true; } if (this.inst.y > layout.height - this.insetBottom) { this.inst.y = layout.height - this.insetBottom; changed = true; } } // Bound by edge (bounding box) mode else { if (bbox.left < 0 + this.insetLeft) { this.inst.x -= bbox.left - this.insetLeft; changed = true; } if (bbox.top < 0 + this.insetTop) { this.inst.y -= bbox.top - this.insetTop; changed = true; } if (bbox.right > layout.width - this.insetRight) { this.inst.x -= (bbox.right - (layout.width - this.insetRight)); changed = true; } if (bbox.bottom > layout.height - this.insetBottom) { this.inst.y -= (bbox.bottom - (layout.height - this.insetBottom)); changed = true; } } if (changed) this.inst.set_bbox_changed(); }; }());
STACK_EDU
using System.Collections; using System.Collections.Generic; using UnityEngine; /// <summary> /// プレイヤーの座標情報,方向,アニメーションなどのデータを保持する /// ターゲットを壊せ等でゴーストを作るのに使う /// </summary> namespace Actor.Player { public class PlayerDataLogger : MonoBehaviour { // プレイヤーの座標ログ //public List<Vector3> PositionLog { private set; get; } // プレイヤーのアニメーションログ firstにアニメーション名 secondに呼び出しタイプ public List<TadaLib.Pair<string, eAnimType>> AnimLog { private set; get; } // プレイヤーのショットログ public List<eShotType> ShotLog { private set; get; } public void Start() { //PositionLog = new List<Vector3>(); AnimLog = new List<TadaLib.Pair<string, eAnimType>>(); ShotLog = new List<eShotType>(); } //// ログを追加する //public void AddLog(Vector3 pos) //{ // PositionLog.Add(pos); //} // ログを追加する public void AddLog(string animName, eAnimType type) { AnimLog.Add(new TadaLib.Pair<string, eAnimType>(animName, type)); } // ログを追加する public void AddLog(eShotType type) { ShotLog.Add(type); } // ログをリセットする public void Reset() { //PositionLog.Clear(); AnimLog.Clear(); ShotLog.Clear(); } } } // namespace Actor.Player
STACK_EDU
I've been working on these two pieces for awhile, and they've really come together exactly as I was planning. It's quite simple to create a very sophisticated management system for any type of page you want. Just a few clicks, and you can set up truly amazing content management systems The demo here takes you from a blank installation to a full featured blog with far more features than anything else out on the market for concrete5 now. Why pay $40 for something inferior or $20 for something that's 'almost' what you need and complex to set up and use? You could get a system that can be infinitely customized exactly to your needs for only $25! Watch the video now, or continue reading to find out more about this system! The whole thinking behind these two packages is to move a little bit away from in-context editing for everything. It's truly one of the best things about concrete5, the editing interface is amazing. But does it work for everything? Often you want to manage something that doesn't really make sense to do from the front end. Think of things like recipes, employee directories, events, testimonials, or anything else. Often what developers do is create a composer page type for these, and then when you edit them, you are directed to the front end to continue editing. If you want to edit again, the simplest way is to do it like every other page, from the front. You can open them in composer again from the site map or the dashboard page search, but it's not intuitive or documented anywhere. So what can you do? With the Dashboard Page Managers add-on, you can change the flow for creating and managing these simple page types. You don't have to pay a developer thousands of dollars to make you an application that adds, edits, and lists these custom pages. That's what many popular add-ons in the marketplace are based on. It's the work that a lot of shops do for their clients because they don't have the skills to. Often it takes several days to a few weeks to make the custom interfaces to meet people's business needs. That is a LOT of development and a LOT of money. You can avoid all of that! Now, you can just set up a page for composer, then go to your site map, add a couple of pages to the dashboard, and you are DONE. Seriously, there's nothing more to do. You end up with something that looks like this: For blogging, I think it's nearly perfect. But the beauty of it is that it's designed to allow you to manage any type of page that you can edit in composer. You don't need to know any code to create the management interface. Depending on your needs, you might need to do some work on the front end to display those pages the way you want, but that basically takes it into the realm of designers. I think that's going to be a huge thing for a lot of shops. And for a lot of individuals and small businesses who are making their own sites without assistance. It extends what you can do, while at the same time simplifying things. Which is what I think is the main goal in creating any system. Another thing that I find a bit bittersweet about concrete5 is that almost everything on the front end is controlled with blocks. That's not in and of itself a bad thing, but it's not ideal, either. But it is limiting. If you want to add a bit of content from flickr, you need to find one of seven different packages that might do what you want. Youtube? There are dozens. Medial players that do one system, ones that do dozens. The same with audio. Even if you find a block that's exactly right for your needs, you're probably going to have to adjust it to work with your system. And it won't fit in well with a composer type of work flow. Several of these blocks in the marketplace are not easily integrated into composer for one. For another, you can't really set composer up to handle every content situation. What if you want something like "content block" "youtube video" "another content block" "vimeo block" ? You could maybe set up a composer page with four blocks, but then when you decide that vimeo and youtube need to be switched on another page, you're kind of screwed. Is there a better way to do things? When I was working on Grease Rag, I found out that there was a system called Oembed that replaced URLs in Wordpress sites. I kind of learned that when I started doing the import. Suddenly there were all these pages with just a blank URL in them, and I had no idea what was going on. Eventually I figured it out, and had the basics of a system that allowed you to do the same thing with content blocks. It was another year at least before I had time to sit down and really, really get it ready to release to the marketplace. I didn't want to just have a custom template for stuff. I wanted it to be a very robust system, not just a little hack. Fully responsive, extensible for other developers, integrated light box, ability to extend it... All of that kind of 'candy' on top of the basic functionality. It seems like it's turned out quite well. When paired with the Dashboard Page Managers, you can start making pages that don't require front end editing at all. The whole process of adding a lot of blocks can be tedious, especially if you are first editing it in composer, then have to work from the front end once it has multiple blocks. So many more things are possible with it. With either or both of these packages, you can do some pretty amazing things that were always quite difficult in concrete5 before. They will save a ton of time and money for a lot of developers. So many sites can benefit from what I've built here. I hope that people enjoy them.
OPCFW_CODE
Maybe its because it is late on Friday, and I will admit never having seen the Stacy Matrix before (looks like its Cynefin which means we are immediately into culture wars and cliques) so maybe I’m missing something here, but the idea that you decide on method based on these criteria looks like non-sense to me. I usually tend towards Kanban for teams with lots of BAU - unplanned but urgent work - and Scrum style working for teams with more control over inflow, but then, I usually tend towards my own Xanpan method. I recall an airline team I worked with 10 years back were we adopted Kanban not because the problem space was complicated or because the requirements were unknown but because the airline had outsourced everything which moved and everyone blamed everyone else. Until we mapped out the workflow and ran it we couldn’t start to debug the processes. As to “not knowing what is needed” - almost everyone I meet believe they know what is needed and challenging that belief requires subtely. The choice between Scrum and Kanban has nothing to do with different levels of complexity. In short: Scrum optimizes throughput, while Kanban optimizes reaction time. You should choose Scrum, if the backlog items are mostly known before each sprint and sudden tasks with overwriting priority leading to the cancellation (or re-planning) of a sprint are a rare exception. In contrast you should choose Kanban, if new tasks often have to be worked on before the next sprint. This choice may even not be constant for a team: you can for example run Scrum during the midst of the development and switch to Kanban once the system is in beta tests with the customer. Similarly, even within the same product development, teams working on inner core parts may use Scrum while teams working on customer individual parts may use Kanban. (There should be a transparent and clear decision process on such mixed setups, though.) (Maybe this is what Allan wrote, just in different words). I hope no one here gets the idea of starting a culture war over Cynefin vs. Stacey Matrix. From my point of view, it does not make sense to classify Kanban and Scrum in this matrix as it was done in the illustrations. Why I referred to these graphics is because they obviously reflect the perception of many users, but also consultants, that Kanban is not suitable for complexity. But I also must admit that the supposed Kanban systems are mostly just a status board. Many teams (especially outside of software development) make the work with these boards transparent, but for it to become Kanban, I think they should at least have work-in-progress limits. Something that already seems too complicated and needless to many users. Oliver, could you elaborate a bit more on your opening statement? “Scrum optimizes throughput, while Kanban optimizes reaction time.” I agree that Kanban can have a shorter reaction time, especially for unforeseen events, than Scrum. What I don’t quite understand is why Kanban should be more optimized for this than for throughput, and even more so why it is less optimized for throughput than Scrum. My understanding is that Kanban has elements of throughput optimization like the WIP limit already built in. Of course, it is also possible to use them in Scrum, but it is not a mandatory element of Scrum. Is it really throughput that Scrum is optimized for? I would like to understand the reasoning behind it better. Fast reaction time requires changes and changes in turn increase effort (discussions, re-planning, unexpected dependencies, sometimes adapting your programming environment etc.) - what in manufacturing is called set up or changeover cost. This is the reason, why Scrum fixes the sprint content after planning. It makes it easier for team members to optimize their work and thus increases efficiency. The fixed sprint cadence in Scrum simplifies planning, i.e. when people or multiple teams depend on each others work. (Some Kanban teams do iterations as well, but this is then already a step towards Scrum.) You can also see it in the key metrics of the two processes (again at least according to the plain vanilla methods): velocity versus cycle time and lead time. The first is a metric for throughput, both latter for reaction time. This reflects where each puts its focus on. With all those technological advancements i.e. Cloud native and end2end responsibility within product teams, I wonder if the Scrum Guide shouldn’t promote very short product cycles more clearly. progress toward a Product Goal at least every calendar month Not sure why they left the monthly progress towards a Product Goal in the “The Sprint” chapter. If I add up your arguments on top of my hypothesis “Scrum is not needed anymore in a Cloud native world”, this would be an advancement for all those who want to stick with Scrum but want to leverage the benefits from Cloud native and other technological advancements which speed up change rate and deployment frequency. Too many organisations and teams are left with “standard implementations” with 14-day sprints. Even the Stacey Model is easier to understand than the Cynefin model it provokes too much misinterpretations. Also the Inventor / Author of the Stacey-Model is saying don’t use it anymore. I need to confess that I also suggest to try out Scrum and XP in the complex domain instead just using Kanban for several reasons. But I always also suggest to use it together - not one or the other. When things become more unpredictable, where more is unknown than known - I feel : just using Kanban isn’t enough. For emergent work I think these agile methods put something new to the table and that addresses better the complex domain. In software development for instance we don’t produce so much non-variable parts (Gleichteile, see a classical production line), but unique parts, that we try to re-use (like lego blocks). Looking for bottlenecks, optimizing for cycle-time, using WIP-Limits, etc. isn’t enough here. Finally we shouldn’t forget about trying to move from the complex to the complicated and finally to the obvious/clear/simple domain over time. A cloud native environment obsoletes several types of efforts, which are required for deploying to classical servers. A major difference may be, if your users are “only” humans or (how much) other software depends on you (e.g., are you implementing a B2C page or Amazons IAM service?). Depending on the type of software and customers, a release still can have significant constant effort. There may also be effort for the customer implied. It can also depend on whether you define a sprint by a potentially releasable increment or just an intermediate development version. In most larger development organizations, it is probably in-between: CI within a team, the result of a sprint can be reasonably used by other development teams, and releases towards customers (except patches) are a subset of that. (All this of course again, only where the focus is work efficiency, not reaction time - otherwise Kanban. In a mixed environment, the Kanban teams may not profit from fixed iterations themselves, but only do it for synchronizing with other teams.) A further question may also be not development related though, but how often you want to do cost/benefit discussions to decide priorities and the work content for the next weeks. Regarding the Scrum guide, my interpretation is, that they do not want to recommend any direction and instead leave it to your own responsibility. If I remember earlier versions right, the current phrasing became more relaxed compared to “2 or 4 weeks”. In the same paragraph, it actually says: “Shorter Sprints can be employed to generate more learning cycles and limit risk of cost and effort to a smaller time frame.”
OPCFW_CODE
Credential Registry Overview The Credential Registry is a cloud-based data store for linked open data resources published using CTDL JSON-LD. The Credential Registry holds detailed information on all types of credentials and skills in an easily-accessible format. Users can explore competencies, learning and employment outcomes, up-to-date market values, and career pathways and reference data on credential attainment and quality assurance at schools, professional associations, certification organizations, the military, and more. This data is a dependable and powerful source for systems, web and mobile applications, and other tools. All data in the Credential Registry is published by approved accounts, follows the Credential Registry’s Minimum Data Policy, and leverages the CTDL family of schemas. When credential information is published to the Credential Registry, the CTDL links each data point (e.g. aligning credentials and skills), making it possible to compare that credential’s data across all other credentials in the Registry. Together, the CTDL and the Credential Registry make credential and skill information accessible, discoverable, comparable, and actionable. The Credential Registry is intended to power a broad array of systems, applications, and tools. The scope of the CTDL schema extends beyond the Registry, but the Registry supports publishing the majority of CTDL terms. Its architecture also requires globally unique identifiers, known as CTIDs, with primary CTDL classes. These identifiers are important because they enable not just identifying resources, but also linking directly to them in the Registry. This is a broad overview of the Credential Registry, for detailed information, visit our technical site https://credreg.net CTDL – Our Common Language The Credential Transparency Description Language (CTDL) is what ensures that credential information is standardized and easy to understand. CTDL is based on linked data principles that connect them to the information learners and employers want to know most – such as outcomes, skills, and career pathways. Credential Information in Action Our Credential Finder demo application shows current credential data at work. It pulls information from the Credential Registry and allows users to find and compare credentials. Credential Registry Handbook Our Credential Registry Handbook contains technical information for publishing and consuming data, as well as our APIs. Publishing Data to the Credential Registry There are three publishing system tool options available to organizations: - Manual Entry – recommended for small quantities of data - CSV Bulk Upload – recommended when it’s not possible to directly utilize the Publishing Assistant API - Credential Registry Publishing Assistant API – for mapping and publishing structured data - Visit the APIs page (on this site) - Visit the Why Publish to the Registry page (on this site) Consuming Data from the Registry There are multiple options for integrating data from the Credential Registry with systems and software applications. These options include directly searching the Credential Registry, or downloading data for offline use. All data published to the Registry is available for consuming as CTDL using JSON-LD. Data downloaded for offline use can also be used in a document store or graph database, or mapped back to a traditional relational database structure, such as SQL. Get in Touch Our team of experts is ready to help you embark your credential transparency journey. Whether you have questions about our technologies, services, or don’t know how to get started, we’re here to assist.
OPCFW_CODE
Print Multiple Templates Automatically I'd like to be able to print, two separate labels at a single time. Currently I have a form that lets you pick database records, then print 'Label A'. Then you load a new form and select the record(s) to print and print 'Label B'. How do I create an action or process by which we select the records to print, then it automatically prints both Labels ? Are the two labels printed on the same printer and are they the same size?0 Different printer and different size. The data needs to be the same from the database. The user selects the record at time of print. I'd like that record to print on both these labels/paper. One is a sticky label, the other is an 8.5x11 to the office printer.0 Hi, I have recently done this myself for a customer using process builder, as a start point I followed the introduction to process builder on the below, the second section from about 24 minutes in covers printing 2 labels from input. That is a more problematic and will require a bit more configuration. It will also require the use of the Integration Builder application that needs either an Automation or Enterprise licence in order to run. If there is a field on the label that already prints the database field that is used to perform the database lookup/filter then skip point 1 and start as 2. You will also need to create a folder where a file will be created at print time and also save a blank text document into the folder. I used Notepad to do this and called the file "workit.txt", but you can use whatever name seems appropriate to you. - Add a text field that is populated by the database field that contains the lookup/filter value. Place/drag this field to the side of the template itself so that it is outside of the printed area. - In the Properties and Data Sources tab of this field use the Change Data Source Name button (to the right of the Name box) and give this field a suitable name (I called mine TheChoice as per the first screenshot below). Then exit this field. - As I often do for databased labels, I added a text input box onto the Data Entry Form and linked this input to the query prompt/filter set in the Database Configuration setup and also added a Preview of Template image too that will be populated with the live product data after the users have entered the lookup/filter value. You could add extra information/inputs on here too if you like such as a number box for the number of labels etc. - Back in the Template view, click on the File menu and the BarTender Document Options and then Actions tab. Tick the enable box and press the Document Actions button. - On the new screen click on the blue plus sign next to Data Entry Form Completed and then from the File option choose the Write to File action and configure it similar to as shown in the 2nd screenshot. - Click OK and Ok to come out of those fields and save the label, - Create your second label as required and link this to the database and in the filter section create a new database query prompt but call this the same name as you used in 2 above (3rd image). - Save and then close the label. - Open Integration Builder and start a new File Integration that looks for the folder you created at the start and the text you saved there too. All the other settings can stay the same other than changing the the After Detection option to Delete File - Use the blue plus arrow next to Actions to add a Database>Transform to Record set action. Link this action to the text file you created and amend the other settings as shown in the image below. Drag this action above the Print Document settings. I received a warning about field names at this point but just clicked Ok to continue. - Again from the Database action list add a For Each Database Record action. No configuration of this is required, other than dragging this above the Print Document option. - Either delete the Print Document option and add a new Print Document option from the blue plus next to the For Each Database Record and the Print options or else drag the existing Print Document onto and right a bit of the For Each Database Record to make this a daughter process (please not the indentation shown in the image below) - On the Print Document>Document screen adjust the settings to search the Computer/Network and browse to find your second label making sure to Import the Document settings which may take a few seconds to complete. - You can adjust the basic printing settings from the Print Options screen as required, but next select the Query Prompts tab. This should display "TheChoice" or whatever name you chose in 2. above. Click into the box to the right of this and use the Insert Variable option and choose the CurrentRecord option which maybe on the Actions option I believe. - That should now have completed the config, but I would suggest using the Test option from the top menu to make sure the system works before saving the integration file and then deploying it as live on your system. I hope this helps.0 @Martin - I hadn't thought about Process Builder for this but I guess that is workable too. I was working on a similar issue for a customer when this forum post was raised and so was killing two birds with one stone as it were. For my Customer's system the label printing is processed via Print Portal and not sure if you can run Process Builder routines via this route.0 Please sign in to leave a comment.
OPCFW_CODE
Thanks Bofferbrauer. I found out the Iron plates will be used in the metalworks to make a cauldron if i am not mistaken. The anvil needed in the metalworks was a starting tool from one of the dwarfs. I almost played a new full first year and this are my findings: - When you make an order to build something and you cancel it because it is the wrong thing, wrong type of stone or something the cancellation is not correct. It seems as the building is still on the stack/queue so the parts will still be delivered. If you build one thing and there are resources in place and then cancel it and build the correct thing the new resources arrive before removing the old one's. I got a deadlock with that, the new parts where delivered on the barrel rings see below. Because the spot was not free the new boulder (limestone in this case) could not be dropped and returns to the zone, then de dwarfs picks up the next boulders and repeats the process.. I had to cancel the second building and wait to get all resources returned to the zone before building the furniture again. After seen the problem, it worked: In this screenshot you can see the same problem with a annoying effect, I order a table with chairs and some pillars without selecting the stone type. Because it is in my shale area I like to have the furniture from dolostone. So I cancel and order it correct on the same spot. And now look to the backlog of assigned shale blocks.. They are assigned to the dolostone items.. So the dwarfs pick them up, bring them to the spot and then bring them back, after that is done they try to fetch the correct blocks.. - One more assigned bug, I made a new resource zone close to the mountain I started making rooms. But then I didn't want them because I wanted the boulders close to the masonry workshop. So I removed the resource zone. As you can see in the next image the boulder assigned to the removed zone will not get a new assignment, event after reboots and days of game time. A rock with gems mined later is handled correctly so I can go on :) - Chairs seems always before the dwarfs, I thing it is niceer to do it on the chair position, so the upper one is wrong the lower one is correct: - My tomato's where harvest in 2d day of winter, or not. Because it didn't work or they are lost, they are gone, I didn't get the tomato's or seeds.. To bad.. - And now some good news, I make the Metalworks work and have created some iron ingots :) This is it for now. I will start to play the second year and will see what that brings, with more dwarfs and so more work done :)
OPCFW_CODE
Apr 1st 2012 9:45PM @Brett Porter: I understood her to mean that she asked "When you find out, will you let us know?", and they gave an evasive answer - not about whether they already knew, but about whether they would let us know once they did find out. Here, they did find out about it, and they did let people know - so now she's asking "Why couldn't you have just said 'yes' when I asked before?". Or something like that. Mar 30th 2012 12:57PM Thank you! I was wondering if anyone would mention that; this is one of my minor pet peeves. Mar 28th 2012 12:39PM @Nopunin10did: There's no way your name is accurate in this case; that pun *has* to have been intentional. Mar 21st 2012 7:17PM @Stilhelm, if rather belatedly: It seems fairly plain to me that the discussion isn't about being competitive, it's about fair vs. unfair advantages. "Being more skilled" is a fair advantage. It's hard to dispute that, and I don't know offhand of anyone who would try. The argument being made is that "having better gear" is an unfair advantage, and as such, is detrimental to the playing experience of anyone who doesn't have that advantage. That isn't PvP-specific; it would hold true in PvE, as well. It's just that in PvE, people aren't competing as directly against one another, and it's possible for one person to succeed without requiring that another one fail. Mar 17th 2012 3:53PM I'm one of the people who's posted about that. The "solution" I've come up with is to look at the class-appropriate rewards from every quest using MogIt, to identify the model/color combinations which are available only from non-repeatable sources, such as quests. Then I dedicate most of my bank (it may end up being all of it, eventually) and all of my void storage to just those few items, and sell the rest. I've got something on the order of 140-150 slots taken up by such items at the moment, and I still have half of the Wrath quests, all of the post-85 quests, all of the BC dungeon and raid quests, and most of whatever vanilla raid quests may still exist left to go. Plus, before hitting on this approach, I sold off quite a few "less interesting" or "less likely to be visible" items (e.g. bracers)... so the true numbers may be higher even than that. I strongly suspect that I'll find that there simply is not enough space in one character's inventory, bank and Void Storage for even just these "effectively unique" items, even with a full set of max-size bags. Even if there is, it's not reasonable to expect people to dedicate their entire storage space to simply *keep their options open* for transmogrification. Some better solution for quest-only items is needed. I've been contemplating various options to try to find one which seems worth suggesting to Blizzard. So far, my best idea is a "trade-in NPC" who will let you select any equippable quest-reward item for any quest you've already completed, and will scan through your entire inventory and delete any other equippable items rewarded by that quest. Essentially, this would let you "change your mind" about which quest reward you wanted from any given quest, any time you wanted to go to the trouble. Mar 17th 2012 3:20PM @Kelly: Careful there. According to my understanding, automating things using outside-of-WoW macros like that counts as botting, and can get you banned. The limitation is "one hardware event per action", and a click counts as an action. If you trigger the macro by one physical click or keypress, and it generates more clicks or keypresses automatically without your intervention, that's against the rules. I'm not a definitive authority on this, of course, and it's always possible that there's been an official "yes, this is OK" clarification that I've missed. But last I knew, the stated rules were such that what you've described doing was a bannable offense. Mar 14th 2012 12:34PM This is somewhat belated, but I wanted to register my reaction anyway. When I first saw the headline for this article, my immediate reaction was "Hey, great; this is just what I've been looking for." The reason is that we know that some zones are going to change in Mists, and I want to make a point of completing them first, so that I get to experience the current version before the Mists revamp. I can guess at some of them (Dustwallow Marsh being the most obvious), but there are probably other likely changes which I wouldn't necessarily notice, and I'd like to see discussion of that and get advice on which other zones might be "going away" in their current form. The headline of this article describes it as a list of Horde-side zones to complete "before Mists". The only reason to specifically complete a zone before Mists is if you expect it to be on the list of zones that get changed enough in Mists that the current experience won't be available. I want to complete all such zones before the change, and then do them again afterwards (probably on a new alt), so I end up with both sides of the experience. You can probably imagine my disappointment, therefore, to find that the article is simply another list of "zones which are good enough that everyone should play through them", with no reference to likely Mist changes at all. It's certainly worth having an article for that, but it's not at all what the headline led me to expect. Feb 15th 2012 11:49AM But it does mean he's opened up the potential floodgates for *all* of the other three quadrants, including the caliginous and ashen - which goes well beyond the "companionship, friendship, or bromance" categories he referred to, and may have been more than he was intending. Though there are probably relationships in all four quadrants (and in none) which are interesting enough to be worth singling out for someone... Feb 10th 2012 6:05PM (I'd give you a thumbs-up for that, but those never seem to work for me for some reason. I've probably got some needed scripts blocked.) Actually, I think we can probably both agree that griefing is never acceptable; we probably just disagree on what constitutes griefing. By my definition, the defining element of griefing is "preventing other people from enjoying the game" - and, specifically, doing that intentionally. Corpse-camping almost always qualifies; attacking a low-level player (low enough to stand zero chance against you) who isn't engaging in hostile behavior does as well. The fact that it' s on a PvP server affects things somewhat, in that arguably people who enjoy the aggressive and potentially hostile environment of a PvP realm might actually still enjoy the game even when their available gameplay consists of being corpse-camped or high-vs.-low one-shotted. However, I think that possibility is sufficiently unlikely - even for people who are on a PvP realm because they want to PvP, instead of being there e.g. because their friends are there)- that it doesn't affect the conclusion. You're almost certainly right that a player-council system wouldn't really work well, though; I thought that had been argued out well enough in these comments that I couldn't really add anything constructive to it. I think it *could* in theory work, but in practice, the "idiots and assholes" of the population (and almost everyone is one or the other from time to time) would gum up the gears to the point where it would wind up being a problem instead of a solution. Feb 10th 2012 9:39AM Well, it wasn't exactly removed. The actual place still exists - building, dock, goblins and all. If I remember correctly, though, it's not labeled with a special "location" tag anymore - and according to my understanding, it's no longer used as the "initial spawn point for new characters" which originally gave it its name.
OPCFW_CODE
What's life without a little pepper and spice? Sometimes there are things that need to be said, and I have no problem saying them... There's a thought in neuroscience/psychological circles that words are much more than sounds that represent things: they are the abstraction of our higher brain function. Words are language, code is language. Restricting yourself to one or two languages is limiting your cognitive abilities Get up from your desk or chair or floor and go for a walk. Right now - I challenge you to do this. If you can walk through a crowded place - that's even better. Go by yourself, and soak it in... all of it... If you've ever sent a support email to Tekpub, you know I'm in the habit of asking questions. I think truly serving the Customer sometimes means asking questions, and sometimes even saying "No"... Some people have a code editor they use all their life. Others, like me, jump around a bit depending on the need. I thought I'd share with you what I've found out, as I get asked about this a lot. Once the script outline is set and you have a skeleton of the words and tone you want to use - it's time to bake the demos. Yes: Bake. It was exactly 1:32pm, HST, when the motor died. I stared at the throttle... hoping it was a joke. Land was 50 miles away, and the sea was building, and we were drifting. I thought: "This time dude... this time you really fucked up". The tech industry, like many, is rife with sexual discrimination and muted policies towards equality in the workplace. I used to let it ride. No longer - this is my story. One of the tomes we live by: "Global Variables are EVIL!!!!!!" - so we abstract our stuff into patterns and build up highly ceremonial and ornate bits of dramaware called "IoC Containers". For what? To use Global Variables - That's Why. I made the mistake of publicly commenting on someone's idea of a RESTful API. And already - I've probably lost you. I don't know any single term more explosive and zeal-inducing than REST and "what it means to be RESTful". Oh - you say "it's quite simple?" You say "what's so hard?" Pedanticize away my pedantic pedant... ... In which we reflect on my ego-mania and just how in the dark Enterprise Devs using OSS really are... Some interesting posts flying around about how ActiveRecord is rotting people's brains and how Rails is "pants on head retarded". I figured I might as well respond. I'm not opposed to swearing in presentations, or anywhere for that matter. I don't cringe when I read F-bombs nor do I care if you have the word "Fuck" embroidered on your Calvins. Swearing says more about your abilities as a speaker then it does your content... that's the problem. The startup world has an exaggerated sense of competition - almost as if each "player" is a puppy struggling for access to the funding teet. This serves the VC puppet masters just fine, but can ultimately destroy the very business you're trying to start. Go ahead and write this off as a Fanboy post - just read this one point: when I bought a Mac as my primary dev machine, my work life became a whole lot easier. I know Macs don't resonate with a lot of people - and that's fine. I find it to be a highly versatile bit of hardware. Magic Strings - they're bad right? What are these repulsive warts on good design? And why do they want to melt my code? The fear of strings drives otherwise talented and wise developers to do some extraordinarily ridiculous things... We've been running for a little over a year and a half and I'm happy to say that we're doing really well as far as startups and small business go. It's my goal to be as transparent and communicative as I can be - so if you're a Tekpubber (or are thinking about joining) - here's what's going on. TL;DR: I turned my Twitter account back on because as much as I like the silence and increase in my efficiency - Twitter helps me in a lot of ways I'm beginning to miss. Go ahead and crow. You were right. In honor of pushing our newest series Mastering C# 4 with Jon Skeet, Tekpub is opening its doors once again for 24 hours for all people to come and check out the groovy content we have. Having a lot of fun with this little tool - and more great comments are coming in. I've added some good stuff in the last few days - like Paging and streamed results. I've been having a lot of fun with Massive and people are really giving me a lot to think about - and change/improve! I'm about to push an update today that will break stuff but that's OK, it's still newish. I read Scott's post today about Interview questions and it made me chuckle a bit. They're great questions - no mistake about it - but you could almost (just barely) hear an audible set of mouse-clicks as managers around the world copy/pasted those questions into their "What To Ask Developers" Word Doc. I'm not sure the problem is the developer... In a previous post I showed some fun stuff with System.Dynamic and Data Access. I'm happy to say that I tweaked it, loved it, and pushed it to Github if you want to diddle with it. This post is a tad long and dives into Dynamics at the end - read it if you want a fun mental exercise. Otherwise the code is upfront. I don't normally use my blog to pimp Tekpub, but this is just too good. I've been holding my breath while we get this production together - and all the pieces recently fell into place. I'm giddy like a goose. I promised myself I'd never do this again: create an ORM-y/Data Tool for .NET. But I needed some utilities for some work I'm doing, and I extracted the databits because I can't help myself. I like to share - mom taught me right. You would think that someone would have tried this before - but I haven't seen anyone blog on it yet. I'm sure I'm not the only knuckle-dragging mouth-breather who eschews High Concept for Dumb-simple solutions when available. Today I think I might have broken my own record for ugly: I deployed a site using Git and Dropbox. And I love it. I read K. Scott Allen's post today on The Great Rewrite and how it's "wankery" - the wrong answer to solving a business problem. I see his point, but I disagree. I don't think you can plan this stuff. You can try - but it will end up late and uninspiring. I was incredibly skeptical when I heard about WebMatrix. I was dismissive and snotty about the WebMatrix data access story. I called the WebMatrix IDE a "MySpace Code Editor". I was wrong. They got it right, and I'm really impressed. Haakon Langaas Lageng asked me the other day "How do you make your videos?" His question was less technical, more procedural. I answered him and thought that I would share this with you. You might be thinking "why would I do such a thing?" - the answer is that a well-made screencast saves everybody time and is 10 times as effective as a book. Many have noticed that I've shut down my Twitter account. No, I wasn't suspended and no, I'm not having a mid-life crisis meltdown. I finally "got smarts" and did the math: Twitter costs me a lot more than I get from it. I've always been a major proponent of Open ID. I love the idea and the intention - it's a great solution to a long-standing problem and solves a lot of issues for developers. Unfortunately it creates a ton more for business owners. When I was at Microsoft I had an idea that I thought would help Open Source projects: prodding employees to ask for a percentage of their time (aka "commitments") to put towards an Open Source project. There were some issues to work out (mostly legal) - but I found an ally in DPE and it almost took off. Unfortunately I left (and so did he) and the idea died. But I think it's a good one - and it doesn't have to belong to Microsoft alone. One thing that you can count on if you read anything online - you will be insulted at least once or twice a day. Or hour. If you have a blog or Twitter account - it's likely going to be more than that. Sometimes I laugh at it. Sometimes I introspect a bit. Sometimes I put on my headphones and shut off the stream. And then sometimes I write a post. I don't know how I got on this weird tangent - but I'll warn you now: it's weird. It has something to do with Gary Bernhardt, my brother, and Vim - but I can tell you this much: I'm a changed guy and I'm kind of hooked on Vim.
OPCFW_CODE
Programming, coding, software development or engineering, whatever you call it, it’s the practice of writing code to deliver a set of instructions to make something run, whether that be a website, an app, a discord bot, a real robot, or even your Roblox game. But with so many different development fields and programming languages out there, it can be a bit daunting to understand where to start. That’s what I’m here for today. If you’ve taken computer science at school (or are currently doing it), that does help but if you haven’t, that’s ok. Programming is such a field where you can go from beginner to pro completely on your own without any formal qualifications. Pro tip: formal qualifications matter very little in the world of programming So I’m gonna take you through a few of the different development fields so you can find out what you might be interested in, after which we’ll talk about how to get started, where to start learning, and how to progress through the start of your programming journey. Common Programming Fields Mobile Development – the development of mobile apps for either Android or iOS. You would normally specialise in programming for either one, but hybrid mobile development exists too. For Android, the language would be Kotlin, and for iOS, it’s Swift. Game Development – the development of games. It’s common to use a game engine like Unity or Unreal as the foundation to creating your games. Unity is the easier one of the two and uses C#, while Unreal makes use of C++. Desktop Programming – the development of desktop applications you’d see on your pc. Again, there’s a wide array of languages that can be used for this but the most common are C# and C++. Data Science – the use of programming to analyse large sets of information in order to help businesses make decisions and solve problems. Common languages for this are Python, Java, and R. Robotics – the creation of real-life physical robots and the programming needed to control them. The programming needed for robots tends to be harder to perform than other programming fields as it uses low-level languages like C and C++. AI / Machine Learning – the development of code that can build improvements to itself with prospects of replicating human intelligence (although that goal is still somewhere in the future). The leading programming language for this is Python. It’s not even nearly limited to these sectors above either. Programming can also be used to do cloud infrastructure, discord bots, graphics rendering, and so much more. How to get started? After you decided where your interest lies, there are a number of ways to get started as a programmer. School – one of the most common ways people get into programming is by taking Computer Science at school and college. While this is a great way to get a broad knowledge of the world of programming and tech, it’s best when supplemented with some self-learning on any of the ways below. Online Courses – sites like Udemy, Udacity, and Coursera offer online courses (free and paid) that teach programming at any level, beginner to advanced. Some of these even offer certificates and degrees that could increase your prospects with future employers, though they don’t carry the same weight as a standard university degree. YouTube – plenty of YouTubers put up their own courses that teach programming at a beginner level such as freeCodeCamp and Programming with Mosh. This is a great way to learn programming if you don’t want to pay for anything. Official Websites – The official websites of each programming language and technology may also offer free courses on how to get started with them, though the quality of them may vary, and they tend to be more on the text side instead of the video side. As an example, look at this one from Android. How to learn quickly? One of the quickest ways to learn programming at a competent level is to learn the basics of your programming language and technology using one of the methods above, then start creating your own project. I cannot stress how powerful this is for learning. When you start your own project, you’ll figure out the entire process on how to code a complete product, be it a website, a command line tool, a bot, a mobile app, literally anything. There will be plenty that you still don’t know. Coding your project will reveal these to you and encourage you to look up those problems on StackOverflow or other coding blogs. By the time you finish your project, you will be a million times more skilfully equipped as a programmer than you were when you started. To Go University or Not To Go? This is a huge debate in the world of programming, as it’s a career where you can easily get by without going to university, and some argue that it could be even more beneficial to not go to university for computer science and instead, focus your time and energy on other things. There’s arguments to both sides: Going to University - Broad understanding of low-high level programming - Strong knowledge of theory and algorithms - University Degree Not Going to University - Strong niche understanding of chosen technology - Quicker potential progression with learning - No payment / debt - More freedom on what to learn At the end of the day, there’s no definite answer that suits everyone. You have to evaluate on an individual basis what you feel would work best for you. Now you know of the different fields of programming, how to get started with them, how to progress quickly with your learning, and whether or not you should go to university for it. If you still have any questions, reach out to me on my Instagram. It try to answer every question I get. And if you want to enjoy some of the memes that come with programming, check out my TikTok. I hope you have an amazing start to your programming journey. Happy coding ༼ つ ◕_◕ ༽つ
OPCFW_CODE
Add support for Bridging header to Flutter Plugin I was making a Flutter plugin and realised that there is no support for a bridging header to call Obj-C code in Swift for a Flutter plugin. Please add this as a feature because it makes it impossible for developers who know Swift but not Obj-C to make iOS Flutter plugins. Thanks! Hi @arnavvaryani Please checkout this possible solution Thank you Hi, I have tried this but it doesn't solve my problem, and it doesn't work for all CocoaPod dependencies, its a workaround but not a solution This isn't a Flutter issue. Include of non-modular header inside framework module ' is because https://github.com/TapResearch/TapResearch-iOS-SDK/tree/master/TapResearchSDK.framework/Versions/A doesn't support clang modules, which is required to import from Swift. You can create your own Objective-C plugin bridging headers and set them up in your podspec. @jmagman The plugin doesn't contain a Bridging header unlike the ios folder a Flutter app. The error comes not only for TapResearch but for other obj-c files I try to import in the .h file of the plugin as well. As I mentioned earlier this was my work-around due to the missing Runner-Bridging-Header.h file of the Flutter plugin template @arnavvaryani You can add any files you want to your plugin, including a bridging header. You don't need to just use the default plugin template. @jmagman Could you suggest how do I add a modulemap for only an Obj-C supported dependency? @arnavvaryani The dependency itself would need to add the support, so I gather in your case that would be a job for the TapResearchSDK maintainers to become good framework citizens and support Swift. Since they don't support it, you need to add the bridging header yourself and add it to your CocoaPods podspec. But Flutter shouldn't need to automatically add a bridging header for Swift plugins, it's up to the plugin authors to decide they need it. So if TapResearch adds a modulemap it will get recognised in the flutter plugin .h file itself? Oh, you're trying to import it from your .h, not your Swift file? That should just work without a bridging header. I just tried it and it builds fine: s.dependency 'TapResearch', '2.0.11' #import <Flutter/Flutter.h> #import <TapResearchSDK/TapResearchSDK.h> @interface TestPlugin : NSObject<FlutterPlugin> @property TRReward *reward; @end To import it to your Swift file either TapResearch needs to add a modulemap, or you need to add a bridging header. Could you show an example as to how to add a bridging header in a Flutter plugin to call an Obj-C header file in Swift code? @arnavvaryani I think that's a question for Stack Overflow or another help group. It's a CocoaPods question, not specific to Flutter. So this question was asked on Stack before - https://stackoverflow.com/questions/52932436/how-to-add-bridging-header-to-flutter-plugin, but no one seems to have a solution for it. If possible could you give some clarity on this answer? I'll reopen this as a documentation issue. @arnavvaryani The link you added already has the answer https://stackoverflow.com/a/60262749. Look for CocoaPods answers, not specific to Flutter plugins. Hi @jmagman, so I tried defining the modulemaps myself and it worked perfectly, sorry for all the trouble! Thanks a lot for your help.
GITHUB_ARCHIVE
The core of the Kodu project is the programming user interface. Up to now, it was almost impossible to create a video game if we didn't have any programming knowledge, but now, thanks to Kodu, anyone can develop their own adventure in a matter of minutes, as long as they have some originality. The language is simple and entirely icon-based. One-line summary: 10 characters minimum Count: 0 of 55 characters 3. This can, obviously, build into something quite complex, but it's reasonably easy to set up - most of the time you're just choosing icons, there's no text editing involved. Anyone can use Kodu to make a game, young children as well as adults with no design or programming skills. The download was scanned for viruses by our system. Click the Download button on this page to start the download. The application within their power does not differ anything from other professionals, paid solutions available on market. All you really need is patience and a little imagination. There is very little need for a tutorial here, everything is pretty self explanatory. It also takes up a lot of space on my hard drive. It is designed to be accessible for children and enjoyable for anyone. Aimed at children, although accessible to anybody, Kodu offers a high-level language that incorporates real-world primitives: collision, color, and vision. We also recommend you to check the files before installation. Kodu latest version: Game programming environment for kids. What's New in Version 2. It is designed to be accessible for children and enjoyable for anyone. Kodu provides an end-to-end creative environment for designing, building, and playing your own new games. Kodu is a visual programming language made specifically for creating games. Cons Can be difficult to figure some of the training exercises out, haven't done them all yet, and still very new to the program. Configuring Kodu Game Lab for best performance. Click here to get file. This file was last analysed by Free Download Manager Lib 103 days ago. There are no boring text-based lists here, either. Advanced SystemCare - Maintain and optimize your computer - Download Vid. A terrain editor brings point-and-click simplicity to the task of creating your game world. And if you have any problems creating games of your own, then don't worry, you can always load a sample game instead. Kodu Game Lab Tutorial - How to Make a Game. Microsoft Kodu Game Lab is a new visual programming language made specifically for creating games. Aimed at children, although accessible to anybody. Once your account is created, you'll be logged-in to this account. . Kodu can be used to teach creativity, problem. You do not need to download both files. Summary: optional Count: 0 of 1,500 characters The posting of advertisements, profanity, or personal attacks is prohibited. The Kodu language is designed specifically for game development and provides specialized primitives derived from gaming scenarios. This development environment, conceived so anyone can use it, is very intuitive and clearly puts our own imagination as the limit to what we can do, allowing us to create very simple titles in a matter of minutes or real masterworks by spending the time necessary. If you are looking for Alternative this will be hard because Microsoft product is a learning environment for kids. When you bump into an object, lose some health maybe; when the character sees an object, move towards it; when the player hits a particular key then perform some action; when the player moves the mouse then move the character accordingly. The possibilities of this app can be seen in action by playing games like Call of Duty: Strike Team, Bad Piggies, Deus Ex: The Fall, Dead Trigger 2, République, Wasteland 2 or Max: The Curse of Brotherhood. You can talk one-to-one or in group chats, and because you're always logged in there's no way to miss messages. CryEngine is the free game engine that gives you a combination of feature-rich development technology to create single and multiplayer games. Do one of the following: To start the installation immediately, click Run. Kodu Game Lab Alternative software from Amazon. This video game editor has been created by Microsoft so that any user, by means of a simple set of tools, can develop a spectacular game. There is no real alternative to Kodu Game Lab, but you can always create games using Amazon Lumberyard, Godot, Cocos Creator, Unity, Unreal Engine, Cry Engine and even Blender with bundled game engine. Free descargar kodu game lab Download - descargar. It runs on Xbox 360 and Windows, includes an interactive terrain editor for creating worlds of arbitrary shape and size, a bridge and path builder, and offers 20 characters with different abilities. Cons Limited to the in game robots. There is a wide selection of things characters in your game can do, such as Jump, Eat, Score Points, Play Sounds, Play Music, Glow, Talk, the list goes on and on. A graphics card that supports DirectX 9. Cons: 10 characters minimum Count: 0 of 1,000 characters 5. Please submit your review for Kodu 1. Microsoft Kodu Game Lab provides an end-to-end creative environment for designing, building, and playing your own new games. We also get your email address to automatically create an account for you in our website Softfully. Even if your phone is turned off, WhatsApp will save your messages and display them as soon as you're back online. While not as general-purpose as classical programming languages, Kodu can express advanced game design concepts in a simple, direct, and intuitive manner.
OPCFW_CODE
what I am thinking is that it might be that the interface is seen but not being interfaced to correctly, which can happen with the right firmware but the wrong version of it, and that is why i am curious if the interface can scan for access points. I had a similar problem with my old work laptop (thank god the new one came with an intel pro wifi card) where the firmware was correct but there was a newer version of it that i needed, though they weren't referred to as different version numbers just as a 1309 and 1309a ..... thus broadcom infinitely fails at life. <br><br>-Adam<br><br><div><span class="gmail_quote">On 5/24/07, <b class="gmail_sendername">terry</b> <<a href="mailto:email@example.com">firstname.lastname@example.org</a>> wrote:</span><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"> On 5/24/07, terry <<a href="mailto:email@example.com">firstname.lastname@example.org</a>> wrote:<br>> On 5/24/07, Adam Miller <<a href="mailto:email@example.com">firstname.lastname@example.org</a>> wrote:<br>> > when you type "ifconfig" in the terminal does the interface show up <br>><br>> Yes, it shows up as eth1 (eth0 is the wired NIC)<br>><br>> > (normally as eth1 or wlan0) ... if so does "sudo iwlist <the interface name<br>> > here> scan" give you a display of wifi networks in range (that command <br>> > should be entered in the terminal as well)<br>><br>> Did not try "sudo iwlist eth1 scan"<br>><br>I did however try manually setting essid:<br> sudo iwconfig eth1 essid linksys<br><br>And then ran <br> sudo dhclient<br>It did not find dhcp server and did not acquire IP info. So I<br>manually assigned IP info.<br> sudo ifconfig eth1 <a href="http://192.168.1.29">192.168.1.29</a><br>And then tried to ping the WiFi router: <br> ping <a href="http://192.168.1.1">192.168.1.1</a><br><br>But nothing....<br>--<br> <><<br><br>--<br>xubuntu-users mailing list<br><a href="mailto:email@example.com">firstname.lastname@example.org </a><br>Modify settings or unsubscribe at: <a href="https://lists.ubuntu.com/mailman/listinfo/xubuntu-users">https://lists.ubuntu.com/mailman/listinfo/xubuntu-users</a><br></blockquote></div><br><br clear="all"><br>-- <br>
OPCFW_CODE
This is a fantastic opportunity to leapfrog into a rewarding career. Join a fun and flexible workplace where you'll enhance your skills and build a solid professional foundation. As a software consultant you will work on the next generation of cloud-based business software applications. You will work in an innovative environment and be part of a highly experienced and motivated team. You will work on real customer projects gaining invaluable experience and perspective. You will interact directly with customers, work under the guidance of a very senior expert team and learn from the best. You will gain exposure to software applications for the business that opens up doors to some of the most lucrative career paths. Your duties and responsibilities in this role will consist of: - Get an opportunity to learn industry best practices and business processes - Assist in requirements gathering, business process modelling, Fit-Gap analysis, and requirements documentation - Configure and map Customer requirements into Cloud software solution(s). - Contribute to the build of functional prototypes and software solutions as part of innovation projects - Assist in User Acceptance Testing, write test cases and test scripts - Change management - evaluate and assess the impact of a change from process, cost, efforts, and value realization perspective - Stakeholder management - based on the impact on stakeholders of a change or process remodelling, assess the impact on stakeholders, understand their perspective, manage expectations, and evaluate the best way to manage their interests - Collaboration with users, clearly understanding the business needs and be able to articulate the requirements, wants and opportunities - Identify risks and assist in mitigation planning - Learn how Agile methodology is executed on projects - Work on various tools that support our processes Requirements and prospects Desired skills - Strong analytical and problem-solving skills - Understanding of agile software development concepts and working in a team environment - Inclination and willingness to learn business processes, some experience will be an added advantage, although not mandatory - Strong communication skills - Attitude to learn new things and quickly adapt to change - A team player willing to learn and be part of a distributed team - Ability to self-manage tasks to get the job done - A passion for finding solutions to customer problems through technology - 5 GCSEs grades A*-C/9-4 or equivalent (including English Language and maths) Things to consider - The role offers long term security and the opportunity to progress into a permanent position - A great working environment to learn and grow your skills - Mentorship on the basics of working in software consulting teams - A substantial project that will properly represent your skillset to future employers. - Investment in formal training and development - Autonomy freedom and chance to make a real impact - This opportunity allows you to work on real problems, with real customers and see your work get released into a real product. - Success is what you make it. At Notion Edge, we help you make it your own. - A career in the SAP ecosystem can open many doors for you. If you're searching for a company that's dedicated to your ideas and individual growth, recognizes you for your unique contributions, fills you with a strong sense of purpose, and provides a fun, flexible and inclusive work environment.
OPCFW_CODE
Deploying Sharepoint WSP Theme to Subsite I'm a little new to all this theming, particularly in 2013, so please bear with me. I created a theme for SP2013. It's pretty similar to the default office one but with custom colours and a background image. Once I'd created the background image and .spcolor file, I uploaded them to a blank site, tested them out, and then used Design Manager in Sharepoint to export the branding as a WSP. I have been successfully deploying this branding around different sites, but I have a major issue: the branding doesn't work with Subsites. There may be a way to do it which I'm not seeing, and if there is please advise, but it is not appearing in the themes list on 'Change the Look'. I would have expected one of the following to be true: The theme is fully inherited from the main site by the sub site, and works out of the box. - this is not the case. The theme's existence is inherited by the sub site, but it needs to be manually activated in Change the Look. - Also not true, it does not appear in Change the Look. The WSP needs to be uploaded and activated on the sub site, too. - Also not true; when I go into the settings of the sub site there is no 'Solutions' link so I am not able to upload the WSP - is this something that should be the case? I wouldn't mind having to upload the WSP for every site, but not even that seems doable, so I've no idea how I deploy the theme to sub sites. Hoping someone can help me out. I'm on Sharepoint Online/Office 365. The root site collection is still on 2010 experience mode, but our SP account has been upgraded and several site collections are now on 2013. Thanks in advance for any help. Well in SP 2010, there was a publishing feature when you turn it on.. It gave you MasterPages settings link in Site settings.. Where you have a checkbox to make sub-sites inherit same Master Page as the main website! I am not sure if it is available in SP 2013 @ArsalanAdamKhatri, I did try that. It didn't inherit :/ Have you tried using the background image and theme in each subsite rather than trying to use the WSP? This sounds like a limitation of the Design Manager export process to me. There is a list on every site called "Composed Looks" which determines what you will see on the "change the look" page. My guess is your WSP created the composed look list item on the root web only. Look at site settings -> composed looks on both your subsite and root web to verify that. You may need to create the composed look yourself by hand on every subsite where you intend to deploy this theme. The Composed Looks list is a little picky in that you must select a master page from the current site, and the spcolor file from the root web's theme gallery. Any deviation from that, or if any link is incorrect, and the theme will not appear in Change the Look. You are right that you cannot deploy a WSP to a subsite; it is a site-collection-level solution only. And for what it's worth my current client is also on upgraded O365 in 2010 mode, and this process works there. clarification: by that last sentence I mean that creating the composed look by hand on the subsites works in my environment.
STACK_EXCHANGE
/* Calculator: Given an arithmetic equation consisting of positive integers, +, -, * and / (no parentheses), compute the result. EXAMPLE Input: 2*3+5/6*3+15 Output: 23.5 Solution: Priority: - Division, Multiply - Add, Subtract Use two stacks: one for numbers and one for operators. - Each time we see a number, it gets pushed onto numberStack. - Operators get pushed onto operatorStack as long as the operator has higher priority than the current top of the stack. If priority(currentOperator) <= priority(operatorStack.top()), then we "collapse" the top of the stacks: -- Collapsing: pop two elements off numberStack, pop an operator off operatorStack, apply the operator, and push the result onto numberStack. -- Priority: addition and subtraction have equal priority, which is lower than the priority of multiplication and division (also equal priority). This collapsing continues until the above inequality is broken, at which point currentOperator is pushed onto operatorStack. At the very end, we collapse the stack. */ // g++ 26-Calculator.cpp --std=c++14 #include <iostream> #include <vector> #include <stack> using std::cout; using std::endl; using std::stack; using std::vector; using std::pair; using std::make_pair; enum operation { op = 0, sub = 1, add = 2, mul = 3, divi = 4, num = 5, }; class Calc { stack<double> numberStack; stack<int> operatorStack; int priority( int op ) { switch( op ) { case sub: return 1; case add: return 1; case mul: return 2; case divi: return 2; } return 0; } void collapse() { int op = operatorStack.top(); operatorStack.pop(); int num1 = numberStack.top(); numberStack.pop(); int num2 = numberStack.top(); numberStack.pop(); switch( op ) { case sub: numberStack.push( num2 - num1 ); return; case add: numberStack.push( num2 + num1 ); return; case mul: numberStack.push( num2 * num1 ); return; case divi: numberStack.push( num2/num1 ); return; } } void insertOperatorToStack( int op ) { if( operatorStack.empty() ) { operatorStack.push( op ); return; } if( priority( op ) > priority( operatorStack.top() ) ) { operatorStack.push( op ); return; } collapse(); insertOperatorToStack( op ); } double doEvalExpr() { while( !operatorStack.empty() ) { collapse(); } double res = numberStack.top(); numberStack.pop(); return res; } public: Calc() {} double evalExpr( vector< std::pair< int, int > > expression ) { double result; for( auto token : expression ) { if( token.first == num ) { numberStack.push( token.second ); } else { insertOperatorToStack( token.second ); } } result = doEvalExpr(); return result; } }; int main() { /* expression to be evaluated: 2-6-7*8/2+5 */ vector< std::pair< int, int > > expression = { make_pair(num, 2), make_pair(op, sub), make_pair(num, 6), make_pair(op, sub), make_pair(num, 7), make_pair(op, mul), make_pair(num, 8), make_pair(op, divi), make_pair(num, 2), make_pair(op, add), make_pair(num, 5) }; Calc calc; cout<< calc.evalExpr( expression ) << endl; return 0; }
STACK_EDU