A definition typically requires a narrow focus and a precise description. However, Fishkin takes the opposite approach in his attempt to define tangible user interfaces. Perhaps part of the challenge in narrowing the scope of TUIs begins with the first word of the acronym. He calls the word “tangible” “a multi-valued attribute” (Fishkin, 348) and, in an attempt to improve the TUI’s broad definition, he developed a taxonomy with two axes.
Throughout Fishkin’s article, I thought about various TUIs and realized that most of what existed when this article was published were either art forms or items of practical use; nothing stuck out to me as an everyday, useful, and enjoyable product. While a TV remote is incredibly useful, it’s only relevant when you’re watching TV. Most of his other examples attempted to bridge the gap between motion and engaging some aspect of our senses (e.g., movement that increases a sound, or rotation that changes what one sees on a display, as in the Great Dome example).
When I think of a TUI, I think of something that’s useful and practical in our everyday lives. While this use case may not have always been true, I believe the start of these practical and enjoyable TUIs began with products like MP3 players which first debuted in 1997. Although they may now seem outdated, MP3 players and iPods were revolutionary. They allowed a person to scroll through a digital library of their music using just their finger, creating what I believe to be a pure and useful TUI.
I think Fishkin’s taxonomy was helpful as it described the wide landscape of TUIs and the various environments they fit in. However, I felt that each category within embodiment could be adjusted to include just about any useful tangible object. Because the category is so large, it seems as though almost any item from our current environment has a home on the list. To improve this, I think narrowing the scope of what can be included within the embodiment axis of a TUI taxonomy would be helpful.
With regards to metaphors, I agree with Fishkin’s argument that “once parts of an interface are made physically tangible, a whole realm of physically afforded metaphors becomes available” (Fishkin, 349). He also goes on to define a metaphor, in a TUI sense, with the precision required of a definition. There’s no ambiguity about what could be classified as a metaphor (when referring to TUIs); there’s a clear “yes” or “no” response to whether “a system effect of a user action [is] analogous to the real-world effect…” (Fishkin, 349). This axis and definition help to narrow the scope of the wide-ranging world of tangible user interfaces.
|
OPCFW_CODE
|
Compile times inconsistent with module dependents count?
$ tsc --version
Version 1.9.0-dev.20160505
$ node --version
v5.10.1 # in case it's relevant to this discussion
{
"compilerOptions": {
"declaration": false,
"jsx": "react",
"module": "system",
"newLine": "LF",
"noEmitOnError": true,
"noImplicitAny": true,
"noImplicitReturns": true,
"noLib": true,
"removeComments": false,
"rootDir": ".",
"sourceMap": true,
"target": "es5"
}
}
We have a relatively modest ~210k lines of code/type definitions etc in our application.
Some of our core files are import-ed by many files (e.g. a core utility library). i.e. there are many dependents on such modules. Let's call this Group 1.
Other files, e.g. tests, are never import-ed by any other files. i.e. there are zero dependents on such modules. Let's call this Group 2.
We use tsc --watch to recompile when any of the files as referenced by tsconfig.json are modified.
The bizarre thing is that compile times are relatively constant, regardless of whether a file modification happens to a file from Group 1 or Group 2. In the following relatively unscientific tests, I simply touch-ed a file from the named group, i.e. bumped its modification time, and observed the time stamps output from tsc --watch:
# e.g.
6:59:04 PM - File change detected. Starting incremental compilation...
6:59:12 PM - Compilation complete. Watching for file changes.
Group 1: > 100 dependents: ~7-11 secs
Group 2: 0 dependents: ~7-11 secs
Let's ignore for a second the fact that there is a wide range of compile times.
This doesn't seem to make much sense (but then I know very little about the compiler!) because Group 2 files have zero dependents.
Gut instinct would tell me that the re-compile times when changes are limited to a single Group 2 file should be an order of magnitude lower.
Can someone either correct this false expectation or chime in here?
Clearly we, like others, would love to see compile times fall as much as possible because it makes the development lifecycle that much more pleasant, so I'd be more than willing to help debug/provide analysis of the above observation.
Thanks
The bizarre thing is that compile times are relatively constant,
There is no optimization in place for modules in tsc --watch or Compile on Save in VS. When a file is changed, a recompilation is triggered, and that builds all output.
Adding the optimization is tracked by https://github.com/Microsoft/TypeScript/issues/3204
Thanks - that answers it.
@mhegazy you mentioned that optimisation of the kind referenced above should be covered in #3204. In a recent update to #3204 you mention 'this' is covered in #9837.
Please can you help point out what changes are required in order to optimise tsc --watch given the changes in #9837?
As of 2.1.0-dev.20161003 I'm not seeing any difference in compile times between the two categories referenced above.
Thanks
Sorry for the confusion. We have two features that do more or less the same thing in different contexts: --w, which is mainly for tsc on node, and Compile on Save in IDEs. We have recently updated compile-on-save to use a new optimized builder that tells it what to build when a file changes (see #9837). We need to port the --watch implementation to run on top of that; this is covered by https://github.com/Microsoft/TypeScript/issues/10879.
the proposal in #3204 was for a specific use case, the implementation in #9837 covered this and adds a new set of optimizations as well.
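The dependents-based optimization under discussion can be sketched in a few lines. This is an illustrative model only (written in Python, with made-up module names), not how the TypeScript compiler is actually implemented: when a file changes, only that file plus its transitive dependents need re-emitting, so a "Group 2" file with zero dependents would rebuild alone.

```python
# Illustrative sketch of minimal rebuild via transitive dependents.
# Module names below are hypothetical examples, not real project files.
from collections import defaultdict

def affected_files(imports, changed):
    """imports maps each module to the modules it imports.
    Return the set of modules that must be rebuilt when `changed` changes."""
    dependents = defaultdict(set)          # invert the import graph
    for module, deps in imports.items():
        for dep in deps:
            dependents[dep].add(module)
    to_rebuild, stack = {changed}, [changed]
    while stack:                           # walk transitive dependents
        for dependent in dependents[stack.pop()]:
            if dependent not in to_rebuild:
                to_rebuild.add(dependent)
                stack.append(dependent)
    return to_rebuild

imports = {
    "utils": [],                 # like Group 1: many modules depend on it
    "app": ["utils"],
    "widget": ["utils", "app"],
    "utils_test": ["utils"],     # like Group 2: nothing imports it
}
print(sorted(affected_files(imports, "utils")))       # everything rebuilds
print(sorted(affected_files(imports, "utils_test")))  # only itself rebuilds
```

Under this model, touching a zero-dependent test file would indeed be an order of magnitude cheaper than touching a core utility, which matches the expectation stated above.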
Thanks for the link to #10879 - that looks like it covers things
|
GITHUB_ARCHIVE
|
M: Elon Musk says Tesla's stock price is too high - Eduardo3rd
https://www.theverge.com/2020/5/1/21244136/elon-musk-tesla-stock-price-too-high-fall-tweet
R: RandomGuyDTB
In any other world, this would be newsworthy. Even after Donald Trump, this
would be newsworthy. But Elon Musk tweeted that we should open the country up
in the middle of a pandemic. I've lost all my respect for him. He is, at the
end of the day, another figure on Twitter, and one whose actions have very
little if any actual effect on most people's lives - right now in particular
it seems like the effect he would have on most of my friends and myself would
only be negative.
Elon Musk isn't the new Nikola Tesla. He isn't a real-life Iron Man. Elon Musk
is the new Jack Parsons[1] and I'm certain time will prove me right.
(( to be totally clear, i appreciate OP for sharing this. i just think it's
very weird that the Verge of all agencies is even covering this when an
article like this would be much better suited for a financial publication ))
[1] - Jack Parsons -
[https://en.wikipedia.org/wiki/Jack_Parsons_(rocket_engineer)](https://en.wikipedia.org/wiki/Jack_Parsons_\(rocket_engineer\))
|
HACKER_NEWS
|
==contains the profile links of those players who:
1. are noobs, and only noobs, NOT newbies: noobs will most probably always remain noobs, but a newbie could be a new impaller or mian or x or y or z in the future ...
[for those who don't know who the noobs are: players who have played more than 300 games but still don't know how to play well.]
[yes, we could set a minimum win % a player must have, but that is not practical.
suppose x played one 1v1 game and accidentally got booted; then his win % would be 0% in 1v1 games.
but on the other hand x played 1000 FFA games where his win % is 100%; so if I make a good-settings 3v3 game requiring players to have at least a 30% win rate in 1v1, it would be bad that x can't join it.]
2. drag out the game.
[perrin had said "while annoying at times.. It is each players job to surrender when they feel they've lost.. "
solution: the reporter must have the link of the game in which the offender wrote in public chat "Ha ha! vote to end or I will take as much time as I can" or something like that :P ]
3. don't vote to end when someone is booted before the 1st turn, i.e. the 0th turn.
[by the 0th turn I mean: the game does not have random distribution, and it is the phase when players choose where to place their starts.]
==this page will NOT contain those who:
1. boot others immediately at boot time.
[if you don't want people to be booted at the boot time in games you make.. make the boot time longer?]
2. set higher bonuses in a map for their advantage.
[because it's your mistake that you did not check the settings before joining.]
[because we have a report feature for that; for those who don't know: fizzer can suspend/ban accounts.]
4. don't take their turn when they start losing and get booted.
[because we can already set the maximum boot % a player may have when making the game.]
procedure for adding a player to that wiki page:
the owner of that wiki page must have at least 3 reports from the warlight community, with game links.
benefits of this idea:
1. a good gaming experience.
2. people will try to listen to what their team-mates advise, so that they don't make silly mistakes and get their name on the list of noobs.
3. once this page is popular, people will not drag out games unnecessarily and will try not to be noobs, as they will fear getting blacklisted "officially".
4. people will vote to end "unfair games".
if you think there is no need for this, then look @ this :
sorry if i am "missing out something" or i am "wrong somewhere" or we already have a "wiki" page related to it or this idea was "already declined" by the warlight community ...
|
OPCFW_CODE
|
Visual Studio 2010 Cannot Step Into Dll
I am stuck at the start of the debug process: Visual Studio 2010 will not step into my DLL, and I am left staring at MFC source and scratching my head. When this strange behaviour occurs, it also does not break on exceptions but simply ignores them, and I get "No symbols have been loaded for this document." Please help.

Answers collected from the thread:

Early versions of Visual Studio need the exact PDB that was built with the DLL; later versions are more flexible. If the wrong symbols are cached, empty the symbol cache. In my case the problem was that a PDB file from the Microsoft Symbol Servers, which does NOT contain file and line number information, got into my local symbol cache. The link that helped steer me correctly is http://qualapps.blogspot.com/2007/04/symbols-for-mfc-source-code.html: my environment did not have the C:\Windows\Symbols\dll path configured in the Visual Studio debug symbol paths, and once it was, the "good" symbol file from that path was retrieved and copied into my local symbol cache.

Check the active configuration: with the project open but not running, look at the Standard toolbar (it is probably displayed because it's the default). Mine was set to Release instead of Debug.

Check where the assembly is actually loaded from. The assembly I was trying to step into was referenced from a location in my regular file system, but at runtime it was being loaded from the GAC, and the debug output showed: Loaded 'C:\windows\Microsoft.Net\assembly\GAC_MSIL\System.ServiceModel.Channels\...\System.ServiceModel.Channels.dll', Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled. User code is determined by the PDB file and by optimization, so disable 'Just My Code' and debug an unoptimized build if you want to step in.

For a native Win32 app with a breakpoint at a call into a function that exists in a mixed-mode DLL, set the debugger type to Mixed; otherwise native code cannot step into the managed code it calls.

For an ASP.NET C# application consuming a WCF service locally: is ASP.NET debugging enabled for it? (Right-click on the WCF project, go to Properties | Web.) I've also had this happen to projects when the references are messed up, and I have once or twice had that checkbox come mysteriously unchecked (maybe another dev checked it in that way, maybe there is some bug lurking in VS).

To step into MFC/CRT source, add the source paths to the solution. The common properties on the solution have the following entries for "Debug Source Files": C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\crt\src\ and C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\atlmfc\src\mfc\. Without source you will still be able to step into the function, but you won't see any source code.
|
OPCFW_CODE
|
How It Works
If your organization has a reliable development team, and you will be incorporating Voxy into existing tools and workflows for administrators or learners, then the Partner API provides options to easily manage users and view performance data. The Partner API is a self-service tool that organizations can use to design, develop, and maintain their own custom workflows. You can see all available endpoints here in the Partner API Docs.
Some of the most frequently used include:
- Registering a new user
- Retrieving user assessment data
- Listing the activities a user has completed
- Retrieving a user’s time on task within a date range
It usually takes just a few days to get your team started. Here’s how it will work:
- Work with your technical team to review the Partner API Docs and prepare a basic plan for implementation.
- Share your plan with your Voxy Manager. They can schedule a more in-depth technical consultation for you, if necessary.
- Voxy will share your organization’s API Key and Secret and provide access to our GitHub repository of code examples in various programming languages.
- Your team will complete development according to your custom use case.
- We highly recommend full end-to-end testing before launching a new program with an API integration or before releasing major API updates to production. If you’re not sure what to test, contact your Voxy Manager or Voxy Partner Support.
Our most successful API integrations are used to automate user creation and account updates. This works best if your learners will discover Voxy inside an existing LMS or portal and use a flow that you design to self-enroll.
You can easily use the Partner API to create users, block access (e.g. expire users), and move them between feature groups with different entitlements. You will need to contact Voxy Partner Support to create or update feature group entitlements.
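As a rough sketch of what an integration script might look like, the snippet below builds (but does not send) a user-creation request. The base URL, endpoint path, payload fields, and header names here are all placeholder assumptions for illustration; consult the Partner API Docs for the real endpoints and authentication scheme.

```python
# Hypothetical sketch of a Partner API call. The URL, payload fields,
# and auth headers are assumptions, not taken from the Partner API Docs.
import json
import urllib.request

API_KEY = "your-org-api-key"        # provided by Voxy (placeholder)
API_SECRET = "your-org-api-secret"  # provided by Voxy (placeholder)
BASE_URL = "https://api.example.com/partner/v1"  # placeholder base URL

def build_create_user_request(email: str, first_name: str, last_name: str):
    """Build (but do not send) a POST request that registers a new user."""
    payload = json.dumps({
        "email": email,
        "first_name": first_name,
        "last_name": last_name,
    }).encode("utf-8")
    return urllib.request.Request(
        url=f"{BASE_URL}/users",
        data=payload,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "X-Api-Key": API_KEY,       # assumed header names
            "X-Api-Secret": API_SECRET,
        },
    )

req = build_create_user_request("learner@example.com", "Ada", "Lovelace")
print(req.get_method(), req.full_url)
```

In a real integration you would pass the built request to `urllib.request.urlopen` (or your HTTP client of choice) and handle the response and errors; keeping request construction separate makes it easy to test without hitting the live API.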
User management via API is compatible with multiple authentication methods (see Authentication Integration Overview). If you are also considering SSO with SAML or OIDC, we recommend using just-in-time user creation to create accounts.
Reporting & Dashboards
If you are interested in using the Partner API to set up custom reporting or dashboards, we strongly encourage you to review the Partner API Docs in detail to ensure that the available data meets your needs. While Voxy remains committed to the ongoing functionality of existing API endpoints and the integrity of data returned, we add new features and reports to the Voxy Command Center more frequently than the Partner API.
Voxy can provide an initial consultation and support as you get started, but we have limited resources to assist in the design and development of customer applications. If your organization does not have a dedicated team to develop and maintain custom workflows, we highly recommend using the Voxy Command Center for user management and reporting.
If you have trouble with an existing integration, you can contact Voxy Partner Support with your request. For most support requests, we will ask for the following information before we can escalate to a technical expert:
- An overview of the expected and observed behavior
- Full request and response with headers and authorization
- Identifiers for any specific users (if applicable)
Ready to Get Started?
If you have any general questions, you can contact Voxy Partner Support. If you are ready to get started, contact your Voxy Manager today!
Looking for SSO?
If you are interested in a single sign-on (SSO) integration, please check out our Authentication Integration Overview.
|
OPCFW_CODE
|
Software Protection / Licensing Systems for .NET. Post your question and get tips and solutions from the community. I wonder if someone has experience with third-party software for protection and licensing of our C#/.NET projects in AutoCAD; I'm in the market for a new software licensing system. What a good licensing system does is raise the bar sufficiently high that purchasing your software is the better option. Not all software has a product key, and CD-key protection has been likened to Digital Rights Management.
Options that came up in the community:
- Manco .NET Licensing System: a licensing and copy-protection system for .NET and Windows 32-bit applications, with components and ASP.NET licensing.
- SoftwareKey Licensing System / Protection PLUS: described as an industry standard for software licensing.
- The SoftwareShield System: a licensing and copy-protection system for software, built on ASP.NET.
- Flexera Software's FlexNet Licensing: aims to prevent revenue loss with software piracy protection.
- ActiveLock: an open-source copy-protection and software-licensing framework.
- Microsoft Software Licensing and Protection (SLP) Services: a suite of licensing components that address the two most important concepts in software protection.
- Obfuscators such as .NET Reactor, Dotfuscator Community Edition, Eazfuscator.NET, and SoftActivate Licensing add additional layers of protection.
Related notes from the thread: Microsoft volume licensing programs for Windows Server, SQL Server, and Forefront reduce overhead and software management costs; the Office Software Protection Platform service is generally started by Windows itself, and licensing errors after removing software may indicate that the Software Protection service has been disabled.
|
OPCFW_CODE
|
M: CarbonDraft: Github for non-Coders - spolu
http://carbondraft.cc/#1
R: splatterdash
How does this stack up against Google Docs or other similar collaborative
document websites/apps? The only obvious difference I see is versioning
support, but I doubt versioning works well with regular documents.
R: macbutch
I don't know _how_ but it claims to support Microsoft Office and PDFs. Would
be good if there was a bit more info on the site (like screenshots)...
R: spolu
We're based on the Crocodoc API. As specified above, we won't offer the ability to
modify the document, but rather to start a discussion around it through a
coherent issue tracking system.
R: driverdan
I like Notable:
<https://www.notableapp.com>
It's primarily for websites but can be used for anything, from PDFs to images.
R: spolu
We love notableapp and want to bring this + more issue tracking to the masses!
R: davekinkead
I'd be keen to try it but it seems your email validation only accepts first
order domains which kind of rules out anyone using .co.uk, .com.au etc :(
R: spolu
ouuuhh my bad! should be fixed.
R: xyzzyb
For designers, how is this different than Pixelapse?
<http://www.pixelapse.com/>
R: spolu
We'll be offering a much more complete suite of tools to collaborate around a
draft: box, pin, and arrow annotations with issue tracking and issue-related
discussion.
The vision really is GitHub for the Rest of Us :)
R: xyzzyb
Sounds like a cool project. Good luck :-)
R: spolu
thanks!
R: etherealG
disappointed to see it posted here without access to use it. why not wait
until the beta is ready?
R: spolu
We're actually trying to build the beta with you guys. we've been actively
contacting everyone who signed up to help us with our customer development
process.
The first beta accesses should be out very soon!
|
HACKER_NEWS
|
CPU stands for Central Processing Unit, and it is the primary component of a computer that performs most of the processing tasks required for executing instructions of a computer program. It serves as the “brain” of the computer, carrying out instructions from the computer’s memory, performing calculations, and managing the flow of data between different parts of the computer system.
A CPU typically consists of several key components:
- Control Unit (CU): This component manages the execution of instructions by interpreting and decoding instructions fetched from the computer’s memory. It controls the flow of data and instructions within the CPU and coordinates the operations of other CPU components.
- Arithmetic Logic Unit (ALU): This component performs mathematical calculations (such as addition, subtraction, multiplication, and division) and logical operations (such as AND, OR, and NOT) required for processing instructions.
- Registers: These are small, high-speed storage locations within the CPU that hold data and instructions being processed. Registers store data temporarily for processing and act as storage for intermediate results during calculations.
- Cache: This is a small, high-speed memory that stores frequently accessed data and instructions to reduce the CPU’s access time to main memory, improving overall system performance.
- Clock: The CPU operates based on a clock that provides a timing signal to synchronize the activities of the CPU components. The clock regulates the speed at which instructions are fetched, decoded, and executed, and it is measured in clock cycles per second, commonly known as “hertz” (Hz).
- Instruction Set: The CPU is designed to understand and execute a specific set of instructions known as the instruction set architecture (ISA). The instruction set is a collection of commands that the CPU can interpret and execute, ranging from basic arithmetic and logical operations to more complex instructions for data manipulation, memory access, and control flow.
- Memory Management Unit (MMU): In systems with virtual memory, the MMU is responsible for translating virtual addresses used by programs into physical addresses used by the CPU to access the main memory. It helps manage the allocation and retrieval of data from the computer’s memory.
- Bus Interface: The CPU communicates with other components of the computer system, such as memory, input/output devices, and other peripherals, through buses. Buses are a collection of electrical pathways that transmit data and control signals between different parts of the computer system.
- The CPU follows a series of fetch-decode-execute cycles to process instructions and perform calculations. In the fetch step, the CPU fetches an instruction from the computer’s memory. In the decode step, the instruction is decoded to determine what operation needs to be performed. In the execute step, the CPU performs the operation or calculation specified by the instruction. The results are then stored in registers or memory for further processing or to be sent to output devices.
- Overall, the CPU’s role is critical in the functioning of a computer system, as it performs the majority of the processing tasks required to execute instructions and carry out computations, making it a fundamental component of modern computing.
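The fetch-decode-execute cycle described above can be illustrated with a toy interpreter; the opcodes here are invented for illustration and do not correspond to any real instruction set:

```python
def run(program):
    """Fetch-decode-execute loop over a toy accumulator machine."""
    acc = 0                      # a single "register"
    pc = 0                       # program counter
    while pc < len(program):
        op, arg = program[pc]    # fetch the instruction, then decode it
        pc += 1
        if op == "LOAD":         # execute: load a constant
            acc = arg
        elif op == "ADD":        # execute: add to the accumulator
            acc += arg
        elif op == "MUL":        # execute: multiply the accumulator
            acc *= arg
        elif op == "HALT":       # stop the cycle
            break
    return acc

# (2 + 3) * 4
result = run([("LOAD", 2), ("ADD", 3), ("MUL", 4), ("HALT", None)])
```

A real CPU does the same loop in hardware, with the program counter and accumulator held in registers and the decode step driven by the instruction set architecture.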
Algorithms to perform the real-time processing in the trigger and the reconstruction of both real and simulated detector data are critical components of HEP’s computing challenge. University personnel, including graduate students and post-docs working on physics research grants, frequently develop and maintain innovative algorithms and implementations. These algorithms face a number of new challenges in the next decade due to new and upgraded accelerator facilities, detector upgrades and new detector technologies, increases in anticipated event rates, and emerging computing architectures. Tracking for the HL-LHC is an area in particular need of novel approaches, though the Institute will pursue other high-impact applications. The Institute will employ a wide range of strategies for the development of Innovative Algorithms.
ACTS: Development of experiment-independent, inherently parallel track reconstruction.
- Compact Representation of Uncertainty in Hierarchical Clustering, C. Greenberg, S. Macaluso, N. Monath, J. Lee, P. Flaherty et al., arXiv 2002.11661 (26 Feb 2020).
- Set2Graph: Learning Graphs From Sets, H. Serviansky, N. Segol, J. Shlomi, K. Cranmer, E. Gross et al., arXiv 2002.08772 (20 Feb 2020).
- Mining gold from implicit models to improve likelihood-free inference, Proceedings of the National Academy of Sciences; DOI:10.1073/pnas.1915980117 (20 Feb 2020) [23 citations].
- Normalizing Flows on Tori and Spheres, D. Rezende, G. Papamakarios, S. Racanière, M. Albergo, G. Kanwar et al., arXiv 2002.02428 (06 Feb 2020).
- Mining for Dark Matter Substructure: Inferring subhalo population properties from strong lenses with machine learning, The Astrophysical Journal, Volume 886, Number 1; DOI:10.3847/1538-4357/ab4c41 (19 Nov 2019) [7 citations].
- The frontier of simulation-based inference, K. Cranmer, J. Brehmer and G. Louppe, arXiv 1911.01429 (Submitted to National Academy of Sciences) (04 Nov 2019) [1 citation].
- Hamiltonian Graph Networks with ODE Integrators, A. Sanchez-Gonzalez, V. Bapst, K. Cranmer and P. Battaglia, arXiv 1909.12790 (27 Sep 2019) [1 citation].
- Etalumis: Bringing Probabilistic Programming to Scientific Simulators at Scale, A. Baydin, L. Shao, W. Bhimji, L. Heinrich, L. Meadows et al., arXiv 1907.03382 (07 Jul 2019) [2 citations].
- A hybrid deep learning approach to vertexing, R. Fang, H. Schreiner, M. Sokoloff, C. Weisser and M. Williams, arXiv 1906.08306 (Submitted to ACAT 2019) (19 Jun 2019).
- Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Architectures with the CMS Detector, G. Cerati, P. Elmer, B. Gravelle, M. Kortelainen, V. Krutelyov et al., arXiv 1906.02253 (05 Jun 2019) [2 citations].
- FPGA-accelerated machine learning inference as a service for particle physics computing, J. Duarte, P. Harris, S. Hauck, B. Holzman, S. Hsu et al., Comput.Softw.Big Sci. 3 13 (2019) (18 Apr 2019).
- Machine learning and the physical sciences, G. Carleo, I. Cirac, K. Cranmer, L. Daudet, M. Schuld et al., Rev.Mod.Phys. 91 045002 (2019) (25 Mar 2019) [22 citations].
- The Machine Learning Landscape of Top Taggers, G. Kasieczka, T. Plehn, A. Butter, K. Cranmer, D. Debnath et al., SciPost Phys. 7 014 (2019) (26 Feb 2019) [24 citations].
- Efficient Probabilistic Inference in the Quest for Physics Beyond the Standard Model, A. Baydin, L. Heinrich, W. Bhimji, L. Shao, S. Naderiparizi et al., arXiv 1807.07706 (20 Jul 2018) [3 citations].
- Machine Learning in High Energy Physics Community White Paper, K. Albertsson, P. Altoe, D. Anderson, J. Anderson, M. Andrews et al., J.Phys.Conf.Ser. 1085 022008 (2018) (08 Jul 2018) [41 citations].
- Adversarial Variational Optimization of Non-Differentiable Simulators, G. Louppe, J. Hermans and K. Cranmer, arXiv 1707.07113 (22 Jul 2017) [10 citations].
- QCD-Aware Recursive Neural Networks for Jet Physics, G. Louppe, K. Cho, C. Becot and K. Cranmer, JHEP 01 057 (2019) (02 Feb 2017) [85 citations].
We collaborate with groups around the world on code, data, and more. See our project pages for more.
Use phone location data for photos
I used a Canon 6D before which geo-tagged my photos, but switched to a 5D mk III for a while which does not have a built-in GPS. I miss this information a lot. Canon external GPS is expensive and clumsy.
Is there an app for smartphones that records your GPS track during the day you take photos, and then software that loads the coordinates into the pictures' geotag EXIF info from the GPS tracks, based on the timestamp?
One option: bring your Canon 6D but leave it in the bag. It has a feature to track the GPS location in a log file, and you can use that log file to update your Canon 5D Mark III photos. I believe the Canon software that comes with the cameras can do this. I know the 7D Mark II has this log-only GPS tracking feature, which I sometimes use with my Canon T4i.
Another option I saw years ago on YouTube is to download a cell phone app to track your GPS locations and use third-party software to update the GPS location on your files. The video I saw this done in is linked below.
https://www.youtube.com/watch?v=Dp1KCkItmf4
I found a more recent video after watching yours: https://www.youtube.com/watch?v=9tV0piYKczI
I'm against the suggestion by @thebtm because I would rather save the battery on the camera and use the smartphone.
I have used the Android app OSMAnd to record GPS traces. It's as good as your phone's GPS allows, and you can set the record interval to be longer or shorter to prioritise battery life or accuracy. The free version of the app will do what you need.
I can't recall what software I used to automatically match exif times to GPS times and Geo tag the images, sorry about that.
Check out exiftool
Currently supported GPS track log file formats:
GPX
NMEA (RMC, GGA, GLL and GSA sentences)
KML
IGC (glider format)
Garmin XML and TCX
Magellan eXplorist PMGNTRK
Honeywell PTNTHPR (see Orientation)
Bramor gEO log
Winplus Beacon .TXT
Exiftool is a free, command line tool available on major platforms (it's based on Perl).
It has a bit of a learning curve but once you get it, write down the procedure for next time.
I have used it to organize my photos (rename and move them based on the EXIF date) but haven't tried the GPS functionality (yet).
There are indeed several Android apps that can use the phone's GPS to record a track, and probably several ways to use those recorded tracks to tag your photos, depending on what OS you use (I use Digikam with Marble on Linux).
Two things to keep an eye on, though:
Make sure the format in which you record the track can be used by the program you use to mark your photos.
Make sure you have the correct time on your camera, or better, take a picture of the GPS tracker screen with the timestamp readable, and use that to measure the difference between the two clocks. The GPS clock is much more accurate than your camera clock...
The method I use: take one photo at the end of the tracking to record the GPS time at that moment, then use the time recorded in the EXIF to measure the difference, and use that offset in the tagging application (Digikam/Marble in my case).
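The matching workflow the answers describe can be sketched in Python; the track format and function names here are our own, not from any particular app:

```python
from datetime import datetime, timedelta
import bisect

def clock_offset(camera_time, gps_time):
    """Offset to add to camera EXIF times, measured e.g. by
    photographing the GPS tracker's clock display."""
    return gps_time - camera_time

def nearest_fix(track, photo_time, offset):
    """Find the track point closest in time to a photo.

    track: list of (utc_datetime, lat, lon) tuples, sorted by time
    (a hypothetical format; real tools parse GPX/NMEA logs).
    """
    t = photo_time + offset                  # move photo onto the GPS clock
    times = [p[0] for p in track]
    i = bisect.bisect_left(times, t)
    candidates = track[max(i - 1, 0):i + 1]  # neighbours on either side
    return min(candidates, key=lambda p: abs(p[0] - t))
```

Tools like exiftool's `-geotag` option do essentially this matching (plus interpolation between fixes) for you.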
Great work tutelagesystems, and welcome to the forum.
Ahh and yeah nice to see you following the Posting Rules!!
Extension: Facebook Notifications
Developer: Tutelage Systems
Latest Version: 0.3
Download: (see attachment)
Description: This handy plugin will travel off to Facebook and retrieve the notification list for you. You must be logged into facebook (just like gmail) and it will only return the number of notifications you have. If you click on the facebook logo (as seen in picture) it will take you to your facebook home page.
This is really only the first version as I hope to get more time to work on it and continue working on it.
Please let me know what you think, thank you.
* Made the toolstrip smaller so it only shows the icon and the number of new notifications.
* Updated the code to show when you are not logged in (says Login).
Last edited by tutelagesystems; 08-04-2009 at 02:29 AM.
Thanks for the welcome Chrome.
Very nice, thank you. Just one question... is there a way to disable the text "Facebook" and show only the icon?
I can definitely trim the text down and only show the Icon (#).
Thx for great extension...very similar to GMail Notify, isn't it?
There's one uncomfortable bug: if I'm logged off from facebook, this happens: http://yfrog.com/9f20090730142954p
to nandayo: just edit 'toolstrip.html' and delete the desired text. It's located in:
c:\Documents and Settings\user_name\Local Settings\Application Data\Google\Chrome\User Data\Default\Extensions\cedohcnccbniikjpmejelboihgmhalbi\0.2\toolstrip.html
Hi Guris, Yes that was one of my issues that I need to fix definitely. And yes, I did use the gMail extension as a base and extended it to fit the needs of facebook.
In function getNotificationCount I changed

    var count = getdigits(notifications);
    if(count == "") count = 0;

to:

    var count = getdigits(notifications);
    if(count == "" || count > 10000) count = 0;

I know it does not solve the problem itself, but the UI result is sufficient.
RFID readers perform important operations that make using an RFID system easy
RFID readers are used to read from, and in most cases write data to, an RFID tag. Since RFID tag systems utilize radio waves to exchange information between an RFID tag and some kind of control system (PC, PLC, DCS), RFID readers are transmitters and receivers of radio waves. Frequently, RFID readers are also called read/write heads and some people have called them RFID sensors. Independent of what they are called, RFID readers perform several very important operations that make using an RFID system easy.
The air interface
Because RFID is a noncontact technology, meaning the RFID reader does not have to touch the RFID tag to exchange information, the data exchange occurs over an air interface. The reader emits a radio field and the tag responds to that field. Just like AM and FM modulation operate different “on-the-air interfaces,” the air interface is typically very different from tag to tag, even if those tags work at the same operating frequency. It is thus the job of the RFID reader to know about those tags' specific differences and emit the necessary RF field so that the user does not have to worry about these details. Clearly, an RFID reader is more than an antenna.
Most modern designs are microprocessor-based devices that receive commands – for instance, to read a certain amount of data or to write a particular string of information to the tag – then translate this into the proper RF field modulations such that the tag executes the desired operation. Think of the RFID reader as a “translating agent” that takes in data on a cable connection and turns it into a string of data on the air interface.
The cable interface
RFID readers are typically connected to an RFID controller (in some cases, those two functional entities are located in the same housing). Modern designs utilize a standardized communication interface between the RFID reader and RFID controller. This has many advantages with respect to the RFID controller. By establishing a standardized communication interface between the RFID controller and the RFID readers, it is easily possible to develop new RFID readers and/or give them new functions without having to modify the RFID controller.
Example: Imagine an installation is using an RFID tag based on the iCode SLI chip from NXP. Years later, NXP decides to get out of the business (we certainly hope not), obsolescing this chip. At first glance, this looks like bad news and a big problem for the user. Fortunately, the installation uses IDENTControl from Pepperl+Fuchs. The worst case scenario is that the obsolete tags are replaced by new tags using chips from one of the competing vendors. There is actually a very good chance that this chip will work without any further modifications to the system or the PLC code. But even if a totally new tag is required, the RFID controller would most likely not have to be touched and only a slight modification to the PLC program may be necessary.
RFID reader size and operating range
The operating range between an RFID reader and tag depends on the operating frequency. Readers utilizing the microwave or UHF band will have significantly more range than readers operating in the low- or high-frequency band. Looking at low- and high-frequency-based RFID readers, the size of both the tag and reader – or more precisely, the antenna structure in the tag and the antenna in the reader – are the main contributing factors to the operating range. Larger antenna structures result in longer operating ranges.
The form factor of an RFID reader can have a significant impact on the application. For factory automation applications, it is most common to utilize housing designs that are identical to housings used for proximity sensors. The advantage for the user is that manufacturers have extensive experience using these housings, resulting in products with high IP ratings at an attractive price.
Read-only and read/write RFID readers
In the past, manufacturers designed readers that were capable of exchanging data with read/write RFID tags (those with memory that can be altered by the user) and readers that could only communicate with read-only (R/O) tags. Cost was the sole reason for doing this. Today, this is no longer necessary as most readers use highly integrated ASICs that allow communication with either style, making separate development not only unnecessary but needlessly expensive.
The read zone
When selecting an RFID reader, it is important to understand the effect of the read zone. The read zone is the area in front of a reader where a specific tag can be read with certainty or at least a very high probability for success. Most typically, this information is provided graphically. RFID suppliers should be able to present this information; without it, it is very difficult to determine how precisely a tag needs to be moved past the reader, how quickly a tag can move by the reader and what happens at the fringes of the read zone. The following will discuss how read zones are determined and what type of analysis is used to end up with useful information for the user.
It should be intuitively clear that not all tags from a batch behave exactly the same. Some have more and others have less read range. Similarly, there are variations from one reader to the next.
The best way to quantitatively evaluate these variations is to take a number of tags T and a number of readers R, and determine the read field for every possible combination. A read curve is produced by moving the tag past the reader at different distances. At each measurement point, the test equipment records whether the read operation was successful or not. Then the whole process is repeated at a slightly increased tag-to-reader separation. The result is R•T distinct read curves.
The information for all these read curves can then be consolidated into a single graphic that shows how many of those possible R•T reads are successful. Figure 1 shows such a consolidated curve for 8 readers and 8 tags, resulting in a total of 64 curves. The green area is where every reader/tag combination was successful. The red area shows locations where no reader/tag combination was able to read. The yellow area shows those separations and offsets where some combinations worked and others did not.
Figure 1: This graphic shows how many read attempts were successful at a specified distance and offset between the reader and the tag. Of those distance and offset values that are within the green area, every one of the 8 tags and 8 readers used for this analysis resulted in a successful read. The red area indicates no successful reads while the yellow area represents those distance offset values where at least 1 but not all tag/reader combinations were successful.
Using this information and some statistical analysis (the number of successful reads at each location follows a Gauss Distribution), it is possible to determine a “safe read zone” where the probability for a successful read is xx.x%.
Figure 2 shows such a graph. It shows the maximum range that one of the tag/reader combinations was able to reach, as well as the minimum measured range among all tag/reader combinations. Since it cannot be assumed that the measured minimum is the shortest range we should expect, assuming a Gaussian distribution allows one to determine a read curve where 99% (orange curve) or 99.9% (red curve) of all combinations will be successful. Pepperl+Fuchs always specifies the 99.9% curve as the read curve. When looking at this data from other suppliers, it is important to understand what is being presented.
Figure 2: These curves show the maximum, minimum, and average read distance measured for any tag/reader combination in the set. Applying Gaussian statistics to this data results in the orange and red curve. The red curve is what Pepperl+Fuchs specifies to be the read zone of this tag type and read type combination. It indicates that with 99.9% certainty any arbitrary tag/reader combination will be successful.
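The statistical consolidation described above can be sketched as follows; the function name is ours, and 3.09 is the one-sided 99.9% quantile of the standard normal distribution:

```python
from statistics import mean, stdev

def safe_read_range(max_ranges, z=3.09):
    """Estimate the distance (same units as the input) at which ~99.9%
    of arbitrary tag/reader combinations should still read, assuming
    the measured maximum read ranges follow a Gaussian distribution."""
    return mean(max_ranges) - z * stdev(max_ranges)

# Hypothetical maximum read ranges (mm) for 8 tag/reader pairings:
ranges = [100, 102, 98, 101, 99, 100, 103, 97]
```

With 8 readers and 8 tags one would feed in all 64 measured ranges, and the same idea applies at each lateral offset to build the full 99.9% read curve.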
Using the read curves
Having access to such read zone data is important when selecting RFID hardware or designing a machine using RFID. Looking at the red curve in figure 2, it should be clear that a setup where the tag is 120 mm away from the reader is not a good idea. While every single measured tag/reader combination was read at this distance, one must expect other combinations not to work here. Also, at this distance, the allowable lateral offset is basically zero. This means that even the slightest left/right movement due to vibration, wear, and general mechanical tolerances, would result in a no read.
Generally speaking, it is best to design a setup such that the tag is roughly between 40% and 80% of the maximum read range in this case. At this distance the read zone is the widest, providing maximum tolerance. Figure 3 shows this setup superimposed onto an arbitrary read curve.
Figure 3: The ideal tag-to-reader separation is roughly between 40% and 80% of the maximum read range in this case. At these distances the zone is quite wide, resulting in significant “forgiveness” in case of mechanical inaccuracies. This is also the distance range necessary when tags must be read on the fly.
Having access to this curve, one can calculate the maximum safe passing speed in situations where reading on the fly is necessary. In the above example, the read zone is (at least) 84 mm wide. The next parameter is the time it takes to read from the tag. This amount of time depends on the chip used to build the tag and the amount of data that must be read; the read speed can be obtained from your RFID system supplier. For example, reading 16 bytes from a tag using a Fujitsu FRAM chip takes 15.5 ms, resulting in a maximum speed of 8 m/s. At this speed, the reader ends up with exactly one chance to exchange tag data; there is no room for error whatsoever. In RFID, it is strongly suggested to include a safety factor: HF tags should use a safety margin of 3, while LF systems need a safety factor of 2. For our example above, this results in a maximum suggested safe passing speed of 2.6 m/s.
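The arithmetic above can be sketched as follows (function name ours). Note that the quoted 8 m/s raw speed together with the 15.5 ms read time implies an effective zone width of about 124 mm (8 × 0.0155 s):

```python
def passing_speeds(zone_width_m, read_time_s, safety_factor):
    """Raw and suggested safe passing speeds for reading on the fly.

    The tag must stay inside the read zone for at least one full read
    cycle; the safety factor leaves room for retries at the fringes.
    """
    v_raw = zone_width_m / read_time_s
    return v_raw, v_raw / safety_factor

# HF example from the article: ~124 mm effective zone, 15.5 ms read,
# safety factor 3 -> about 2.7 m/s (the article rounds down to 2.6).
v_raw, v_safe = passing_speeds(0.124, 0.0155, 3)
```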
buber.net > Basque > Euskara > Larry > Note 27: Vowel Loss
For security reasons, user contributed notes have been disabled.
Note 27: Vowel Loss
by Larry Trask
The rules about vowel loss in word-formation that I mentioned last time interact with some further rules, mainly the following.
If, in word-formation, the first element comes to end in a plosive consonant (/p t k b d g/), then that plosive is changed to /t/.
This process is often triggered by one of the vowel-loss rules. In practice, /p b/ never succeed in occurring at the end of the first element. But the others may:
<ogi> 'bread' --> *<og-> --> <ot->
    For example, <ogi> + <zara> 'basket' --> <otzara> 'bread-basket'
<begi> 'eye' --> *<beg-> --> <bet->
    For example, <begi> + <ile> 'hair' --> <betile> 'eyelash'
    And <begi> + <sein> 'child' --> <betsein> 'pupil of the eye'
<erdi> 'half', 'middle' --> *<erd-> --> <ert->
    For example, <erdi> + <-ain> suffix --> <ertain> 'medium'
Now, if, in this circumstance, the second element also begins with a plosive, then further things happen:
The /t/ resulting from the rule above changes /b d g/ to /p t k/, and then the /t/ disappears.
<ogi> + <-gin> 'maker' --> *<og-gin> --> *<ot-gin> --> *<ot-kin> --> <okin> 'baker'
<begi> + <gain> 'top' --> *<beg-gain> --> *<bet-gain> --> *<bet-kain> --> <bekain> 'eyebrow'
<begi> + <buru> 'head' --> *<beg-buru> --> *<bet-buru> --> *<bet-puru> --> <bepuru> 'eyebrow'
<bat> 'one' + <-kar> suffix --> *<bat-kar> --> <bakar> 'sole, lone'
<errege> 'king' + <bide> 'road' --> *<erreg-bide> --> <erret-bide> --> *<erret-pide> --> <errepide> 'highway'
In this last case, the intermediate form <erret bide> is actually recorded in the Fuero General of Navarra. In most other cases, the intermediate forms are not recorded.
Note also cases like <bat-batean> --> <bapatean> 'suddenly', illustrating the same process.
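Trask's two rules can be turned into a toy Python sketch. The vowel-loss step here is a simplified assumption (drop a final vowel that follows a plosive), not his full rule set from the previous note:

```python
PLOSIVES = set("ptkbdg")
DEVOICE = {"b": "p", "d": "t", "g": "k"}

def combine(first, second):
    """Apply the plosive rules to a compound (toy approximation)."""
    stem = first
    # Simplified vowel loss: drop a final vowel that follows a plosive.
    if stem[-1] in "aeiou" and stem[-2] in PLOSIVES:
        stem = stem[:-1]
    if stem[-1] in PLOSIVES:
        stem = stem[:-1] + "t"                    # final plosive -> /t/
        if second[0] in PLOSIVES:
            head = DEVOICE.get(second[0], second[0])
            return stem[:-1] + head + second[1:]  # /t/ devoices it, then drops
    return stem + second
```

This reproduces <otzara>, <betile>, <okin>, <bekain>, <bepuru>, <bakar>, and <errepide> from the examples above.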
package billtitles

import (
	"errors"
	"os"
	"sync"

	"github.com/rs/zerolog/log"
)

// Helpers to maintain a map from bill titles to bill numbers,
// backed by a JSON titles file on disk.

// AddTitle ensures a title key exists in the map.
func AddTitle(titleMap *sync.Map, title string) (*sync.Map, error) {
	titleMap.LoadOrStore(title, make([]string, 0))
	return titleMap, nil
}

// RemoveTitle deletes a title key from the map.
func RemoveTitle(titleMap *sync.Map, title string) (*sync.Map, error) {
	titleMap.Delete(title)
	return titleMap, nil
}

// GetBillnumbersByTitle returns the bill numbers stored for a title.
func GetBillnumbersByTitle(titleMap *sync.Map, title string) (billnumbers []string, err error) {
	results, ok := titleMap.Load(title)
	if !ok {
		return nil, errors.New("title not found")
	}
	return results.([]string), nil
}

// AddBillNumbersToTitle appends bill numbers to a title, de-duplicating entries.
func AddBillNumbersToTitle(titleMap *sync.Map, title string, billnumbers []string) (*sync.Map, error) {
	if titleBills, loaded := titleMap.LoadOrStore(title, billnumbers); loaded {
		merged := RemoveDuplicates(append(titleBills.([]string), billnumbers...))
		titleMap.Store(title, merged)
	}
	return titleMap, nil
}

// LoadTitlesMap reads the titles map from a JSON file, falling back to TitlesPath.
func LoadTitlesMap(titlePath string) (*sync.Map, error) {
	titleMap := new(sync.Map)
	if _, err := os.Stat(titlePath); os.IsNotExist(err) {
		titlePath = TitlesPath
		if _, err := os.Stat(titlePath); os.IsNotExist(err) {
			return titleMap, errors.New("titles file not found")
		}
	}
	log.Debug().Msgf("Path to JSON file: %s", titlePath)
	titleMap, err := UnmarshalTitlesJsonFile(titlePath)
	if err != nil {
		return nil, err
	}
	return titleMap, nil
}

// SaveTitlesMap writes the titles map to a JSON file.
func SaveTitlesMap(titleMap *sync.Map, titlePath string) (err error) {
	jsonFile, err := os.Create(titlePath)
	if err != nil {
		return err
	}
	defer jsonFile.Close()
	jsonByte, err := MarshalJSONStringArray(titleMap)
	if err != nil {
		return err
	}
	if _, err = jsonFile.Write(jsonByte); err != nil {
		return err
	}
	return nil
}

// MakeSampleTitlesFile writes the first few titles to a sample file.
func MakeSampleTitlesFile(titleMap *sync.Map) {
	defer func() { log.Info().Msg("done making samples file") }()
	sampleTitles := new(sync.Map)
	count := 0
	titleMap.Range(func(key, value interface{}) bool {
		count += 1
		if count > 4 {
			log.Debug().Msgf("Returning 'false' from range loop")
			return false
		}
		log.Debug().Msgf("Adding a title")
		sampleTitles.Store(key.(string), value.([]string))
		return true
	})
	if err := SaveTitlesMap(sampleTitles, "data/sampletitles.json"); err != nil {
		log.Error().Err(err).Msg("failed to save sample titles file")
	}
}
MLCommons Launches and Unites 50+ Global Technology and Academic Leaders in AI and Machine Learning to Accelerate Innovation in ML
Engineering consortium to deliver industry-wide benchmarks, best practices and datasets to speed computer vision, natural language processing, and speech recognition development for all
SAN FRANCISCO - December 3, 2020 -- Today, MLCommons®, an open engineering consortium, launches its industry-academic partnership to accelerate machine learning innovation and broaden access to this critical technology for the public good. The non-profit organization, initially formed as MLPerf, now boasts a founding board that includes representatives from Alibaba, Facebook AI, Google, Intel, and NVIDIA, as well as Professor Vijay Janapa Reddi of Harvard University, and a broad range of more than 50 founding members. The founding membership includes over 15 startups and small companies that focus on semiconductors, systems, and software from across the globe, as well as researchers from universities such as U.C. Berkeley, Stanford, and the University of Toronto.
MLCommons will advance development of, and access to, the latest AI and Machine Learning datasets and models, best practices, benchmarks and metrics. The intent is to enable access to machine learning solutions such as computer vision, natural language processing, and speech recognition for as many people as possible, as fast as possible.
“MLCommons has a clear mission - accelerate Machine Learning innovation to ‘raise all boats’ and increase positive impact on society,” said Peter Mattson, President of MLCommons. “We are excited to build on MLPerf and extend its scope and already impressive impact, by bringing together our global partners across industry and academia to develop technologies that benefit everyone.”
“Machine Learning is a young field that needs industry-wide shared infrastructure and understanding,” said David Kanter, Executive Director of MLCommons. “With our members, MLCommons is the first organization that focuses on collective engineering to build that infrastructure. We are thrilled to launch the organization today to establish measurements, datasets, and development practices that will be essential for fairness and transparency across the community.”
Today’s launch of MLCommons in partnership with its founding members will promote global collaboration to build and share best practices - across industry and academia, software and hardware, from nascent startups to the largest companies. For example, MLCube enables researchers and developers to easily share machine learning models to ensure portability and reproducibility across a wide range of infrastructure, so that innovations can be easily adopted and fuel the next wave of technology.
MLCommons will focus on:
- Benchmarks and Metrics - that deliver transparency and a level playing field for comparing ML systems, software, and solutions, e.g. MLPerf™, the industry-standard for machine learning training and inference performance.
- Datasets and Models - that are publicly available and can form the foundation for new capabilities and AI applications, e.g. People’s Speech, the world’s largest public speech-to-text dataset.
- Best Practices - e.g. MLCube™, a set of common conventions that enables open and frictionless sharing of ML models across different infrastructure and between researchers and developers around the globe.
Benchmarks and Best Practices Align Industry and Research to Drive AI Forward
The opportunities to apply Machine Learning to benefit everyone are endless; from communication, to healthcare, to making driving safer. To foster the ongoing development, implementation, and sharing of Machine Learning and AI technologies, and to measure progress on quality, speed, and reliability, the industry requires a universally agreed upon set of best practices and metrics.
MLCommons is focused on building these tools for the entire ML community. A cornerstone asset within MLCommons is MLPerf, the industry standard ML benchmark suite that measures full system performance for real applications. With MLPerf, MLCommons is promoting industry wide transparency and making like-for-like comparisons possible.
Public Datasets that Accelerate Innovation and Accessibility
Machine Learning and AI require high quality datasets, as they are foundational to the performance of new capabilities. To accelerate innovation in ML, MLCommons is committed to the creation of large-scale, high-quality public datasets that are shared and made accessible to all.
An early example of such an initiative for MLCommons is People’s Speech, the world's largest public speech-to-text dataset in multiple languages that will enable better speech-based assistance. MLCommons has collected more than 80,000 hours of speech with the goal of democratizing speech technology. With People’s Speech, MLCommons will create opportunities to extend the reach of advanced speech technologies to many more languages and help to offer the benefits of speech assistance to the entire world population rather than confining it to speakers of the most common languages.
MLCommons is an open engineering consortium with a mission to accelerate machine learning innovation, raise all boats and increase its positive impact on society. The foundation for MLCommons began with the MLPerf benchmark in 2018, which rapidly scaled as a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. In collaboration with its 50+ founding member partners - global technology providers, academics and researchers, MLCommons is focused on collaborative engineering work that builds tools for the entire machine learning industry through benchmarks and metrics, public datasets and best practices.
The MLCommons founding members are from leading companies, including Advanced Micro Devices, Inc., Alibaba Co., Ltd., Arm Limited & Its Subsidiaries, Baidu Inc., Cerebras Systems, Centaur Technology, Inc., Cisco Systems, Inc., Ctuning Foundation, Dell Technologies, d-Matrix Corp., Facebook AI, Fujitsu Ltd, FuriosaAI, Inc., Gigabyte Technology Co., LTD., Google LLC, Grai Matter Labs, Graphcore Limited, Groq Inc., Hewlett Packard Enterprise, Horizon Robotics Inc., Inspur, Intel Corporation, Kalray, Landing AI, MediaTek, Microsoft, Myrtle.ai, Neuchips Corporation, Nettrix Information Industry Co., Ltd., Nvidia Corporation, Qualcomm Technologies, Inc., Red Hat, Inc., SambaNova Systems, Samsung Electronics Co., Ltd, Shanghai Enflame Technology Co., Ltd, Syntiant Corp., Tenstorrent Inc., VerifAI Inc., VMind Technologies, Inc., Xilinx, Guangdong Oppo Mobile Telecommunications Corp., Ltd (Zeku Technology (Shanghai) Corp. Ltd.) and researchers from the following institutions: Harvard University, Indiana University, Stanford University, University of California, Berkeley, University of Toronto, and University of York. Additional MLCommons membership at launch includes LSDTech.
Memory leaks in requests library
I noticed a very large increase in memory usage when retrieving a PDF file using the requests library. The file itself is ~4 MB, but the physical memory allocated to the Python process increases by more than 150 MB!
Is anyone aware of the possible causes (and maybe fixes) of such behavior?
This is the test case:
import requests,gc
def dump_mem():
s = open("/proc/self/status").readlines()
for line in s:
if line.startswith("VmRSS"):
return line
Below is the output I got in the interpreter.
>>> gc.collect()
0
>>> dump_mem()
'VmRSS:\t 13772 kB\n'
>>> gc.collect()
0
>>> r = requests.get('http://www.ipd.uni-karlsruhe.de/~ovid/Seminare/DWSS05/Ausarbeitungen/Seminar-DWSS05')
>>> gc.collect()
5
>>> dump_mem()
'VmRSS:\t 20620 kB\n'
>>> r.headers['content-length']
'4089190'
>>> dump_mem()
'VmRSS:\t 20628 kB\n'
>>> gc.collect()
0
>>> c = r.content
>>> dump_mem()
'VmRSS:\t 20628 kB\n'
>>> gc.collect()
0
>>> t = r.text
>>> gc.collect()
8
>>> dump_mem()
'VmRSS:\t 182368 kB\n'
Obviously I shouldn't try to decode a pdf file as text. But what is the cause of such behavior anyway?
As you mention, it seems strange to read it as text. Did you try using read instead of readlines, and str.index instead of str.startswith? If it doesn't exhibit the same behaviour, then the problem is not in requests.
@goncalopp: the dump_mem function is only used to display memory usage, and is clearly not the cause of the memory increase.
When no charset parameter is included in the content type and the response is not a text/ mimetype, then a character detection library is used to determine the codec.
By using response.text you triggered this detection, loading the library, and its modules include some sizable tables.
Depending on the exact version of requests you have installed, you'll find that sys.modules['requests.packages.chardet'] or sys.modules['requests.packages.charade'] is now present, together with around 36 sub-modules, where it wasn't before you used r.text.
As the detection runs, a number of objects are created that apply various statistical techniques to your PDF document, as detection fails to hit on any specific codec with enough certainty. To fit all this in memory, Python requests more memory from your OS. Once the detection process is complete, that memory is freed within the Python process, but it is not returned to the OS immediately; this prevents wild memory churn, since processes can easily request and free memory in cycles.
Note that you also added the result of r.text to your memory, bound to t. This is a Unicode text object, which in Python 2 takes up between 2 and 4 times as much memory as the bytestring object. The specific download you have there is nearly 4 MB as a bytestring, but if you are using a UCS-4 Python build, then the resulting Unicode value adds another 16MB just for the decoded value.
So is the spectacular memory increase due to the decoding tables in the character detection libraries? If that were true, the same memory blow-up would happen when I use r.text on simple HTML pages, wouldn't it?
@Eugen: The detectors are loaded on demand; by feeding in a large PDF file you simply temporarily ballooned the memory footprint. The OS doesn't immediately take all that memory back.
Also, the garbage collector isn't going to collect the content and text bindings that were created. I find it hard to believe chardet's tables are this large on their own. Keep in mind also that r.content is cached and so just doing del c won't be enough. Also if the pdf is large enough, t will not be gc'd.
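The "loaded on demand" claim is easy to check for yourself by looking at sys.modules before and after touching r.text. A small sketch (the module names listed are assumptions that vary with the installed requests version):

```python
import sys

def charset_detector_loaded():
    """Return True once a charset-detection module has been imported."""
    # Module names vary by requests version: older releases vendored
    # chardet/charade under requests.packages; newer ones depend on
    # chardet or charset_normalizer directly. These names are assumptions.
    candidates = (
        "requests.packages.chardet",
        "requests.packages.charade",
        "chardet",
        "charset_normalizer",
    )
    return any(name in sys.modules for name in candidates)
```

Call it before and after evaluating r.text on a binary response; it should flip from False to True as the detector gets pulled in.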
|
STACK_EXCHANGE
|
It would seem to me that the mouse only moves the cursor; it is not in charge of the visibility and shape of the cursor.
Unless your problem is the cursor jumping to some random part of the screen making it difficult to find. That might be a mouse problem.
Otherwise I do not see how the mouse would make your cursor disappear.
Before spending any money, I would suggest digging up any old generic USB mouse, or borrow from a friend, just to see if the cursor disappears for that mouse. If it does, then it is not the mouse, and you are looking at a software issue.
When it comes to pointing devices, I personally prefer the Magic Trackpad along with BetterTouchTool (a free app) to add more gestures to its operations. Much better than a two-button scroll-wheel mouse, in my opinion. Your mileage may vary.
I agree with Bob. A cursor disappearing likely has nothing to do with the mouse.
The cursor changes shape quite often as you scroll across a page, pointer, i-beam, hyperlink finger, etc. There is likely something interfering with the redraw when it changes shape. I've used Lion and Mountain Lion on three different Macs, and my daughters both have used Lion and Mountain Lion, and neither report any problems with the cursor disappearing. But, the only "mouse" I use predominantly is a Magic Trackpad. I do have a wireless Microsoft Mouse and Keyboard attached to a Mini, but I don't use the mouse on that. I had periodically attached cheap Dell and Microsoft corded mice to the laptops. No issues on any of them.
That other long thread you participated in had a lot of Adobe references, and someone thought they had it down to Java. Adobe, like Microsoft, likes to roll their own interface instead of using the built-in frameworks. If you use a lot of Adobe apps, it could be a conflict between Adobe's software and some other program. If it were purely Adobe, there'd be a lot more people with that problem than the 30 to 40 on that thread.
The other "thread" in that discussion is the fact that many of them are using Adobe products, likely for digital design. That begs the question, "what other software is installed that most would use to assist the digital designer." Even if you are not a digital designer, you might have stumbled across the same piece of software.
My mouse cursor does often jump to a random part of the screen. When running InDesign, the cursor moving off-screen causes the whole document to scroll to a blank part of the Pasteboard, so I lose my cursor position, and sometimes an object that was selected at the same time disappears off-Pasteboard.
For normal typing, the text cursor stops moving as I continue typing, so I lose text if I am not looking at the screen at the time. When the cursor stops, the contents of the window (at least I think it is the contents of the window rather than the window itself) moves slightly to the right and back again, then when the cursor re-appears it moves slightly again. This happens in all applications: InDesign, Word, Safari web pages and (I think) TextEdit. About every 30 minutes or so, sometimes longer. When I installed M. Lion it was by the erase-the-HD-and-install method.
The mouse problem is not correlated with the charge level of its battery.
I will try again to use my Kingston optical mouse (it seems to use generic Apple software). (It lacks many features, of course.)
I also use the Trackpad in parallel to the Magic Mouse and it never gives any trouble like the above, although when I barely touch it, it sometimes clears the Desktop (as with the three-finger spread gesture), or displays all windows unstacked (as with the three-finger upward gesture), or zooms the screen.
I did not replace my Java as that long thread suggested for fear of messing things up.
In addition to using InDesign I use Photoshop.
I will investigate the wireless Microsoft Mouse.
I don't have any dodgy software on my Mac nor plugins, except Default Folder and SnapZPro, both reputable programs, as a precaution against glitches.
I have just thought of an interesting feature. When I erased and installed M. Lion, then the Adobe and Microsoft software, the pointer behaved OK for a time; then the same glitches as in Lion reappeared, increasing in frequency after about three months.
It is disappointing that the Apple Genius Bar was unable to solve this long-standing problem, even after doing tests overnight. I have taught myself to grin and bear it, and to fall somewhat out of love with the Mac.
PS You will notice that this message has white on it!
|
OPCFW_CODE
|
Traditional iPods are accessed just like a normal USB storage device containing a vfat file system (in rare cases hfsplus); see the USB storage devices article for detailed instructions. If udisks2 is installed, it will mount an attached iPod to /run/media/$USER/iPod_name. If the volume label of the iPod is long, or contains a mixture of spaces and/or lower-case and capital letters, it may present an inconvenience. You may easily change the volume label for more expedient access using dosfslabel from the dosfstools package, where /dev/sdxx is the current device node of your iPod; get and confirm the current volume label first. As the Arch kernel is built with no support for the hfsplus filesystem, you might want to restore such an iPod using iTunes on Windows.

To manage the iPod with Amarok or gtkpod, you need to set it up so that libgpod is able to find its FireWire ID. Enter the serial number of the iPod on the website http://ihash.marcansoft.com/; once you have that number, create or edit /mnt/ipod/iPod_Control/Device/SysInfo. To be able to use the iPod Nano with libgpod, a SysInfoExtended file also needs to be placed in the directory /mnt/ipod/iPod_Control/Device/. By default libgpod does not seem to be able to synchronize with a 6th-generation iPod Nano. After this, the easiest way to properly initialise a few things on the device's side is with the iPod convenience script, available in the AUR as ipod-convenienceAUR [broken link: package not found]. For the 4th-generation iPod Shuffle, simply copy MP3 files onto the device (sub-folders are allowed too) and run the Python-based command line tool ipod-shuffle-4gAUR.

By default, neither the iPhone nor the iPod Touch present mass storage capability over USB, though there is a solution for accessing your files. Install the libimobiledevice libraries and optionally the ifuseAUR mounting utility. usbmuxd is required to make connections to iOS devices at all; systemd comes with a udev rule to automatically start and stop this daemon, so no user interaction is required. Next do modprobe fuse to actually load the fuse module. You can now mount your device. Applications which use GVFS, such as some file managers (GNOME Files, Thunar) or media players (Rhythmbox), can interact with iOS devices after installing the gvfs-afc and gvfs-gphoto2 packages; after installing ifuse, for instance, you should see your iPhone appear in the left navigation of GNOME Files and other supporting file managers. Restarting the file manager or application might be needed, or unplug the device and plug it back in. Application documents are not included in the iPhone's media directory and are mounted separately. You can move photos and videos out of /DCIM/100APPLE; however, you need to trigger a rebuild of the "Camera Roll" database by deleting the old databases. Since firmware version 2.0, Apple has obfuscated the music database. If you are using recent firmware, the file /System/Library/Lockdown/Checkpoint.xml can be modified to enable use of the older, non-obfuscated database.

To encode video for these devices, HandBrake has extremely comprehensive configuration support and can produce 5G iPod or iPod Touch/iPhone-compatible video output; CLI and GTK versions are available with the handbrake-cli and handbrake packages respectively. If you do decide to take the CLI way, a good guide is available at http://trac.handbrake.fr/wiki/CLIGuide [dead link 2020-03-29]. Note that the creation date metadata is not carried over into the converted video, so you need a script to restore it, and use cp -a or rsync -t in order to preserve the file's date & time.

For tethering, you can create a netcfg network profile to allow easy tethering from the command line, without requiring Blueman or GNOME. Assuming an already paired iPhone with address '00:00:DE:AD:BE:EF', simply create a profile in /etc/network.d called, for example, 'tether'.

For jailbreaking, checkra1n is based on the checkm8 bootrom exploit released by axi0mx; it supports iOS 12.0 and newer and is still under development in early stages.

This page was last edited on 19 August 2020, at 10:07. Content is available under the GNU Free Documentation License 1.3 or later (https://wiki.archlinux.org/index.php?title=IOS&oldid=632552). Copyright © 2002-2021 Judd Vinet, Aaron Griffin and Levente Polyák.
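The mounting workflow described above can be sketched as a short command sequence (the mountpoint is arbitrary; this assumes libimobiledevice and ifuse are installed, usbmuxd is running, and the device has already been paired):

```shell
# Sketch only: mount an iPhone's media directory with ifuse.
mkdir -p ~/iphone
ifuse ~/iphone              # mount the media partition
ls ~/iphone/DCIM            # photos and videos live under DCIM/
fusermount -u ~/iphone      # unmount when finished
```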
|
OPCFW_CODE
|
get long message in Spyder when I used SymPy
Description
What steps will reproduce the problem?
Hello! I used SymPy 1.0 in the Spyder Console and now on every input I get this long message: Output from spyder call 'get_namespace_view':
C:\Users\rwden\AppData\Local\Programs\Spyder\pkgs\sympy\deprecated\class_registry.py:38: SymPyDeprecationWarning:
C, including its class ClassRegistry, has been deprecated since SymPy
1.0. It will be last supported in SymPy version 1.0. Use direct
imports from the defining module instead. See
https://github.com/sympy/sympy/issues/9371 for more info.
deprecated_since_version='1.0').warn(stacklevel=2)
How can I prevent this message from appearing every time I do an input? Thank you. I am new to Spyder and it is great!
If you can't wait for SymPy 1.8, you can see whether you can follow the instructions in the link you posted.
The same question. "How can I prevent this message from appearing every time I do an input? "
Hi @bob5280, @Dan-Patterson and @appomsk
Can you guys please open a Sympy console and see if the message persists? You can open one in the hamburger menu of the console > New special console > New Sympy console.
Please let me know and thanks for reporting!
Hi, @steff456
Yes. When I open Sympy console I see:
Python 3.9.2 -- IPython
Output from spyder call 'get_namespace_view':
C:\Users\yazu\AppData\Local\Programs\Python\Python39\lib\site-packages\sympy\deprecated\class_registry.py:33: SymPyDeprecationWarning:
C, including its class ClassRegistry, has been deprecated since SymPy
1.0. It will be last supported in SymPy version 1.0. Use direct
imports from the defining module instead. See
https://github.com/sympy/sympy/issues/9371 for more info.
SymPyDeprecationWarning(
And then this message appears after every command.
My environment:
Windows 10
Python 3.9.2 (from python.org)
Sympy 1.7.1 (installed with pip)
Spyder 4.2.3 (installed with windows installer)
isympy in windows terminal works without any problem.
This is fixed in SymPy 1.8, which was released some time ago. If you update to the latest SymPy with
pip install -U sympy
then you will not see this warning any more.
Given an issue like this, though, I'm not sure what the best way is to deprecate something without warnings showing up from introspective tooling like IPython.
@oscarbenjamin did you fix your issue with the upgrade to SymPy 1.8?
I haven't personally noticed this issue (I don't use Spyder) but as a maintainer of SymPy I can tell you that this warning will not be seen any more. The ClassRegistry and all associated code including the code that emits the DeprecationWarning in the OP was fully removed from the SymPy codebase in https://github.com/sympy/sympy/pull/20896
|
GITHUB_ARCHIVE
|
The Brave Last Days of Windows XP
If you've been to a grocery store--and it's hard to imagine that you haven't--then you've seen the tabloid headlines. Some Hollywood star, usually a washed-up sap whose fame flickered out 20 years ago, sadly succumbs to some awful affliction, and the tabloids at the supermarket checkout counter chronicle his or her journey into that good night.
Almost inevitably, the tabs use the headline "Brave Last Days." Presumably, anybody who was ever even mildly famous is "brave" as the clock winds down. But we digress...and this is getting a little morbid. The reason we bring up the "brave last days" meme, though, is because we're reaching that point for an old, trusted friend: Windows XP.
Oh, sure, XP is alive and well on my netbook and on the PCs of the majority of computer users. It ain't dead yet, you might be thinking (presumably in a Texas accent, which was what I just used to enunciate that sentence). No, it ain't dead yet. But it is dying. Microsoft is killing it softly.
This week, Redmond magazine columnist Mary Jo Foley revealed that the Windows Live Wave 4 application suite will join IE9 in not supporting XP. There will be more stories like this, obviously, as Microsoft rolls out new product and initiatives.
And Microsoft has to do this. It's not just because of profits or market share, although those are obviously big factors in Windows XP's forthcoming demise. It's also because Windows XP is nearly a decade old, and it really won't be able to handle some of the products Microsoft is about to release. One Windows expert told RCPU in passing a few months ago that Windows XP is a child's toy compared to Windows 7, or something to that effect. Having now used Windows 7, we believe it.
Migration to Windows 7 is a matter of time at this point. OK, so Microsoft didn't provide an upgrade path from XP. That was a mistake. But plenty of third parties are stepping in to fill that breach now. Partners, your customers can either get a jump on everybody else by taking advantage of everything Microsoft offers with Windows 7, or they can stagnate with XP until the old OS is finally pretty much useless. The evidence behind that statement is just going to get stronger, no matter how courageously old XP faces extinction. These are XP's Brave Last Days; it's time to fondly remember the old OS and move on.
We've had some good e-mails about this, but we want more: What's your take on leaving XP and moving to Windows 7? Send your thoughts to [email protected].
Posted by Lee Pender on March 29, 2010
|
OPCFW_CODE
|
Intermediate System to Intermediate System (IS-IS, also written ISIS) is a routing protocol designed to move information efficiently within a computer network, a group of physically connected computers or similar devices. It accomplishes this by determining the best route for data through a packet switching network.
The IS-IS protocol is defined in ISO/IEC 10589:2002 as an international standard within the Open Systems Interconnection (OSI) reference design. The Internet Engineering Task Force (IETF) republished IS-IS in RFC 1142, but that RFC was later marked as historic by RFC 7142 because it republished a draft rather than a final version of the ISO (International Organization for Standardization) standard, causing confusion.
IS-IS has been called "the de facto standard for large service provider network backbones."
IS-IS is an interior gateway protocol, designed for use within an administrative domain or network. This is in contrast to exterior gateway protocols, primarily Border Gateway Protocol (BGP), which is used for routing between autonomous systems (RFC 1930).
IS-IS is a link-state routing protocol, operating by reliably flooding link state information throughout a network of routers. Each IS-IS router independently builds a database of the network's topology, aggregating the flooded network information. Like the OSPF protocol, IS-IS uses Dijkstra's algorithm for computing the best path through the network. Packets (datagrams) are then forwarded, based on the computed ideal path, through the network to the destination.
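The shortest-path-first computation each router runs over its topology database is ordinary Dijkstra. A toy sketch (the graph structure and costs are invented for illustration, not an IS-IS data format):

```python
import heapq

def spf(graph, source):
    """Dijkstra shortest-path-first over {node: {neighbor: cost}}.

    Returns the lowest total cost from `source` to every reachable node,
    mirroring how a link-state router derives best paths from its
    flooded topology database.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for nbr, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist
```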
The IS-IS protocol was developed by a team of people working at Digital Equipment Corporation as part of DECnet Phase V. It was standardized by the ISO in 1992 as ISO 10589 for communication between network devices that are termed Intermediate Systems (as opposed to end systems or hosts) by the ISO. The purpose of IS-IS was to make possible the routing of datagrams using the ISO-developed OSI protocol stack called CLNS.
IS-IS was developed at roughly the same time that the Internet Engineering Task Force (IETF) was developing a similar protocol called OSPF. IS-IS was later extended to support routing of datagrams in the Internet Protocol (IP), the Network Layer protocol of the global Internet. This version of the IS-IS routing protocol was then called Integrated IS-IS (RFC 1195).
IS-IS adjacency can be either broadcast or point-to-point.
Both IS-IS and Open Shortest Path First (OSPF) are link-state protocols, and both use the same Dijkstra algorithm for computing the best path through the network. As a result, they are conceptually similar. Both support Classless Inter-Domain Routing, can use multicast to discover neighboring routers using hello packets, and can support authentication of routing updates.
OSPF was natively built to route IP and is itself a protocol that runs on top of IP, and OSPFv2 is only able to build IPv4 routing tables. IS-IS is an OSI Layer 3 protocol initially defined for routing CLNS. However, IS-IS is neutral regarding the type of network addresses for which it can route, and was easily extended to support IPv4 routing, using mechanisms described in RFC 1195, and later IPv6 as specified in RFC 5308. To operate with IPv6 networks, the OSPF protocol was rewritten in OSPF v3 (as specified in RFC 5340).
Both OSPF and IS-IS routers build a topological representation of the network. This map indicates the subnets which each IS-IS router can reach, and the lowest-cost (shortest) path to a subnet is used to forward traffic.
IS-IS differs from OSPF in the way that "areas" are defined and routed between. IS-IS routers are designated as being: Level 1 (intra-area); Level 2 (inter area); or Level 1–2 (both). Routing information is exchanged between Level 1 routers and other Level 1 routers of the same area, and Level 2 routers can only form relationships and exchange information with other Level 2 routers. Level 1–2 routers exchange information with both levels and are used to connect the inter area routers with the intra area routers.
In OSPF, areas are delineated on the interface such that an area border router (ABR) is actually in two or more areas at once, effectively creating the borders between areas inside the ABR, whereas in IS-IS area borders are in between routers, designated as Level 2 or Level 1–2. The result is that an IS-IS router is only ever a part of a single area.
IS-IS also does not require Area 0 (Area Zero) to be the backbone area through which all inter-area traffic must pass. The logical view is that OSPF creates something of a spider-web or star topology of many areas all attached directly to Area Zero, whereas IS-IS creates a logical topology of a backbone of Level 2 routers with branches of Level 1–2 and Level 1 routers forming the individual areas.
IS-IS also differs from OSPF in the methods by which it reliably floods topology and topology change information through the network. However, the basic concepts are similar.
OSPF has a larger set of extensions and optional features specified in the protocol standards. However, IS-IS is easier to expand: its use of TLV data allows engineers to implement support for new techniques without redesigning the protocol. For example, in order to support IPv6, the IS-IS protocol was extended to support a few additional TLVs, whereas OSPF required a new protocol draft (OSPFv3). In addition, IS-IS is less "chatty" and can scale to support larger networks. Given the same set of resources, IS-IS can support more routers in an area than OSPF. This has contributed to the adoption of IS-IS as an ISP-scale protocol.
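The TLV encoding behind this extensibility is simple: each option is a one-byte type, a one-byte length, and a value, so a parser can skip types it does not understand. A toy sketch of the idea (not a real IS-IS PDU parser):

```python
def parse_tlvs(data: bytes):
    """Split a byte string into (type, value) pairs.

    Each entry is: 1-byte type, 1-byte length, then `length` bytes of value.
    Unknown types are carried along just as easily as known ones, which is
    why new TLVs can be added without redesigning the protocol.
    """
    tlvs = []
    i = 0
    while i + 2 <= len(data):
        t, length = data[i], data[i + 1]
        tlvs.append((t, data[i + 2 : i + 2 + length]))
        i += 2 + length
    return tlvs
```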
The TCP/IP implementation, known as "Integrated IS-IS" or "Dual IS-IS", is described in RFC 1195.
IS-IS is also used as the control plane for IEEE 802.1aq Shortest Path Bridging (SPB). SPB allows for shortest-path forwarding in an Ethernet mesh network context utilizing multiple equal cost paths. This permits SPB to support large Layer 2 topologies, with fast convergence, and improved use of the mesh topology. Combined with this is single point provisioning for logical connectivity membership. IS-IS is therefore augmented with a small number of TLVs and sub-TLVs, and supports two Ethernet encapsulating data paths, 802.1ad Provider Bridges and 802.1ah Provider Backbone Bridges. SPB requires no state machine or other substantive changes to IS-IS, and simply requires a new Network Layer Protocol Identifier (NLPID) and set of TLVs. This extension to IS-IS is defined in the IETF proposed standard RFC 6329.
|
OPCFW_CODE
|
Node.js is used by more than 1.3% of the websites we know about, i.e., about 20 million websites. The main reason for its popularity among developers is its ability to handle multiple requests at the same time instead of creating a long queue. Node.js hit the milestone of 1 billion downloads in 2018 – in just nine years since its launch.
A Node.js development company uses Node.js to create tools and apps to be used on browsers and applications.
How to Choose The Best Node.js Framework
Node.js frameworks are of three types: MVC, Full-stack MVC, and REST API:
An MVC framework works on three parameters: Model, View, and Controller. It handles complex projects very efficiently. A full-stack MVC framework is usually used when building a real-time app because it has a larger library and range for development.
REST (Representational State Transfer) API is generally used when a Node.js application development needs to be swift.
Node.js frameworks are chosen by developers because of these two parameters:
Some Node.js frameworks are opinionated, offering conventions that guide how an application is structured. However, choosing an opinionated framework may mean getting steered in a direction that does not suit your project. This can affect the scalability of the project.
The framework's functionality includes parameters like cluster organization, management, and batch support. Different frameworks have different types of security locks, and this can be one of the deciding factors for the framework you choose.
Best Node.js Frameworks
There are a plethora of different tools developed in Node.js. The best framework for you may depend upon your application. Given below is our personally curated list of the top 10 Node.js frameworks in terms of their usage and feasibility.
Express is a minimalistic framework with straightforward coding architecture. It does not require any additional learning and can be used when developing apps with fast loading speed.
The framework includes many intelligent HTTP helpers, which can make the program reusable. It is user-friendly and does not require highly specialized knowledge. It is one of the most popular frameworks used during Node.js application development by both development teams and individual developers.
The framework allows for simplified communication between the client and the server by streamlining the request placed by the client. It is used by companies like Twitter, Uber, and Accenture.
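A minimal Express sketch of the client-server request handling described above (assumes Express has been installed with `npm install express`; the route and message are illustrative only):

```javascript
// Minimal Express app: one route, one response.
const express = require('express');
const app = express();

// Express parses the incoming request and streamlines sending
// the response back to the client.
app.get('/', (req, res) => {
  res.send('Hello from Express');
});

app.listen(3000, () => console.log('listening on port 3000'));
```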
Sails is the ideal framework if the developer wants to build a high-end customized application with specific code. Sails is based on Express.js and is a lightweight framework that can consume an API created by another development team.
Sails has reusable security policies and a code generator. This allows developers to reduce the time spent writing code and focus on other features. The framework can also work with multiple databases simultaneously, which further decreases the coding time required by developers.
Two of the most popular companies which use Sails are Lithium and Greendeck. It is preferred when developers want to build a customized application for enterprises. It may not be as flexible as Express, but it is one of the most commonly used Node.js frameworks.
Hapi is known for its minimalism and scalability. It offers fast bug-fixing and a wide range of built-in plugins that spare developers from worrying about third-party middleware. Hapi is a commercial-grade framework and is therefore often used for proxy applications.
Node.js application development with Hapi yields flexible and scalable apps. It also includes default caching and authentication protocols that can be built into the application. Hapi is usually chosen when developers want an exceedingly secure and scalable application.
Because of its security standards, it is commonly used to build social media applications.
Meteor allows developers to swiftly build web apps that transfer data smoothly between clients and servers. It also lets developers roll out upgrades to all applications simultaneously without affecting the user experience.
Meteor is compatible with all types of devices and systems, including iOS and Android on mobile as well as most web platforms, and development for all of these can be done in a single language. Meteor is most used when developers want a highly efficient application that is swift, modern, and behaves the same across all platforms.
Adonis reduces development time and provides out-of-the-box support for web sockets. The built-in modules can be used for data validation, authentication, etc. This streamlines the experience for users and increases user satisfaction.
The framework has a reputation for dependability. It is mostly used when a developer wants to build an application swiftly with minimal errors, or when the application needs to serve JSON responses.
Adonis allows full-stack development of web applications with high security.
Total.js is a framework that gives developers maximum flexibility. It is available in various versions and makes it easy to monitor an application accurately. Total.js is one of the few frameworks that offers developers a CRM-like experience.
The various versions of Total.js, such as HelpDesk and CMS, can be used to integrate more features into your applications; for example, Total.js lets developers integrate IoT applications. It is used when developers need to track an application's behavior in real time.
Total is compatible with multiple databases and has a low maintenance cost.
LoopBack is best known for creating REST APIs. It is flexible and includes a built-in client API explorer. The framework is generally used during full-stack development, and applications built with it can be flexible enough to work across different devices.
LoopBack is a common choice for newer developers because of its detailed documentation. It has extensive built-in modules for tasks such as recording data, creating emails, uploading documents, and registering users.
The drawback of LoopBack is that it is a very opinionated framework, which limits developers' creativity. The upside is that the developer can build the application quickly while still keeping it well structured.
Koa is a customizable framework that can be thought of as a lighter-weight Express. It offers more room for customization, which helps when building Node.js web applications where performance matters.
Koa handles HTTP middleware efficiently, which makes application development with it straightforward. It was created by the same team that built Express, with the explicit goal of being more minimal than Express.
Koa also allows websites to serve different types of content from the same URL — for example, a personalized experience for each visitor, or a translated version of a page. It retains the flexibility of Express while reducing the chance of coding errors.
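Koa's middleware model is built on async function composition: each middleware awaits `next()` to run the rest of the stack, then regains control on the way back out (the "onion" model). The sketch below mimics the idea behind koa-compose in plain JavaScript; it is not Koa's actual code.

```javascript
// Minimal sketch of Koa-style middleware composition. Each middleware
// receives (ctx, next); awaiting next() runs the remaining middleware
// before control returns to the caller ("onion" model).
function compose(middleware) {
  return function (ctx) {
    function dispatch(i) {
      if (i === middleware.length) return Promise.resolve();
      return Promise.resolve(middleware[i](ctx, () => dispatch(i + 1)));
    }
    return dispatch(0);
  };
}

const trace = [];
const run = compose([
  async (ctx, next) => { trace.push('a-in'); await next(); trace.push('a-out'); },
  async (ctx, next) => { trace.push('b'); ctx.body = 'hello'; },
]);

run({}).then(() => console.log(trace.join(','))); // a-in,b,a-out
```

The before/after symmetry is what makes cross-cutting concerns like timing and logging so clean in Koa: a single middleware can record state on the way in and inspect `ctx` on the way out.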
Keystone is generally used in content-heavy projects. Online editorial websites, chat forums, social media applications, and e-commerce platforms are some examples of Keystone applications. It has an intuitive admin UI and a real-time framework that can be used to manage and track updates.
This dynamic tooling makes it easy to keep track of all the data and publish updates as required. Coding in Keystone is simple and lets developers easily manage their templates, data fields, and forms.
Keystone can also be used for creating and sending email templates. Its authentication features and session-management options provide stronger security for all data, including emails and forms.
Nest has extensive libraries that make it easy to create enterprise-level applications. The framework can also boost back-end productivity because it adheres to a clean-code architecture.
Nest.js is written in TypeScript and uses progressive JavaScript, which lets developers combine object-oriented and functional-reactive programming. Node.js application development with Nest saves developers time while coding.
Nest encourages clean code that can later be scaled to the application's requirements, and it lets developers manage data without worrying about security or losing the flow.
Choosing the right Node.js framework can make a substantial difference in the final web application. Different frameworks offer different advantages that a Node.js development company can put to use. Scrutinizing the project's requirements, size, and available resources is the key to choosing the right Node.js framework for any project.
Today we are privileged to have our first guest Tech Tuesday author, Janeece Moreland, Managing Consultant/Business Process Change Agent for Conexus SG, sharing her insights into Sales Order imports with GL Distributions.
To learn more about Janeece and Conexus, please take a look at their bios at the end of the article.
The purpose of this example is to provide the technical user with the details of how to create a map with a SQL Select statement and group data appropriately on the detail map functions.
The original map provides Distribution lines, EFT imports, Post Import SQL tasks and Rolling columns. We will be exploring only one portion of the map in this article.
Problem Statement: A user has provided you with an Excel spreadsheet containing customer, item description, line amount, GL account numbers, and distribution references to be imported as invoices.
Issue: When you have one source line of data, you will need an Accounts Receivable offset account, and you must group the data into one invoice with multiple line items and multiple GL distributions.
Example: Our example today will create a Sales Invoice, Line Items, and distribution lines with distribution references. The complication in the data is the distribution lines normally sum based on the account number on the item but we are supplying the account number instead. In addition to the GL distribution, we want to provide the distribution reference which will be shown in the financial detail when the Sales Transaction is posted in the General Ledger. The Customer Number will be used to group the data on the Create Sales Transaction.
First, we will look at the ODBC connection.
The data is provided in an Excel file format but we want to control the fields that will be used. The following selections were made:
Data Source: ODBC Connection.
Connection Type: Custom Connection
Key Fields: CustNbr
Select the Drive+Path+Name of File
Our next step will be to select the data from the Excel spreadsheet.
This will be entered in the data source. An explanation of each field and its use is listed below:
The Statement accomplishes the following functions:
- TrxType – It establishes the type of transaction. The assumption is that all these transactions are invoices. Additional select statement functions can be added to create Returns if needed.
- Customer Number – used for the SOP Header and the Payment
- Invoice Date – Used for the SOP Header
- Item Number – used for the SOP lines
- Quantity – used for the SOP Line
- Item Desc – used for SOP Line
- Unit Price – used for the SOP Line
- Document Amount – used for the SOP Header, Payment and Cash Distribution
- EFT Date – used on the Payment map
- Distribution Reference – this could have been concatenated from the Item and Item Desc but the user wanted to have the flexibility to use it for additional maps. If the user types over the GP field length of 30 characters, the select will truncate the text.
- GL Account – used on the Distribution map
- DistType – used on the Distribution map
- Unit Price – used for sop line and distribution map
- Credit and Debit Amounts are used on the distribution map
- The UNION statement creates another line item that will be used for the distribution.
SELECT 3 AS TrxType, CustNbr, [Invoice Date], [Item Number], [Item Desc], [Quantity], [Unit Price], [DocumentAmount], [EFT Date], LEFT([Distribution Reference], 30) AS DRef,
[GL ACCOUNT NUMBER] AS AccountString, 1 AS DistType,
[Unit Price] AS CreditAmount, 0 AS DebitAmount
WHERE [DocumentAmount] > 0
UNION
SELECT 3 AS TrxType,
CustNbr, [Invoice Date], [Item Number], [Item Desc], [Quantity], [Unit Price], [DocumentAmount], [EFT Date], 'EFT' AS DRef,
'10700-5101-00000-00' AS AccountString, 3 AS DistType,
0 AS CreditAmount, [DocumentAmount] AS DebitAmount
WHERE [DocumentAmount] > 0
Grouping Data imports the data into the GP Sales Invoice by Customer Number based on the default set on the high level map.
MAP: Create Sales Transaction
Notice that the Group data is checked but the CustNbr is not checked. The default is the Key field from the Data Source.
MAP: Add line item
Customer Number and Item Description are selected in this map. The item description was selected since the item number does not exist in the Inventory. The items are non-inventory items. The default is the Key field from the Data Source.
About the Author:
Janeece began her consulting career in 1987 working as the IT manager for the largest independent software distributor at the time in the US, Software Spectrum. She then worked in Sales at International Business Machines (IBM). From IBM she moved to Platinum Software Corporation, now known as Epicor Software. At Platinum Software, Ms. Moreland served as a member of the SWAT consulting team covering the United States, working with difficult installations as a problem solver. She joined Oracle in 1996 as part of a sales and consulting group focused on the Utilities Industry, then moved back into sales as an Oracle Sales Manager. Now a managing partner in the Microsoft Business Applications consulting world, she manages varying projects but focuses on integrations from varying operational systems into Accounting applications and Manufacturing implementations. Her broad applications software experience includes: Financial Accounting, Financial Analysis, Project Accounting (Costing), Distribution and Manufacturing Planning systems.
To contact Janeece, please email firstname.lastname@example.org.
About Conexus SG:
Conexus SG specializes in financial systems and process consulting. A significant portion of their business revolves around the implementation, support, enhancement and upgrade of Microsoft ERP (accounting) software such as Dynamics GP (formerly Great Plains), Dynamics AX and Dynamics SL (formerly Solomon). To learn more about Conexus SG, visit their website: www.conexussg.com.
[Libreoffice] gcc/g++ compilation issue in desktop/splash
michael.meeks at novell.com
Wed Aug 24 10:06:44 PDT 2011
Sorry for the delayed reply ...
On Mon, 2011-08-22 at 12:08 -0400, Kevin Hunter wrote:
> I haven't seen a fix go by, and have seen nothing mentioned on the list
> regarding building the splash part of desktop, but I'm having an issue
> that appears to be solved by switching to g++ instead of gcc:
Hmm - I don't really understand that I must confess.
> Making: oosplash
> ccache /usr/local/bin/gcc -Wl,-z,noexecstack -Wl,-z,combreloc
> undefined reference to
> collect2: ld returned 1 exit status
> dmake: Error code 1, while making '../../unxlngx6/bin/oosplash'
So - I have:
/opt/icecream/bin/gcc -Wl,-z,noexecstack -Wl,-z,combreloc -Wl,-z,defs
-Wl,-rpath-link,../../unxlngi6.pro/lib:/data/opt/libreoffice/core/solver/350/unxlngi6.pro/lib:/lib:/usr/lib -L../../unxlngi6.pro/lib -L../lib -L/data/opt/libreoffice/core/solenv/unxlngi6/lib -L/data/opt/libreoffice/core/solver/350/unxlngi6.pro/lib -L/data/opt/libreoffice/core/solenv/unxlngi6/lib ../../unxlngi6.pro/obj/splashx.o
-lpthread -Wl,--as-needed -lXext -lX11 -Wl,--no-as-needed -luno_sal
-lpng14 -lXinerama -o ../../unxlngi6.pro/bin/oosplash
Which works fine here at least. And I have no list_node_base related
symbol at all exported from libuno_sal.so - odd.
> By executing that line manually and switching to /usr/local/bin/g++ the
> compile is successful. And that point I can restart the build and LO
> finishes with a successful build.
Which is indeed odd.
> It looks like the source of those files is C, but the libuno_sal is a
> .cpp file.  I'm not clear on the linking rules between the C and CPP,
> but given that no one else is having this issue, is there something else
> that I'm missing?
So - libuno_sal is a C++ library, certainly - but surely we should be
able to link it without any magic.
I wonder what changed there ? libuno_sal.so - clearly does have a
number of C++ exports it requires (objdump -T shows):
00000000 DF *UND* 00000000 GLIBCXX_3.4 _ZSt20__throw_length_errorPKc
But then again, it links to libstdc++
$ ldd ../sal/unxlngi6.pro/lib/libuno_sal.so
linux-gate.so.1 => (0xffffe000)
libdl.so.2 => /lib/libdl.so.2 (0xb7762000)
libpthread.so.0 => /lib/libpthread.so.0 (0xb7747000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0xb7658000)
libm.so.6 => /lib/libm.so.6 (0xb762e000)
libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb760f000)
libc.so.6 => /lib/libc.so.6 (0xb74a2000)
$ objdump -T /usr/lib/libstdc++.so.6 | grep _ZSt20__throw_length_errorPKc
00055890 g DF .text 000000d5 GLIBCXX_3.4 _ZSt20__throw_length_errorPKc
At least for me ...
Can you do some more investigation of which symbol is missing from
where ? of course, failing that we can do some horror of a rename in
there, or perhaps poking at removing things like:
APP1CODETYPE = C
might help - but ... ideally we want as little junk in the splash app
as humanly possible; it'd be most ideal not to link sal at all IMHO
but ... ;-) its more work to avoid it.
michael.meeks at novell.com <><, Pseudo Engineer, itinerant idiot
More information about the LibreOffice
Are motorcycle helmets effective in preventing Corona?
Can a full-head helmet with the visor closed prevent getting infected by the Corona virus during a casual interaction with a carrier?
For example, when buying something at a store or discussing something with someone for 10-15 minutes.
How does it compare with the protection provided by a standard face mask (that can be purchased at a pharmacy)?
I don't recommend that you walk into a bank wearing a motorcycle helmet with the face shield down. ;-)
@CareyGregory good tip for life :) (pun intended)
“Feed a virus, starve a bacterial infection!” https://www.sciencedaily.com/releases/2016/09/160908130545.htm
I think it's a great idea. It protects your eyes from respiratory droplets and keeps you from touching your face. I don't know how well it would work from keeping droplet from reaching your mouth, but probably pretty well, especially if you used it with an N95 respirator. It might be a little shocking/intimidating to people, so I would only use it in extremely infected areas.
Thanks! I actually tried it. I ride a motorcycle anyway, so I just kept it on as I handled my business. So far I'm not infected (not that it proves anything). I did get a few weird looks and one pharmacist (!!) actually laughed at me when I kept the helmet on inside the (crowded!) pharmacy. But I was never one to care too much about public scorn. Or SE scorn, for that matter :)
Just say in a very deep voice "I find your lack of faith disturbing".
It helps to have a Darth Vader voice modulator as well.
I disagree with the “probably pretty well” portion of this answer. A motorcycle helmet is not sealed around the head, and the air you inhale while wearing one is not filtered at all. The protection you’d get is comparable to wearing a plastic face shield — full-face splash protection from large respiratory droplets and face touching but nothing for inhalation. This is superior to a basic surgical mask (because it prevents self-touching and ocular infection) but not at all comparable to an N95 respirator for inhaled droplets, which is very important when near someone infected with COVID-19.
(Though it seems plausible that using a properly fit-checked N95 respirator and a motorcycle helmet concurrently would work, so long as the helmet padding does not disturb the fit of the inner mask. This would be an issue if the helmet padding makes contact with the chin — I believe they normally do, but I’ve never worn one.)
It's better to use some guidelines from reputable sources rather than coming up with your own.
The WHO does not recommend masks for healthy people in most circumstances, only when directly dealing with someone infected, and in that case accompanied by careful hand washing.
Their advice for the general public as of this posting (and this is good advice year-round anyways, not only when particular pathogens are present, because influenza and viruses that cause the common cold are always present) is to wash your hands, keep a distance from people who are sneezing/coughing, avoid touching your face, and cough or sneeze into a tissue or corner of the elbow rather than into the open or your hands.
Thank for the references and information. I will read more about it. But this doesn't really answer my question...
@obe The only way to answer your question directly would be to design a study involving a coronavirus and motorcycle helmets. No one is going to do that. Can viruses travel through solid plastic? No, but that isn't necessarily relevant.
@obe Here is a link to what the CDC says about it: https://www.cdc.gov/coronavirus/2019-ncov/hcp/respirator-use-faq.html
@Sedumjoy thanks, I'll read.
@Bryan - you wrote: "The only way to answer your question would be to design a study involving a coronavirus and motorcycle helmets". This claim doesn't make sense. Has a study been done involving a coronavirus and hand washing? Has a study been done involving a coronavirus and masks used by healthy people? It's all very new and yet there are recommendations, which are presumably based on knowledge of other, similar viruses, and/or understanding of various underlying mechanisms. Maybe no one has published or considered helmets, but the question itself is as valid as asking about washing hands.
@obe Coronavirus is not new, just this particular strain. And yes, studies have been done with tools considered part of PPE for infectious disease. They do not typically do studies of infectious disease prevention for tools considered PPE for cycling.
Now to address the whole debate about using respirators and masks. You absolutely want them. There has been a narrative throughout the western world that you don't need them and that they don't work. This is not the case. The CDC and WHO tell ordinary people not to wear respirators, but both organizations have political agendas and have frankly done a terrible job handling this public health crisis. The truth is that there is a lack of respirators to go around when every single person should be wearing them. In Taiwan, the number one way to combat COVID19 is by having every single person in the population wear masks. Thus far, Taiwan has been one of the best countries in handling the COVID19 health crisis.
Now to address why N95/N99/N100/NIOSH respirators work. One of the leading pathways for transmitting this disease is through inhalation of respiratory droplets from other infected individuals. An immediate barrier that has been proven to be effective is having a mask/respirator to intercept these droplets before they reach your mucous membranes. The coronavirus is between 60 and 140 nm in diameter. Let's use the N95 mask as an example. This mask captures particulates at 300 nm at a rate of 95% or better, and 300 nm is the particulate size with the lowest capture efficiency. Anything smaller, including the coronavirus, will have an even higher capture efficiency. This virus is one of the most contagious pathogens we have had in recent human history. Any form of personal protective equipment is absolutely vital in keeping yourself from becoming infected. I would also recommend that when you are in public areas you wear eye protection, gloves, and an overcoat that you leave outside your house. I do really think the motorcycle helmet is a very creative and potentially effective way to keep yourself protected.
Interested in Joining the Lab?
The Martin lab is an interdisciplinary research group. We encourage students and postdocs with experimental or computational backgrounds to inquire about our lab. Contact Adam Martin with your CV and research interests if you are interested.
We welcome trainees of any race, nationality, biological sex, gender identity, sexual orientation, religion, parental status, physical ability, and age. For how we practice this in the lab, see lab policies.
Hannah Yevick (2015-2022) – Assistant Professor of Physics, Brandeis University
Jasmin Imran Alsous (2018-2021) – Research Scientist, Developmental Dynamics, CCB, Flatiron Institute
Yujie Li (2014-2015) – Software Development Engineer at Audible, Inc.
Soline Chanet (2012-2017) – Permanent Researcher, CRCN, CNRS, Paris, France
Jeanne Jodoin (2014-2017) – Patent Specialist at Nixon Peabody
Frank Mason (2011-2016) – Research Assistant Professor at Vanderbilt University
Anna Yeh (2017-2023) – Works at Abcam
Jaclyn Camuglia (2017-2022) – Works at Loxo Oncology
Marlis Denk-Lobnig (2016-2021) – Postdoc at University of Michigan at Ann Arbor (Kevin Wood’s lab)
Clint Ko (2015-2020) – Postdoc at Rockefeller University (Shyer lab)
Natalie Heer (2012-2018) – Senior Data Scientist at C.H. Robinson
Jonathan Coravos (2012-2017) – Private Investor at Viking Global Investor
Shicong ‘Mimi’ Xie (2012-2016) – Postdoc at Stanford University (Skotheim Lab)
Claudia Vasquez (2011-2015) – Postdoc at Stanford University (Dunn Lab)
Jennifer Nwako (2019-2020) – Graduate Student at University of North Carolina at Chapel Hill
Vardges Tserunyan (2018-2019) – Graduate student at University of Southern California (Computational Biology and Bioinformatics)
John Solitro (2017-2018) – Research Associate at Tessera Therapeutics
Mike Tworoger (2011-2017) – Laboratory Operations Coordinator at Moffitt Cancer Center and Research Institute in Tampa, FL
Elena Kingston (2014-2015) – Postdoc at CalTech
Selam Daniel Brook (2022-2023) – Undergraduate at MIT
Isias Workeneh (2022) – Undergraduate at MIT
Uzuki Horo (2021-2023) – Graduate student at Institut Pasteur, Paris, France
Prateek Kalakuntla (2019-2021) – Graduate student at Stanford University
Apolonia Gardner (2019) – Graduate student at Harvard University (Chemistry)
Virapat Kieuvongngam (2012-2013)- Graduate student at Rockefeller University (Biology)
Fernando Melara Barahona (Summer 2022) – Graduate student at Vanderbilt University
Chidera Okeke (Summer 2021) – Graduate student at MIT (Biology)
Babli Adhikar (Summer 2019) – Graduate student at University of Michigan at Ann Arbor
Eeshit Vaishnav (Summer 2014) – Graduate student at MIT (Biology)
Anjaney Kothari (Summer 2013) – Graduate student at Virginia Tech (Biomedical Engineering)
Machine Learning is a big buzzword in today’s world. Surprisingly, Machine Learning has been around for a long time without your knowledge. Have you ever wondered why YouTube recommends the following video to you? It examines what videos you are watching, what channel the videos are from, how long the videos are, and what topic the videos are about. So, before recommending the following video, YouTube considers all these factors. In short, YouTube “learns” from your viewing habits and suggests similar videos based on that. This is how Machine Learning works; you’ve seen examples for years.
As you are probably aware, Data science encompasses a wide range of domains, one of which is Machine Learning. Data Science comprises several fields and techniques, such as statistics and artificial intelligence, used to analyze data and derive meaningful insights.
Simply put, we contribute to Machine Learning through our daily internet interactions. You see Machine Learning in action every time you search for a coffee maker on Amazon, “top tips to lose weight” on Google, or “friends” on Facebook.
Machine Learning technology enables Google, Amazon, and Facebook search engines to provide relevant recommendations to users.
With the help of ML technology, these companies can monitor your daily activities, search behavior, and shopping preferences.
Machine Learning Is Another Essential Component Of Artificial Intelligence.
Importance Of Machine Learning
The field of machine learning is constantly evolving. With evolution comes an increase in demand and importance. One critical reason why data scientists require machine learning is to make “high-value predictions that can guide better decisions and smart actions in real-time without human intervention.”
Machine learning is gaining popularity and recognition as a technology that helps analyze large amounts of data, easing the tasks of data scientists in an automated process. Machine learning has transformed data extraction and interpretation by incorporating automatic sets of generic methods that have replaced traditional statistical techniques.
Who Is a Data Scientist
Before delving into the significance of Machine Learning for Data Scientists, it’s worth noting who Data Scientists are. We’ll also go over how to become a Data Scientist.
Data Scientists extract meaningful information from massive amounts of data. They identify patterns and assist in developing tools such as AI-powered chatbots, CRMs, and so on to automate specific processes in a company.
Data Scientists perform in-depth statistical analysis using a solid understanding of various Machine Learning techniques and modern technologies such as Python, SAS, R, and SQL/NoSQL databases.
The role of a Data Scientist may sound similar to that of a Data Analyst, but they are not the same.
The Role Of Machine Learning
Machine Learning and Artificial Intelligence have dominated the industry, completely overshadowing all other aspects of Data Science such as Data Analytics, ETL, and Business Intelligence.
Machine Learning algorithms are used throughout the Data Science lifecycle. Machine Learning automates the analysis of large amounts of data and makes data-informed predictions in real time without the need for human intervention: a Data Model is automatically generated and trained, then used to make real-time predictions.
The typical Machine Learning flow begins with you feeding the data to be analyzed, followed by you defining the specific features of your Model and building a Data Model accordingly. The Data Model is then trained using the initial training dataset. Once the Model has been Trained, the Machine Learning Algorithm is ready to make a prediction the next time a new dataset is uploaded.
Let’s look at an example to understand this better. You’ve probably heard of Google Lens, an app that lets you take a picture of someone with good fashion sense and then helps you find similar clothes.
So the app’s first step is recognizing the product it is looking at. Is it a suit, a jacket, or a dress? The features of various products are defined; for example, the app is told that a dress has shoulder straps, no zippers, arm holes on each side of the neck, and so on. As a result, the characteristics of a dress are defined. The app can now create a Dress Model based on the specified features.
When a picture is uploaded, the app examines all the existing Models to determine what it is looking at. The app then uses the Machine Learning Algorithm to make a prediction and displays you with similar models.
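The define-features, build-a-model, predict flow described above can be sketched in a few lines. This toy example is purely illustrative: each "model" is just a hand-written set of expected features, and prediction picks the model sharing the most features with the input. A real system would learn these features from training data rather than having them listed by hand.

```javascript
// Toy sketch of feature-based model matching (illustrative only).
// Each model is a named set of expected features.
const models = [
  { name: 'dress',  features: ['shoulder straps', 'arm holes', 'no zipper'] },
  { name: 'jacket', features: ['zipper', 'sleeves', 'collar'] },
];

// Predict by counting how many input features each model shares,
// and returning the best-scoring model's name.
function predict(inputFeatures) {
  let best = { name: 'unknown', score: 0 };
  for (const model of models) {
    const score = model.features.filter((f) => inputFeatures.includes(f)).length;
    if (score > best.score) best = { name: model.name, score };
  }
  return best.name;
}

console.log(predict(['shoulder straps', 'no zipper'])); // dress
```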
Organizations are increasingly recognizing the value of data in improving their products and services. The main goal of this article was to explain how Data Science and Machine Learning complement each other, with machine learning making a Data Scientist’s life easier.
Data science and machine learning collaborate to provide valuable data insights in real-world scenarios, such as online recommendation engines, speech recognition (in Siri and Google Assistant), and detecting fraud in all online transactions. As a result, it is not incorrect to conclude that Machine Learning can analyze data and extract valuable insights.
As a result, machine learning will soon become one of the most in-demand technologies. It will be one of the most productive applications in the future.
We at Onpassive Digital work towards making Data Analytics and Big Data available to all businesses, helping them achieve their maximum reach and realize their goals.
☑ Gatsby v3
☑ How Gatsby handles Images & Background Images
☑ How to upload and deploy a Gatsby Website
Learn how to handle Images the Gatsby v3 Way
Image processing is what Gatsby does best. But they changed the way images are handled—in a good way—in Gatsby v3.
In the past, using Gatsby Image took 5 to 6 steps to fully take advantage of. Now it's two, maybe three if you want to get super creative. The process is streamlined, making it sooooo much easier to create fast-loading sites with multiple image formats, including the webP format.
Turn Color images into Grayscale—in GraphQL
Look ma! No photoshop needed! Leverage the power of GraphQL and learn how to turn images into grayscale without destroying one pixel.
Not happy with the look? Give it a duotone with CSS. We'll learn how to do just that in the course, making your images flawless & fast loading.
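A grayscale query of this kind might look like the following sketch. The file path is a placeholder, and the field and option names assume gatsby-source-filesystem, gatsby-transformer-sharp, and gatsby-plugin-image are configured; verify the exact option names against your plugin versions.

```graphql
query {
  file(relativePath: { eq: "images/portrait.jpg" }) {
    childImageSharp {
      # grayscale is applied at build time; the original file is untouched
      gatsbyImageData(transformOptions: { grayscale: true })
    }
  }
}
```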
Look Ma! No GraphQL for images!
Say what? Yep! If you dreaded setting up GraphQL all to load up an image in Gatsby Image, you're in luck. Gatsby StaticImage is a dream. One line and Gatsby outputs multiple versions of the image, including the webP format.
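For reference, a StaticImage usage might look like this sketch (the src path and prop values are placeholders, and gatsby-plugin-image must be installed):

```jsx
import * as React from "react"
import { StaticImage } from "gatsby-plugin-image"

// One line of markup; Gatsby generates responsive sizes and a webP
// fallback at build time.
const Hero = () => (
  <StaticImage src="../images/hero.jpg" alt="Hero banner" placeholder="blurred" />
)

export default Hero
```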
Responsive Website Goodness!
We'll leverage the power of React Bootstrap (which has Bootstrap under the hood) to make our website 100% responsive. Oh heck yes.
Social Media Icons—with React Font Awesome
Font Awesome was already awesome. But then they became more awesome with React Font Awesome. Bring in SVG, fast-loading social media icons via React Font Awesome.
In this course, I'll walk you through how to install and load up any icon in their catalogue.
Let's get this site online too!
You've built it. Now let's launch it. I'll also walk you through how to upload your work to Github and then how to deploy the website via Netlify. From a blank canvas (ok, almost blank canvas) to a fully built one-page website that can be seen around the world, this course has it all.
Setting up Gatsby v3
Building an About Landing Page with Gatsby v3
Source Files & Example Images
Installing hello world
Installing gatsby plugins
Installing 3rd party react plugins
The Free Links to all of the plugins
Bringing in css and images
Let the importing begin
Enter the GraphQL World
Setting up the background image
Full-screen background image with color overlay
Bring in Gatsby Helmet
The grid - Container Rows Cols
Aligning content to the middle of the page with Bootstrap flex
Lets bring in some type and stylize it
It's React Font Awesome Time
And now it's Tooltip Time
Gatsby and Static Images
Fixing the spacing
Git Hub and Netlify
Adding the favicon
Senator pushes for stricter regulations on pet pythons after child strangled to death
U.S. Sen. Bill Nelson, D-Fla., lobbied Congress Wednesday to consider a federal ban on pet pythons.
Nelson testified before the Senate Environment and Public Works Subcommittee on Water and Wildlife to discuss a piece of legislation he introduced in February to reclassify pythons as an “injurious animal” and to end the importation of the snakes between state lines.
“It’s time for the federal government to step up and address this ecological crisis,” he said in a prepared statement. “We must change the law and we must do it quickly.”
Last week’s tragic death of a 2-year-old girl living in Sumter County reignited a debate on the dangers of pythons. According to reports, the girl was bitten and constricted by an 8-foot-long albino Burmese python after it escaped from its terrarium.
Paramedics who responded to the home found the child had died from asphyxiation. The snake later escaped from the house and officers from the Sumter County Sheriff’s Office found it alive days later.
Besides the potential risk of owning a snake for families with children, the non-native python species has been dominating the ecological landscape of the Florida Everglades.
“The crown jewel of our national park system has been transformed into a hunting ground for these predators,” said Nelson in his testimony. “It’s just a matter of time before one of these snakes gets to a visitor.”
Python owners have been illegally releasing their snakes into the Everglades for decades, leading to an infestation of an estimated 100,000 pythons.
According to Florida’s Fish and Wildlife Commission, the Burmese python is capable of growing as large as 26 feet long. Not native to Florida, it threatens other species.
Scott Hardin, an exotic species coordinator for the wildlife commission, said there are a number of theories around the infestation of pythons in the marsh lands of South Florida.
One speculation is that Hurricane Andrew destroyed large breeding facilities resulting in loose snakes, while another is that exotic pet dealers attempted to establish their own populations for financial gain.
“We really don’t know,” said Hardin. “We have found over the years many individual pythons here and there. It’s not a new phenomenon — the only new thing is that they are reproducing.”
He said large Burmese pythons may pose a risk to humans, but there is not a great likelihood of an encounter.
“Humans in Florida are more likely to encounter alligators than Burmese pythons,” Hardin said.
Officials are concerned about the large snakes preying on endangered species, although some of the snakes are passing up smaller prey for far bigger targets.
One 2005 media report highlighted the case of a brazen 13-foot Burmese python that attempted to eat a 6-foot alligator.
Hardin said there have been other cases of snakes attacking alligators, but officials are not worried about the trend, especially because in a fight between the two species, alligators have typically come out victorious.
Florida residents are currently allowed to possess a Burmese python, but they have to acquire a $100 annual license for any “Reptile of Concern.” While applying for the license, they have to demonstrate their knowledge of the species.
Every registered snake is implanted with a micro-chip to assist wildlife officials in tracking escaped snakes.
Cape Coral snake handler Stan Delano is selling some of his ball pythons. Unlike the Burmese python that can grow to over 20 feet, the African-native ball pythons do not grow any larger than 5 feet.
“What happens is people get them and don’t realize they get 20 to 24 feet, and they dump them because they can’t feed them,” he said.
Owners of Burmese pythons can spend hundreds of dollars each month purchasing rabbits and other food for the snake and, as a result, inexperienced snake handlers find that caring for such a large snake is daunting.
“I don’t think regular people should have that big of a snake anyhow,” said Delano.
State law also requires the snake’s container to be locked at all times to prevent escape. In the case of the child attacked outside of Orlando last week, the cage had no lock.
According to the Humane Society, at least 17 people have been attacked by pythons, and seven of those attacks were fatal.
Since the early 1990s the city of Cape Coral has been victim to another invasive species, the Nile monitor.
The monitor eats a number of species — fish, birds, mollusks and turtles — but is threatening to the burrowing owl population for consuming owl eggs out of the nests.
I've tried every possible field but cannot find the number of times functions are called.
Besides, I don't understand the Self and # Self columns. What do these two numbers mean?
There are several other ways to accomplish this. One is obviously to add a static hit counter and an NSLog call that increments and prints it. That approach is intrusive, though, and I found a way to do this with lldb.
Now, instead of stopping, lldb will emit info about the breakpoint, including the number of times it has been hit.
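The setup this refers to can be sketched as an lldb session: set a breakpoint on the function of interest (here `myFunction` is a placeholder name), mark it auto-continue so it counts hits without pausing execution, and read the hit count back from the breakpoint listing. The exact output format varies between lldb versions:

```
(lldb) breakpoint set --name myFunction
Breakpoint 1: where = MyApp`myFunction ...
(lldb) breakpoint modify --auto-continue true 1
(lldb) run
...
(lldb) breakpoint list
Current breakpoints:
1: name = 'myFunction', locations = 1, resolved = 1, hit count = 42
```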
As for the discussion between Glenn and Mike on the previous answer, I'll describe a performance problem where function execution count was useful: I had a particular action in my app where performance degraded considerably with each execution of the action. The Instruments time profiler showed that each time the action was executed, a particular function was taking twice as long as the time before until quickly the app would hang if the action was performed repeatedly. With the count, I was able to determine that with each execution, the function was called twice as many times as it was during the previous execution. It was then pretty easy to look for the reason, which turned out to be that someone was re-registering for a notification in NotificationCenter on each event execution. This had the effect of doubling the number of response handler calls on each execution and thus doubling the "cost" of the function each time. Knowing that it was doubling because it was called twice as many times and not because the performance was just getting worse caused me to look at the calling sequence rather than for reasons the function itself could be degrading over time.
While it's interesting, knowing the number of times functions are called doesn't tell you anything about how much time is spent in them, which is what Time Profiler is all about. In fact, since it works by sampling, it cannot tell you how many times a function was called.
It seems you cannot use Time Profiler for counting function calls. This question seems to address potential methods for counting.
With respect to Self and # Self:
From the way the numbers look though, it seems
So this wouldn't tell you how many times a method was called. But it would give you an idea how much time is spent in a method or lower in the call tree.
NOTE: I too am unsure about the various 'self' meanings though. Would love to see someone answer this authoritatively. Arrived here searching for that...
If your objective is to find out what you need to fix to make the program as fast as possible, the number of calls and self time may be interesting, but they are irrelevant.
Look at my answer to this question, in particular points 6 and 8.
EDIT: To clarify the point further, suppose the following is the timeline of execution of the program. Some of that time (in this case about 50%) is spent in an activity that can be removed, if you know what it is, such as needless buried I/O, excessive calls to
Unity 3.x Game Development by Example: Beginner’s Guide
Beginner game developers are wonderfully optimistic, passionate, and ambitious. But that ambition is often dangerous! Too often, budding indie developers and hobbyists bite off more than they can chew. Some of the most popular games in recent memory – Doodle Jump, Paper Toss, and Canabalt, to name a few – have been fun, simple games that have delighted players and delivered big profits to their creators. This is the perfect climate for new game developers to succeed by creating simple games with Unity.
This book starts you off on the right foot, emphasizing small, simple game ideas and playable projects that you can actually finish. The complexity of the games increases gradually as we progress through the chapters. The chosen examples help you learn a wide variety of game development techniques. With this understanding of Unity and bite-sized bits of programming, you can make your own mark in the game industry by finishing fun, simple games.
Unity 3.x Game Development by Example shows you how to build crucial game elements that you can reuse and re-skin in many different games, using the phenomenal (and free!) Unity 3D game engine. It initiates you into indie game culture by teaching you how to make your own small, simple games using Unity3D and some gentle, easy-to-understand code. It will help you turn a rudimentary keep-up game into a madcap race through hospital hallways to rush a still-beating heart to the transplant ward, program a complete 2D game using Unity’s User Interface controls, put a dramatic love story spin on a simple catch game, and turn that around into a classic space shooter with spectacular explosions and “pew” sounds! By the time you’re finished, you’ll have learned to develop a number of important pieces to create your own games that focus in on that small, singular piece of joy that makes games fun.
What you will learn from this book :
- Find out how people are using the amazing new Unity game engine
- Develop and customize four fun game projects, including a frantic race through hospital hallways with a still-beating human heart and a catch game with a jilted lover that morphs into a space shooter!
- Create both 2D and 3D games using free software and supplied artwork
- Add motion, gravity, collisions, and animation to your game objects using Unity’s built-in systems
- Learn how to use code to control your game objects
- Create particle systems like shattering glass, sparks, and explosions
- Add sound effects to make your games more exciting
- Create static and animated backdrops using multiple cameras
- Build crucial elements you’ll use again and again, like timers, status bars, title screens, win/lose conditions, and buttons to link game screens together
- Deploy your games to the Web to share them with friends, family, and adoring fans
- Discover the difference between game skins and mechanics, to earn more money from your games
The book takes a clear, step-by-step approach to building small, simple game projects. It focuses on short, attainable goals so that the reader can finish something, instead of trying to create a complex RPG or open-world game that never sees the light of day. This book encourages readers hungry for knowledge. It does not go into gory detail about how every little knob and dial functions – that’s what the software manual is for! Rather, this book is the fastest path from zero to finished game using the Unity game engine.
Who this book is written for
If you’ve ever wanted to develop games, but have never felt “smart” enough to deal with complex programming, this book is for you. It’s also a great kick-start for developers coming from other tools like Flash, Unreal Engine, and Game Maker Pro.
- Paperback: 408 pages
- Publisher: Packt Publishing (September 2011)
- Language: English
- ISBN-10: 1849691843
- ISBN-13: 978-1849691840
I had a friend who recently took on a post as the PR manager for a company. One of his tasks was to revamp the website. He wasn't a web designer and knew nothing about web design, and, unfortunately, he had to work with the agency that had been hired by the previous PR manager.
The problems started when the agency couldn't deliver some of the basic stuff on time. Faced with a tight deadline, he asked if I could give him a crash course on WordPress and help out with some WordPress design over the weekend. His offer was generous.
I had no issue with helping him. Over the weekend, I got this technophobic PR friend to get his hands dirty with WordPress: dragging and dropping menus, changing some text, and so on. By the end of the weekend, he was able to differentiate between a Page, a Post and a Theme.
However, he still had one problem. When he printed the website (yes, the boss was still old school and wanted to see printed versions of the website), the URLs would be printed next to the hyperlinked word or words.
He asked the agency for help, and the agency said it would take 30 man-days to solve the issue, probably wanting to take advantage of my PR friend's "ignorance" to bill more hours of work.
My PR friend showed me the issue and I instinctively guessed it could be some coding issue with the theme, even though I don't do Wordpress coding.
Give me a WordPress theme and plugins and I can learn how to use them in less than an hour. Coding, however, is like a foreign language to me.
Back to the problem of the URLs being printed out. As they say, Google has all the answers (and your history). A quick Google search showed that others had the same issue with the theme.
In the developer's blog, the simple solution was to remove a line or two of code. I emailed the solution to my PR friend and he forwarded it to the agency.
Voilà! The issue was solved in less than an hour.
What pissed me off was how the agency tried to take advantage of my friend's ignorance to charge him extra. I don't think this is ethical. If you have a solution at hand that is simple to apply, by all means charge an hour or two, but charging for a month of work is ridiculous.
This is not how you build a relationship with your customers, and that relationship could sour instantly if the customer finds out the truth.
Social Media Agencies also find me a pain to work with because I understand how it works and I detest some of the trickery to get quick results.
If you are in marketing, or an entrepreneur who is looking at social and digital marketing, I suggest you do an in-depth study of it. Or maybe get your hands dirty and try the tools out yourself.
In the world of cryptocurrency, Bitcoin has taken the lead as the most popular and valuable digital currency. But what makes it truly revolutionary is the technology behind it - the blockchain. This powerful technology has transformed how Bitcoin transactions are made, paving the way for new possibilities and innovations in cryptocurrency.
With the help of various APIs, developers can now create customized applications that can interact with different cryptocurrency networks, including Bitcoin, enabling faster, more secure, and more efficient transactions.
In this article, we'll explore how these Bitcoin APIs are changing the game for cryptocurrency enthusiasts and investors and what the future holds for this groundbreaking technology. So sit back, relax, and dive deep into the world of APIs.
The Need for Bitcoin APIs
As the popularity of Bitcoin and other cryptocurrencies has grown, so has the need for a more efficient and secure way to conduct transactions. This is where cryptocurrency APIs come in.
These APIs enable developers to create customized applications that interact with various cryptocurrency networks, such as Bitcoin.
This makes it easier for businesses and individuals to send and receive payments, track transactions, and manage their digital wallets.
Without Bitcoin APIs, developers would have to build their own infrastructure for interacting with the Bitcoin network, which would be time-consuming and costly.
Benefits of Using Bitcoin APIs
There are several benefits to using Bitcoin APIs. First and foremost, they enable faster and more efficient transactions. With these APIs, payments can be instantly processed without intermediaries such as banks or payment processors. This reduces transaction fees and speeds up the payment process, making it more convenient for businesses and consumers.
Another benefit of APIs is that they provide enhanced security. These APIs use encryption techniques to ensure that transactions are secure and cannot be tampered with. This reduces the risk of fraud and hacking, a major concern in the cryptocurrency industry.
Bitcoin APIs provide greater transparency and accountability. Since all transactions are recorded on the blockchain, they can be easily tracked and verified. This makes it easier to identify fraudulent activity and ensure that transactions are conducted fairly and transparently.
Understanding Bitcoin API Integration
Bitcoin API integration involves connecting a custom application to various cryptocurrency networks using their respective APIs. This process can be complex and requires a good understanding of programming languages like Python, Ruby, and Java.
Several APIs are available for developers to use, each with its own features and capabilities. Some of the most popular APIs include Blockchain.info, Coinbase, and BitPay. Developers can choose the API that best suits their needs based on security, functionality, and ease of use.
Once the API is integrated into the application, developers can create customized functions that interact with the chosen cryptocurrency network, in this case Bitcoin. These functions include sending and receiving payments, checking account balances, and tracking transactions.
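To make the idea of such wrapper functions concrete, here is a minimal Python sketch. The base URL, endpoint path, and `final_balance` field are hypothetical stand-ins, not any real provider's API; the satoshi-to-BTC conversion, however, is standard (100,000,000 satoshi = 1 BTC):

```python
import json


class BitcoinAPIClient:
    """Tiny sketch of a REST wrapper. The base URL and endpoint
    path below are hypothetical, not a real provider's API."""

    def __init__(self, base_url="https://api.example.com"):
        self.base_url = base_url.rstrip("/")

    def balance_url(self, address):
        # Build the request URL for an address-balance lookup.
        return f"{self.base_url}/v1/address/{address}/balance"

    @staticmethod
    def satoshi_to_btc(satoshi):
        # Bitcoin amounts are commonly reported in satoshi.
        return satoshi / 100_000_000

    @classmethod
    def parse_balance(cls, payload):
        # Parse a JSON response body into a BTC amount.
        data = json.loads(payload)
        return cls.satoshi_to_btc(data["final_balance"])


client = BitcoinAPIClient()
url = client.balance_url("1ExampleAddr")
btc = BitcoinAPIClient.parse_balance('{"final_balance": 150000000}')
print(url)  # https://api.example.com/v1/address/1ExampleAddr/balance
print(btc)  # 1.5
```

A real integration would send the built URL over HTTPS with an authenticated request; the URL construction and response parsing shown here are the parts that stay the same across providers.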
Types of Bitcoin APIs
There are several types of APIs, each with its own unique features and capabilities. The most common types of Bitcoin APIs include:
- Payment APIs enable businesses to accept cryptocurrencies through their websites or mobile applications. Some popular payment APIs include BitPay and Coinbase.
- Blockchain APIs provide developers access to blockchain data, enabling them to track transactions and analyze data. Some popular blockchain APIs include Blockchain.info and BlockCypher.
- Wallet APIs enable developers to create and manage cryptocurrency wallets, which can be used for sending and receiving payments. Some popular wallet APIs include Blockchain.info and Coinbase.
- Exchange APIs enable developers to create custom trading platforms that can be used for buying and selling cryptocurrencies. Some popular exchange APIs include Bitstamp, Kraken, and Binance.
Best Bitcoin APIs:
This is a list of the best Bitcoin APIs that developers, traders, and crypto entrepreneurs can use to build apps that need crypto data.
Token Metrics API: Token Metrics Crypto API offers a complete data solution, delivering real-time and historical market information for cryptocurrencies. The API is designed to help developers and businesses quickly access and analyze the data they need to make informed decisions. It works as a robust data provider with over 14 tested, actionable data endpoints that can empower traders, bots, and platforms. Its high level of accuracy and reliability of data eliminates the need for guesswork. It provides data on exchange rates for various cryptocurrencies. The API can retrieve information for several leading cryptocurrencies, such as Bitcoin, Dogecoin, Litecoin, Ethereum, Binance Coin, and Bitcoin Cash.
Coinbase API: Coinbase is one of the most popular and reliable cryptocurrency exchanges in the world. The Coinbase API allows developers to create applications that interact with Coinbase's trading platform, enabling users to buy, sell, and store cryptocurrencies securely.
Bitfinex API: Bitfinex is another prominent cryptocurrency exchange that offers an API for developers to build trading bots, order management systems, and other applications. The Bitfinex API provides access to real-time market data, order book information, and other exchange features.
Binance API: Binance is a leading cryptocurrency exchange that provides an API for developers to build trading applications, payment gateways, and other cryptocurrency-related services. The Binance API offers access to real-time market data, trading pairs, order book information, and other exchange features.
Kraken API: Kraken is a popular cryptocurrency exchange that offers an API for developers to build trading bots, automated trading systems, and other applications. The Kraken API provides access to real-time market data, order book information, and other exchange features.
BlockCypher API: BlockCypher is a blockchain infrastructure provider that offers an API for developers to build blockchain-related applications. The BlockCypher API supports multiple cryptocurrencies and provides access to blockchain data, such as transaction information, block information, and other features.
CoinMarketCap API: CoinMarketCap is a leading cryptocurrency market data provider that offers an API for developers to build applications that use market data, such as price, market capitalization, trading volume, and other information.
Chainlink API: Chainlink is a decentralized oracle network that provides an API for developers to build smart contracts that can access off-chain data, such as real-world events, market data, and other information. The Chainlink API provides a secure and reliable way to access off-chain data for smart contracts.
The Future of BTC APIs
The future of BTC APIs looks bright, with new innovations and use cases emerging every day. As the popularity of cryptocurrencies continues to grow, we can expect to see more businesses and individuals adopting cryptocurrency APIs for their payment and transaction needs. The increasing demand for seamless integration of cryptocurrencies into various applications and platforms will drive the development of more advanced and user-friendly APIs.
One potential area of growth for cryptocurrency APIs is in the field of decentralized finance (DeFi). DeFi refers to financial applications operating on a blockchain designed to be transparent and decentralized. Cryptocurrency APIs could be instrumental in enabling DeFi applications such as decentralized exchanges, lending platforms, and insurance products.
Another area of growth for cryptocurrency APIs is in the field of micropayments. Cryptocurrency APIs enable instant and low-cost transactions, making them an ideal solution for micropayments involving small amounts of money. This could open up new possibilities for online content creators, publishers, and even IoT (Internet of Things) devices that require microtransactions.
Furthermore, as blockchain technology becomes more widely adopted, cryptocurrency APIs can be used to integrate digital assets into various sectors, including supply chain management, real estate, and gaming. The use of APIs will simplify the process of integrating blockchain and cryptocurrencies into existing systems, making it more accessible for businesses and developers.
As regulatory frameworks around cryptocurrencies continue to evolve, the importance of secure and compliant APIs will also grow. Cryptocurrency APIs will need to adapt to the changing regulatory landscape and ensure that they provide secure and compliant solutions for businesses and individuals.
In conclusion, the future of cryptocurrency APIs is promising, with new innovations and use cases emerging regularly. As the adoption of cryptocurrencies and blockchain technology continues to grow, the importance of APIs will only increase. Developers can expect to see even more innovative features, endpoints, and functionalities in the years to come, further simplifying the integration of cryptocurrencies into various applications and platforms.
Supported tablet not detected
This is most commonly the result of manufacturer driver leftovers.
Uninstall any tablet drivers that are currently installed.
Deleting files is not a proper way to uninstall drivers; it will cause issues. Use the Settings app or Control Panel instead.
- Use the WinUSBCleanup script to automatically clean up any leftovers that uninstallers failed to clean up.
Teleporting cursor position
This occurs because another tablet driver is sending input. To resolve this, uninstall all other tablet drivers on your computer, then replug your tablet.
Input does not work in osu!
Map raw input to osu! window in the ingame settings.
OpenTabletDriver has since v0.4.0 supported Raw Input as it uses the Windows SendInput API to position the cursor.
Therefore, it is not necessary to disable raw input itself.
Windows Ink pressure support
OpenTabletDriver has many plugins that implement different features. WindowsInk is one of them; as its name suggests, it allows use of the Windows Ink pressure API. To use it, follow the steps below.
- Install VMulti Driver (This is NOT VMultiMode)
- Follow the instructions from the WindowsInk wiki
- Make sure that the application you are trying to draw in is set to WindowsInk/Windows 8+ mode and that the brush you are using has pressure support!
Note: Recently, a change in Windows made it so Windows Ink's and normal Mouse's cursor position is handled separately. This makes it so your cursor will appear to "jump" when switching from tablet to mouse while using Windows Ink output modes. This is not a bug of OpenTabletDriver but rather one of Windows.
Stuck at connecting to Daemon
This is usually caused by one of two things: either you didn't follow the installation guide correctly, or the settings file is corrupted. To remedy this, follow the steps below.
- Press Win + R and type in
- Delete or move the settings.json file from inside this folder.
- If the issue persists, also remove the
Open at startup
Starting OpenTabletDriver at login can be performed quite easily. This can be done for nearly any program, although there may be better ways to do it for those programs.
- Right click OpenTabletDriver.UX.Wpf.exe and create a shortcut.
- Press Win + R and type in
- Move the shortcut into the folder that opened.
Starting OpenTabletDriver minimised can also be done by changing the properties of a shortcut. To do this, right click the OpenTabletDriver shortcut, go to properties > run > minimised.
OpenTabletDriver gives an Administrator warning
When starting OpenTabletDriver you may get an error warning you about administrator permissions. OpenTabletDriver shouldn't be run as administrator; doing so causes the plugin manager, FAQ page and Tablet debugger to stop working, along with other features.
- Press the Windows key and search for User Account Control.
- Move the slider to one of the top 2 options.
- Click OK, then restart your computer.
If this does not solve the issue, check that you are not using the "Administrator" named account as it overrides User Account Control.
Blank window or crashing when opening OpenTabletDriver
This can be caused by RivaTuner Statistics Server attempting to hook onto OpenTabletDriver, preventing the UX from responding due to failed hooks. If you use this application, make sure it doesn't hook onto OpenTabletDriver.
- Open RivaTuner Statistics Server.
- Click on Add button found in the bottom left corner.
- Locate OpenTabletDriver.UX.Wpf.exe on your computer, then select it.
- Click on Application Detection Level then select "None".
Using VMulti to play Valorant
Vanguard (Valorant's anticheat) requires a workaround to play Valorant with a tablet. Because Vanguard requires kernel-level input, you can use VMulti to mitigate some of the restrictions it uses to stop tablet use. However, there will still be some restrictions, because you are not playing with a physical mouse.
- Install VMulti Driver
- Install and setup the VMultiMode plugin.
- Run Valorant by tapping with your tablet (important, because Valorant only uses the first "mouse" input).
- Do not move your mouse while Valorant is still loading (this will make Valorant use the mouse over the tablet).
As VMulti is considered a separate input source from an actual physical mouse, Vanguard imposes several limitations to it.
- You won't be able to press left click while holding a non-modifier keyboard key (for example WASD) or Shift; we will refer to these keys from here onward as protected keys.
- Every time you press a protected key, the next two left clicks will always be dropped.
- If left click is held first and then a protected key is pressed, the left click will sometimes be dropped.
There is no fix for this; even Logitech G Hub, Razer Synapse, HawkuTD, DevocubTD, and the XP-Pen, Huion, Gaomon, and Veikk drivers are affected without exception. As long as the input source is not a real physical mouse, Vanguard will impose these quirks intentionally.
The purpose of this message is to provide you with information to guide how you describe the Intent to Mentor mentoring activities. The activities that you enter for mentoring are extremely important and must be clear, concise and detailed. Mentoring is the ‘action’ completed as tied to the mentoring activity that will result in improving teaching and student learning. WHAT was the mentoring activity? Be specific and provide a detailed description of the activity. You should relate the description of the mentoring activity to the Illinois State Standards and/or your NBPTS Standards. If you are providing feedback to a candidate, please describe the nature of the comments/feedback.
PLEASE NOTE that mentoring descriptions should be different/individualized for each session. Do not give the exact same description for each mentoring session you submit.
Mentoring is not planning, preparation, developing, editing, copying, emailing, etc. A list of samples of language to avoid is included in the table below. Take time to review this list.
Remember, hours logged do not include work that you are already being paid for or work within your contracted school hours—in other words, NO double dipping. Hours logged for time already being paid for will be rejected.
You will be contacted if your mentoring submission does not explain your mentoring activity or is not relevant to mentoring. You will be requested to resubmit the mentoring hours using the correct language which will be inserted over your original submission.
Unfortunately, we can no longer approve mentoring hours unless the explanation of your mentoring activity is descriptive, detailed and focused on the mentoring activity as it relates to the standards and the goal of the activity.
Call the director of the NBRC directly if you have questions or require guidance on writing your mentoring activity description. We are here to support you and the work you do (309-438-1833).
Examples of language to AVOID when describing mentoring activities
Due to the individual nature of the mentoring activities provided by NBCTs,
|   |   |   |
|---|---|---|
| Helped candidates. | I am a facilitator for NBCT candidates. | I met and reviewed topics with other NBCTs. |
| Prepared for upcoming candidate support session. Made copies, organized materials, secured meeting location, etc. | Planning for a professional development presentation at my school. | Presented cohort session with the 9 candidates. Lead the sessions, discussions and assisted in writing. |
| Read for a candidate. | Participated in a webinar. | Completed sessions with candidates. |
| Read and responded to component. | Prepped (scanned paper, set up google, | Discussion and answering questions. |
| Helped a teacher with technical issues during lunch. | Emailed a candidate about missing meeting, what was missed, what to do for next time. | The meeting with the candidate went well. We talked about our students. |
| Read and commented on components, videos, draft, and writing about planning. | Technology - developing and editing lessons. | This provides me 3 mentoring hours for all my sessions I have completed so far in working with NBCT candidates. |
| The candidate felt worried. | I am a facilitator for a candidate cohort. | Today we split the cohort. Some worked on Component 4 under my guidance and others were introduced to Component 2. We went through directions, asked questions for clarity and discussed a game plan. |
| Talked to new teachers during break. | Sent a reminder for candidates to send me drafts by deadline. | Provide guidance and support based on the candidate's needs. |
| Conference call. | For two hours, I prepped Session One. | Component 4 Session 1 District Cohort. |
| Debrief after candidate meeting and plan for next meeting. | Emailed candidates. | Orientation to renewal process and requirements. |
| Session with candidates meeting after school. | Presented information during a staff meeting and shared what I learned at a conference. | Feedback for components. |
| Met with candidates. | Mentoring. |   |
Web application development is the process of creating software that is hosted on distant servers and delivered to a user’s device through the Internet. A web application (web app) is accessible through the internet rather than being downloaded.
What is meant by web application development?
The process of planning, constructing, testing, and delivering web-based software is known as web application development. When a company wants an online presence, it might build a bespoke web application. Web apps are interactive pages that run on a web server and allow for user interaction.
What is an example of a web application?
Google Apps and Microsoft 365 are two popular examples of web applications. Google Apps for Work includes Gmail, Google Docs, Google Sheets, Google Slides, online storage, and more. Online document and calendar sharing are among the other features.
What is the difference between web and application development?
A website provides information, while a web application requires input from the end user. A website with a small retail component, for example, might still be considered a basic informational website.
What do you need to develop a web application?
Web application development in 7 steps:
1. Identify your problem.
2. Plan the workflow.
3. Make a web app prototype.
4. Validate your prototype.
5. Build your app.
6. Test your app.
7. Host and launch your web app.
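To make the "build your app" step concrete, here is a minimal sketch of a web application using only Python's standard-library `wsgiref` module; the route and response text are illustrative, not from the article.

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    """A minimal WSGI application: reads the request path and
    returns a plain-text greeting."""
    path = environ.get("PATH_INFO", "/")
    body = ("Hello from " + path).encode("utf-8")
    start_response("200 OK", [
        ("Content-Type", "text/plain; charset=utf-8"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

# To host the app locally (blocks until interrupted):
#   with make_server("", 8000, app) as httpd:
#       httpd.serve_forever()
```

Real web applications layer routing, templates, and persistence on top of this same request/response cycle; frameworks automate the boilerplate but the underlying shape is the same.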
Is Facebook a web application?
Web applications, often known as web apps, are an important aspect of how the internet operates. Popular online programs include Facebook, Gmail (or any other popular email service), and even Udacity’s classroom.
Related Questions and Answers
How can I become a web application developer?
Five steps to become a Web Developer:
1. Learn the foundations of web programming.
2. Select a development specialty.
3. Learn the important web development programming languages.
4. Develop your Web Developer abilities by working on projects.
5. Create a web development portfolio.
Is WhatsApp a web application?
WhatsApp Web is a web-based version of the WhatsApp smartphone app that lets you use your WhatsApp account on your computer. It enables you to chat with people on WhatsApp from your computer instead of your phone.
What is the best platform to develop web applications?
Among the best web development software for web developers:
- WordPress – the most widely used platform for creating websites.
- Mockplus – an online prototyping tool that does everything.
- Macaw – the best web design program for people who know how to code.
- Weebly – the most user-friendly website builder for both beginners and professionals.
Who earns more Web Developer or software developer?
Because of their considerable knowledge and skill set, software engineers often receive higher pay. Web developers, on the other hand, have a huge market and are often paid by project, so depending on their workload they may make more than a software developer.
What is better software developer or web development?
Gaming and file-handling apps built as desktop software perform better, while data centralization and multi-user applications built for the web perform better. Beyond that, the most significant distinction between software development and web development is the interface.
How is a web application built?
Is YouTube a web application?
YouTube Music and Google News were already available as Progressive Web Apps. YouTube may now also be downloaded as a Progressive Web App (PWA) for a more customized experience.
Is Amazon a website or web application?
Complex websites like online commerce sites and media portals show dynamic content; Google, Amazon, and Netflix, for example. Web applications are websites whose content may be modified by visitors using a browser.
Is Instagram a web app?
Although most Instagram users presumably use it on mobile devices, it is also available as a web app. Simply open your browser, go to Instagram.com, sign in, and begin scrolling.
What is a web developer salary?
Median pay for a web developer: $64,970 per year (2015).
What do web developers get paid?
What Does a Web Developer Get Paid? In 2020, the median income for web developers was $77,200. That year, the top 25 percent earned $107,620, while the bottom 25 percent earned $55,390.
What job does a web developer do?
The work of a web developer is to construct websites. Many web developers are also responsible for the website’s speed and capacity, in addition to ensuring that it is aesthetically attractive and simple to browse.
What is Web application in simple words?
A Web application (Web app) is software that is stored on a remote server and delivered through the Internet using a browser interface. Web services are, by definition, Web applications, and many, but not all, websites feature Web apps.
Can I video call on WhatsApp web?
If you have WhatsApp Desktop installed on your computer, you can make free voice and video calls to your contacts. Windows 10 64-bit version 1903 and later allow desktop calling. macOS 10.13 and later are supported.
Is using WhatsApp web safe?
When you visit web.whatsapp.com, WhatsApp verifies that the code has not been tampered with in any way and validates that it is safe to use. Because of this added layer of security, WhatsApp has a high degree of security among end-to-end encrypted messaging services (when using the web app).
Is Google a website or web application?
Websites with functionality and interactive aspects are known as web apps. Gmail, Facebook, YouTube, Twitter, and other online programs are all dynamic and designed to keep users engaged.
Is Netflix a web app?
Anywhere, at any time. Sign in with your Netflix account to view movies and TV shows on the web at netflix.com or on any internet-connected device that has the Netflix app, such as smart TVs, smartphones, tablets, streaming media players, and game consoles.
What is Java web application?
A Java web application is made up of both dynamic resources (such as Servlets, JavaServer Pages, Java classes, and JARs) and static resources (HTML pages and images). A WAR (Web ARchive) file may be used to deploy a Java web application.
Is HTML a web application?
What software do most web developers use?
How long does it take to build a web app?
Building a front-end application with backend infrastructure takes an average of 4.5 months. However, if the scope is rather large, it may take a few more months. If the team has some ready-made components, however, the project may be completed and modified in 3.5 months.
Which language is best for web development?
Are web developers in demand?
Is there a big need for web developers? Yes. Web development employment is expected to grow by 8% between 2019 and 2029, twice the national average for all professions, according to the BLS.
Do web developers make apps?
Web developers may use whatever software their firm prefers to create websites or online apps. Mobile app developers, however, must use specialized software; they will typically use Android Studio to create an app intended for the Android store.
Web application development is a term that describes the process of creating software applications using web technologies. It includes front-end and back-end development, as well as server-side programming. The salary for web application developers varies depending on experience level and the kind of company they work for.
Can I submit just two great letters of recommendation if the application calls for three?
I'm getting ready to apply for a graduate program in biophysics at a few different universities. I'd just like to ask a question regarding the quantity and quality of letters of recommendation.
I'll first give some background about my current application. I have a 3.0 cumulative GPA (there is a valid reason for a few low grades early in my career which I won't get into here, but my GPA has increased dramatically since then), my physics GRE score is a little low, and my general GRE is above the 55th percentile in all three sections.
I have two strong letters of recommendation. One from the director of my lab, and one from the head of experimental physics at my university. My question is regarding a third letter. Most applications state that 3 letters are required. I have two options for the third letter at this point; the first being from my advisor, who I've never taken a class with but knows me well, and the other is from a professor who said he'd write me one but it wouldn't be very strong as he can only discuss my character and, to quote him, "general interest in science".
I'm afraid that a weak third letter would hurt my application, even if the other two letters are strong. How damaging would submitting only two letters be when applying to a physics graduate program?
Thank you.
What do you mean by "advisor"? Is this a mentor for a research program? Or an appointed academic advisor?
Sorry, I should have been more clear. An appointed academic advisor.
I haven't sat on a grad admissions committee, but for many other applications, the story is that there are tons of applicants, and people look for reasons to not study an application carefully. Not including the required number of letters would be an example of an easy excuse to ignore an application.
When someone says they can only discuss your character and strong interest in science, that sounds like it's not going to be a great letter. If your advisor can write you what he says is a strong letter (and possibly give some context to your GPA), that might be valuable even if he hasn't had you in a class.
I voted to close this as “primarily opinion based” because I misclicked. I meant to close this as “off-topic: seeking personal advice”
If the application calls for three letters of recommendation and you only submit two, your application may be considered incomplete and may not be considered at all.
So if you don't have a third letter, you're going to have a problem. But if the third letter is weak, it will hurt your application, too, which means you're stuck between a rock and a hard place.
In the vast realm of information technology, Universally Unique Identifiers, or UUIDs, play a pivotal role in ensuring data integrity and uniqueness. One such UUID, d3e295e6-70c8-411d-ae28-a5596c3dbf11, holds its own significance. This article aims to unravel the importance and applications of this specific UUID, shedding light on its role in various technological domains.
UUIDs are strings of characters designed to uniquely identify information in a universally unique manner. The format of a UUID is standardized, typically represented by a 32-character hexadecimal string, separated by hyphens into five groups (8-4-4-4-12). d3e295e6-70c8-411d-ae28-a5596c3dbf11 adheres to this structure, embodying its uniqueness.
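As a sketch of the layout described above, Python's standard `uuid` module can parse this article's identifier and expose its structure; the printed values follow from the standard format, not from anything outside this article.

```python
import uuid

# Parse the UUID discussed in this article; uuid.UUID rejects any
# string that does not follow the standard hexadecimal layout.
u = uuid.UUID("d3e295e6-70c8-411d-ae28-a5596c3dbf11")

# The canonical form groups 32 hex digits as 8-4-4-4-12.
groups = str(u).split("-")
print([len(g) for g in groups])   # [8, 4, 4, 4, 12]

# The "4" at the start of the third group ("411d") marks this as a
# version-4 (randomly generated) UUID.
print(u.version)                  # 4
```

A malformed string (wrong length, non-hex characters) raises `ValueError`, which makes `uuid.UUID` a convenient validator as well as a parser.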
Uniqueness and Data Integrity:
The primary purpose of UUIDs is to ensure the uniqueness of identifiers across distributed systems. In scenarios where multiple entities might generate identifiers independently, the probability of a collision (two entities generating the same identifier) is vanishingly small, though never strictly zero. d3e295e6-70c8-411d-ae28-a5596c3dbf11 serves as a robust and distinctive identifier, contributing to enhanced data integrity.
Application in Databases:
UUIDs find widespread use in databases, where they serve as primary keys for records. Unlike incremental integers, which might pose challenges in distributed systems, UUIDs offer a decentralized solution. d3e295e6-70c8-411d-ae28-a5596c3dbf11 can be employed to uniquely identify records, facilitating efficient data retrieval and management.
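A brief sketch of this pattern, using Python's standard `uuid` and `sqlite3` modules; the table and column names here are made up for illustration.

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id TEXT PRIMARY KEY, payload TEXT)")

# Each writer can mint its own key locally; unlike an auto-increment
# integer, no central counter needs to be coordinated.
record_id = str(uuid.uuid4())
conn.execute("INSERT INTO records VALUES (?, ?)", (record_id, "example"))

row = conn.execute(
    "SELECT payload FROM records WHERE id = ?", (record_id,)
).fetchone()
print(row[0])  # example
```

Storing the 36-character canonical string is the simplest choice; databases with a native UUID type (or a 16-byte binary column) trade readability for a smaller index.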
Integration in Software Development:
In software development, UUIDs are integral for various purposes, including session management, transaction tracking, and entity identification. The UUID d3e295e6-70c8-411d-ae28-a5596c3dbf11 can be seamlessly integrated into applications to create unique markers for different entities, ensuring a standardized and reliable approach to identification.
UUIDs contribute to enhanced security by making it challenging for malicious actors to predict or manipulate identifiers. The sheer size of the UUID space makes it computationally infeasible to guess or brute force valid identifiers. Therefore, d3e295e6-70c8-411d-ae28-a5596c3dbf11 adds an additional layer of security to systems leveraging it.
The standardized format of UUIDs ensures cross-platform compatibility. Whether used in web applications, mobile platforms, or backend systems, d3e295e6-70c8-411d-ae28-a5596c3dbf11 can seamlessly integrate into diverse technological ecosystems, providing a consistent and reliable identification mechanism.
UUIDs, exemplified by the distinctive d3e295e6-70c8-411d-ae28-a5596c3dbf11, are indispensable tools in the realm of information technology. Their role in ensuring uniqueness, data integrity, and security makes them a cornerstone in various applications, ranging from databases to software development. As technology continues to advance, the significance of UUIDs like d3e295e6-70c8-411d-ae28-a5596c3dbf11 will persist, underlining their importance in the digital landscape.
Virtualization, docker, automated tests and more!
This summer I am interning with the QE team at Red Hat on the Pulp Project. To do the work of re-creating bugs and writing tests, I will need to install various versions and builds of Pulp. To enable me to do this repeatedly, without endangering my local system or generating a difficult-to-duplicate application state, this week I was trained on the various tools the Pulp QE team uses to automate their workflow. This included setting up virtual machines on my local system using libvirt, as well as scripting the cloning of these VMs and using Ansible to install various builds of Pulp on each VM. Additionally, I was introduced to Beaker, an internal tool for provisioning machines, and was shown how to use a Jenkins job the Pulp team uses to install various versions of Pulp.
Controlling the pulp-server remotely
Pulp can be installed in a distributed fashion and controlled remotely by the pulp-admin client. I used Pulp both to pull in existing RPM repositories and to create a new RPM repo with a few RPM files, and then enabled a system to install the package via dnf from the repository hosted on the Pulp server.
Midway through the week I took a brief break from working on Pulp while a member of the Satellite 6 QE team (Satellite 6 is a downstream project of Katello) came over and led us through a demo of Docker and how he uses it to run an automated test suite against Satellite 6 (which uses Pulp).
At one point I had ten containers running this test suite all hammering a Satellite6 instance! Watching my system monitor was quite entertaining as the many thousands of tests ran. Then I walked through a demo of how Satellite6 works from a user perspective. This gave me a better idea of what role Pulp plays in Satellite6. Customizing the content provided to different machines or groups of machines is a powerful tool for system administrators using Satellite6, and Pulp is the workhorse behind this functionality!
Finally, I got into working with pulp-smash, the test suite for Pulp. After setting up a Python virtual environment and installing the developer requirements, I did some exploration of the project in IPython. Armed with this knowledge and our different Pulp VMs, I wrote a test ensuring that, if the Pulp version being tested was recent enough, a command could be executed and returned a successful exit code. By the end of the day Friday I had one pull request updating Pulp's documentation, and another adding a test to Pulp Smash. I had a lot of fun, and I'm looking forward to all I have to learn and the opportunity to contribute to such an active open source project.
What-if noise for Microsoft.Network/networkInterfaces and Microsoft.Network/privateEndpoints
Describe the noise
Resource type (i.e. Microsoft.Storage/storageAccounts)
Microsoft.Network/networkInterfaces@2022-01-01
Microsoft.Network/privateEndpoints@2022-01-01
apiVersion (i.e. 2019-04-01)
2022-01-01
Client (PowerShell, Azure CLI, or API)
az cli
Relevant ARM Template code (we only need the resource object for the above resourceType and apiVersion, but if it's easier you can include the entire template)
#1
param networkInterfaces_yangshenvm0012632_name string = 'yangshenvm0012632'
param publicIPAddresses_yangshenvm0012_ip_externalid string = '/subscriptions/79ed831f-c7b8-402e-a226-de8aa5f4764a/resourceGroups/yangshentest001/providers/Microsoft.Network/publicIPAddresses/yangshenvm0012-ip'
param virtualNetworks_yangshenrg001_vnet_externalid string = '/subscriptions/79ed831f-c7b8-402e-a226-de8aa5f4764a/resourceGroups/yangshenrg001/providers/Microsoft.Network/virtualNetworks/yangshenrg001-vnet'
resource networkInterfaces_yangshenvm0012632_name_resource 'Microsoft.Network/networkInterfaces@2022-01-01' = {
name: networkInterfaces_yangshenvm0012632_name
location: 'eastus'
kind: 'Regular'
properties: {
ipConfigurations: [
{
name: 'ipconfig1'
properties: {
privateIPAddress: '<IP_ADDRESS>'
privateIPAllocationMethod: 'Dynamic'
publicIPAddress: {
id: publicIPAddresses_yangshenvm0012_ip_externalid
}
subnet: {
id: '${virtualNetworks_yangshenrg001_vnet_externalid}/subnets/default'
}
primary: true
privateIPAddressVersion: 'IPv4'
}
}
]
allowPort25Out: true
dnsSettings: {
dnsServers: []
}
enableAcceleratedNetworking: true
enableIPForwarding: false
}
}
#2
param privateEndpoints_yangshenpe001_name string = 'yangshenpe001'
param storageAccounts_yangshendls005_externalid string = '/subscriptions/79ed831f-c7b8-402e-a226-de8aa5f4764a/resourceGroups/yangshentest001/providers/Microsoft.Storage/storageAccounts/yangshendls005'
param virtualNetworks_yangshentest001_vnet_externalid string = '/subscriptions/79ed831f-c7b8-402e-a226-de8aa5f4764a/resourceGroups/yangshentest001/providers/Microsoft.Network/virtualNetworks/yangshentest001-vnet'
param privateDnsZones_privatelink_blob_core_windows_net_externalid string = '/subscriptions/79ed831f-c7b8-402e-a226-de8aa5f4764a/resourceGroups/yangshentest001/providers/Microsoft.Network/privateDnsZones/privatelink.blob.core.windows.net'
resource privateEndpoints_yangshenpe001_name_resource 'Microsoft.Network/privateEndpoints@2022-01-01' = {
name: privateEndpoints_yangshenpe001_name
location: 'eastus'
properties: {
privateLinkServiceConnections: [
{
name: privateEndpoints_yangshenpe001_name
properties: {
privateLinkServiceId: storageAccounts_yangshendls005_externalid
groupIds: [
'blob'
]
privateLinkServiceConnectionState: {
status: 'Approved'
description: 'Auto-Approved'
actionsRequired: 'None'
}
}
}
]
manualPrivateLinkServiceConnections: []
subnet: {
id: '${virtualNetworks_yangshentest001_vnet_externalid}/subnets/default'
}
customDnsConfigs: []
}
}
resource privateEndpoints_yangshenpe001_name_default 'Microsoft.Network/privateEndpoints/privateDnsZoneGroups@2022-01-01' = {
parent: privateEndpoints_yangshenpe001_name_resource
name: 'default'
properties: {
privateDnsZoneConfigs: [
{
name: 'privatelink-blob-core-windows-net'
properties: {
privateDnsZoneId: privateDnsZones_privatelink_blob_core_windows_net_externalid
}
}
]
}
}
Expected response (i.e. "I expected no noise since the template has not been modified since the resources were deployed")
What-if showed deletions on these fields, but the deployment does not actually delete them.
Current (noisy) response (either include a screenshot of the what-if output, or copy/paste the text)
Additional context
Add any other context about the problem here.
Any update on the privateDnsZoneGroups noise, which appears for every private endpoint? It is very difficult to work with the what-if output, since if we have 15 private endpoints this modification is displayed 15 times. Please give us some update on the ETA for a fix.
I have the same issue. It would be nice to be able to somehow state that this is a false positive somewhere.
~ Microsoft.Network/networkInterfaces/nic02 [2022-07-01]
  - kind: "Regular"
  - properties.allowPort25Out: true
  + properties.auxiliaryMode: "None"
Same problem. Every What-If deployment shows that my Microsoft.Network/networkInterfaces kind and properties.allowPort25Out values will be deleted. These seem like internal properties, and there is no way to set them using a template.
Objective-C memory management
I have some questions about Objective-C's memory management.
Let's say:
NSString *test = [[NSString alloc] init];
test = @"msg";
[object setStr1:test]; // declared as: @property(copy, readwrite)
[object setStr2:test]; // declared as: @property(retain, readwrite)
[object setStr3:test]; // declared as: @property(assign, readwrite)
test = @"some other string";
I think str1 will have a copy of test's content: str1 will point to an address in memory (on the heap) that contains "msg", and this address is not the same as the one test points to. Right?
About str2:
1. What does it store? I guess the same address that test holds, but the assignment will increase the object's reference count to 2.
2. When I change what test points to, what does str2 hold? I guess it still points to "msg".
About str3: it's incorrect, right? What does assign do?
Thanks.
Bonus question:
NSString *test = [[NSString alloc] init];
test = @"msg";
test = @"something";
Should I release test before changing its content?
The most important thing to take away here: The assignment operator = never mutates (i.e. changes) an object. Mutating an object can only be accomplished by sending it messages (e.g., sending appendString: to an NSMutableString). The assignment operator simply causes a pointer to point to a different object than it did before.
Thus, it is incorrect to say:
(1) NSString * test = [[NSString alloc] init];
(2) test = @"msg";
Line (1) creates an NSString object, and assigns test to point to it. Line (2) does the same thing: it creates a new, unrelated NSString object, and assigns test to point to it. Now the original NSString created by line (1) has nothing pointing to it, and is leaked.
Also, you never need to alloc a string literal; the compiler does this implicitly when you use the @"..." syntax. In general, you will very rarely have to use [NSString alloc] at all (only when you want to use the various init* methods, such as initWithFormat:, etc.).
str1 will point to a distinct copy of the test string. (Errata: According to Eiko, the receiver will simply treat this as a 'retain' if it is immutable. This makes no practical difference if you are behaving correctly.)
str2 will point to the same location as test, and the retain count of the object there will be incremented.
str3 will point to the same location as test, but the retain count will not be incremented.
Generally speaking, strings are immutable, so you cannot change their content. You may have to watch out for instances of NSMutableString, however, which is a subclass of NSString. This is why many people recommend copying strings instead of retaining them, so that, should the string be mutated by another part of the program, your object's copy will be unaffected.
That means if test were an NSMutableString and you changed its content, str1 would still read the original value while str2 and str3 would show the new value.
I have edited my answer to more clearly spell out the depth of the problem here, Eiko, having not seen your answer in the meantime.
The secret retain used by NSString's copy is an implementation detail. Since NSString is immutable, there is no point in allocating another instance. For most classes, it would make a true copy, and can be treated that way even with NSString.
so, what will - (NSString *) stringByAppendingFormat:(NSString *)format ... return?
@Chuck - It is mentioned that this detail shouldn't matter.
@jhon - It will return a new, autoreleased string, leaving the receiver (the string you invoked the method on) untouched. All methods whose names sound like a description of an object behave this way.
With your second line you already leak memory, because you reassign test to a new object and lose the reference to the object you created in your first line.
Your conclusion to str1 is wrong, because the copy might just return self for immutable types (they don't change anyway, so often the system is smart enough to keep them around just once).
str2 will indeed point to the same object and just increment the retain count. You cannot change test's content, as it is immutable. If it was an NSMutableString, then yes, str2 would show this change, too.
assign for str3 will just "copy the address", so it points to the same object (as str2), but it does not retain it, so it doesn't claim any ownership/interest in that object. If you release it elsewhere, str3 will point to dead memory.
Bonus: As in my introduction, yes, you leak. Assigning @"msg" makes you leak the original object, as @"msg" will create a new one.
No, @"msg" won't be autoreleased, but it is a constant and therefore does not need to be released.
Where is @"msg" stored? On the heap?
@jhon, those strings literals are stored in the data segment of the binary. From there they get loaded into memory (just as the code is) when the program starts. This is not the heap.
Thanks, removed that sentence.
Although NSString literals are not stored on the heap, this makes little practical difference to the code you write.
@sven: OK, but then what happens if I have something like this: NSString *name = @"abc"; name = someTextField.text;? I think @"abc" will be stored in the data segment as a constant, but what about someTextField.text (a user typed their name into the text field)? Where is that data stored?
@jhon: So name first points to the @"abc" string literal which lies somewhere in the data segment of the program. And after the second statement name points to a different NSString object which probably lives somewhere on the heap. But you cannot know this for sure, and it does not matter. Just use retain and release in the right places and you don’t have to worry about any of this.
Urgent "Timeline error: [API] 34: Sorry, that page does not exist"
After today's changes, downloading from a https://twitter.com/user page displays "Timeline error: [API] 34: Sorry, that page does not exist".
Started getting this error as well, however, it still appears to function and download files.
Maybe it is caused by Musk's new viewing policy?
I have encountered the same error; meanwhile, the userscript https://greasyfork.org/scripts/423001-twitter-media-downloader shows the same problem.
Started getting this error as well, however, it still appears to function and download files.
It does? How do you get it to still function regardless of that error?
It can download individual images (so can right click > save as...), but videos and bulk account downloading are both broken.
Downloading a video gives:
Failed to get tweet information !
and bulk downloading gives the aforementioned:
(*) Timeline error: [API] 34: Sorry, that page does not exist
I'm not knowledgable about webdev stuff, but it was mentioned in https://github.com/dimdenGD/OldTwitter/issues/90#issuecomment-1616661075 that there were some API changes with the recent changes. Most issues with that addon have been fixed now though, so hopefully it can be sorted here too.
I wonder if
Started getting this error as well, however, it still appears to function and download files.
this was only tried on images from a single tweet?
It works again now. I think there was a problem with the Twitter API yesterday.
Browser? Verified account? Which version of twMediaDownloader? Still work now?
It works again now. I think there was a problem with the Twitter API yesterday.
Here it still doesn't work. If I only leave the boxes "Images", "Videos(GIF)", and "Videos" checked, the program downloads either until 156 MiB has been reached, or the number 840 or 841 of tweets have been searched. Then it glitches out due to Timeline error API 34 again. The folder with the media captured so far is downloaded, though. It just never goes beyond those numbers.
I think now it gives Error 34 even on a successful download using these options, so it isn't currently an issue there. I've checked after downloading an account, and the last media item downloaded matches the first posted on that account.
Here it still doesn't work. If I only leave the boxes "Images", "Videos(GIF)", and "Videos" checked, the program downloads either until 156 MiB has been reached, or the number 840 or 841 of tweets have been searched. Then it glitches out due to Timeline error API 34 again. The folder with the media captured so far is downloaded, though. It just never goes beyond those numbers.
I've downloaded past that filesize, but not that many tweets, so it could be that you just reach the end of that accounts media.
I'm not verified and definitely downloaded far past the rate limit in media today, but it's still broken in the same way with text-only tweets and RTs, so there's still a related issue.
Just tried one with more images, it does seem to stop around 840-850 images in, independent of filesize. Probably a new API thing to go along with the rate limit.
Either that, or around 2019 date-wise when I tried it with two different profiles. But someone above said Gallery-DL works, so maybe there's a hint there.
Hi. I just tried using gallery-dl (on macOS, through Homebrew). I tried it on a Twitter page with more than 5k images. It downloads more or less the same amount as the twMediaDownloader app (around 1k images). Then it stops and shows "404 Not Found (Sorry, that page does not exist)."
I guess they did something more complex to the API, something not even gallery-dl can work around so far.
Hi. I just tried using gallery-dl (on macOS, through Homebrew). I tried it on a Twitter page with more than 5k images. It downloads more or less the same amount as the twMediaDownloader app (around 1k images). Then it stops and shows "404 Not Found (Sorry, that page does not exist)."
I guess they did something more complex to the API, something not even gallery-dl can work around so far.
The thing is, I tried setting the "before" datetime to around the point where it encountered the error, and it still won't go further than that.
The thing is, I tried setting the "before" datetime to around the point where it encountered the error, and it still won't go further than that.
@CinderBH are you sure that tweets exist prior to that point that match your filters?
Does anyone know how to make it work for searches or hashtags? It doesn't pull any tweets when done as searches.
Okay, so I tried scrolling the media tab of the artist I was trying to backup manually and it does stop loading tweets in that spot. But I went around to the Danbooru tag checking sources, and older tweets are still there. So, something is wrong with the scrolling, or it's being throttled.
True, after trying it with three artists, it was just as you described. The error occurs when scrolling to the oldest visible tweet. However, by using advanced search options such as adding the "until" keyword, older tweets can still be viewed. So this could possibly be a hint toward solving the problem? I don't know.
I was looking into a way to get the information, and I noticed there is an XHR request to TweetDetail that happens when loading a tweet. Which seems to contain the information this extension used to grab via the token.
Can confirm that this exists and is a pressing issue.
I have examples of tweets going back to 2018 for an account, tweets I've confirmed are still up, but tw-media-downloader can't go back that far, capping out at around 850 tweets sometime in 2022.
Even if you start the download right before where it happens, it will still error out with the message we've all been getting, so downloading in blocks of 850 images isn't a workaround.
Timeline error: [API] 34: Sorry, that page does not exist
Okay, so apparently the legacy API that Tweetdeck also used was quietly turned back on. Legacy Twitter is, probably, still gone (unless you somehow kept a tab open the whole time), but twMediaDownloader's bulk downloads now seem to work again. I can't check right now but, anybody wants to test this?
From what I can see, the 3 accounts on which I encountered issues with 850ish tweets appear to work again, and I'm no longer getting the api error.
(Video downloads are still broken though.)
Hi. Unfortunately, after trying on 3 different profiles with more than 10k images each, I get the same results as before. Download of images stops at around 850 images. I tested with different boxes (Images, Videos, etc.) checked and nothing seems to change. No error comes up; it just doesn't go any further than that number of downloaded images.
Odd, I just tried it on a profile with about 3300 images and it went all the way to the end.
Still getting this error, does anyone have a fix?
Sadly, Furyutei announced they're closing this and their other Twitter-related GitHub repositories tomorrow. Hopefully someone will fork them.
|
GITHUB_ARCHIVE
|
You are given a straight street with a number of homes on it. You have to place a fire hydrant somewhere on the street such that the total distance between each home and the hydrant is minimal. Explain how you would select the location of the hydrant and argue your solution's running time with respect to the number of houses.
1------2---3-----F----4---------------5----------6

Here D1 through D6 are the distances from houses 1-6 to the hydrant F, and D1+D2+D3+D4+D5+D6 must be minimal.
Consider a hotel that places visitors into rooms after hashing (simple uniform) their names and assigning them to the room given by their hash result. For example, if your name is hashed to 117, then you have to stay in room 117. However, if the room is busy, you are placed on a waiting list for that room. Each visitor is expected to spend 1 to 5 days in the room, with a uniform distribution. Assuming there are 10 rooms in the hotel, and 4 visitors arrive every day, what is the expected waiting time for a newly arriving visitor on day t?
Show that the notion of a randomly chosen binary search tree of n nodes, where each tree is equally likely to be chosen, is different from the randomly built binary search tree covered in the lecture. (Hint: Show that they are different for n=3.)
a. Consider a stack of fixed capacity k, where we copy the stack contents to another stack for backup purposes every k operations. For example, if k is 5, after 4 push and 1 pop operations, we copy all stack contents to another stack (assume the stack is implemented using an array and you are just copying the array contents). If the stack is full, the next push operation is ignored, and if the stack is empty, the next pop operation is ignored. In both cases, the cost is assumed to be 0. Show that the cost of n stack operations is O(n).
b. Now consider a stack of size 2k and solve the problem, again. You still make a backup every k operations.
The optimal location is determined by the median of the house locations on the street. There are two potential cases: if the number of houses is odd, the hydrant should be placed exactly at the median house; if it is even, any point between (and including) the two middle houses is optimal, since moving within that interval does not change the total distance. The median can be found in O(n log n) time by sorting the house positions, or in O(n) time with a linear-time selection algorithm.
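As an illustrative sketch (not part of the original solution), the median choice can be computed in Python. Sorting gives O(n log n); a linear-time selection algorithm such as median-of-medians would bring this to O(n):

```python
def hydrant_location(houses):
    """Place the hydrant at a median of the house positions.

    Sorting makes this O(n log n); a linear-time selection
    algorithm would give O(n).
    """
    xs = sorted(houses)
    n = len(xs)
    # For odd n the median house is optimal; for even n any point
    # between the two middle houses (inclusive) is optimal.
    return xs[n // 2] if n % 2 == 1 else xs[n // 2 - 1]

def total_distance(houses, f):
    # Sum of distances from every house to the hydrant location f.
    return sum(abs(x - f) for x in houses)

houses = [1, 2, 3, 7, 9, 11]
f = hydrant_location(houses)
assert total_distance(houses, f) == min(total_distance(houses, x)
                                        for x in range(0, 12))
```

For the even case above, any location between houses 3 and 7 achieves the same minimal total distance.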
Since there are 10 rooms in the hotel and 4 visitors are arriving each day, and the hashing is uniform, the expected visitor load per room per day is 4/10 = 0.4. Since each customer spends from 1 to 5 days, uniformly, the expected time spent per customer is (1+2+3+4+5)/5 = 3 days. Hence, each day, each room is loaded with 0.4 × 3 = 1.2 days of stay. At the end of each day, 1 day of stay is removed from each room, leaving an expected 0.2 days of load per room. After t days, the load accumulates to 0.2t days of wait. So, on day t a newly arriving customer is expected to wait 0.2t days before getting into his/her room.
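The arithmetic above can be double-checked with a short script (a sketch, using exact fractions to avoid rounding):

```python
from fractions import Fraction as F

rooms, visitors_per_day = 10, 4

# Expected visitors hashed to each room per day (simple uniform hashing).
load_per_room = F(visitors_per_day, rooms)       # 4/10 = 0.4 visitors
# Expected stay: uniform over {1, ..., 5} days.
expected_stay = F(sum(range(1, 6)), 5)           # 3 days
# Days of stay added to a room per day, minus the 1 day it serves.
surplus = load_per_room * expected_stay - 1      # 1.2 - 1 = 0.2 days

def expected_wait(t):
    """Expected backlog (waiting time) facing an arrival on day t."""
    return surplus * t

print(expected_wait(30))  # -> 6 (days of expected wait on day 30)
```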
For n=3, let's assume the keys are 1,2 and 3. With 3 keys, we have the following trees possible:
A         B         C         D         E

1         1         2         3         3
 \         \       / \       /         /
  2         3     1   3     1         2
   \       /                 \       /
    3     2                   2     1
So, out of 5 possible trees, each is chosen with probability 1/5.
However, if the input is randomized, then,
1-2-3: A
1-3-2: B
2-1-3: C
2-3-1: C
3-1-2: D
3-2-1: E
For each permutation, we get the tree shown above. One can easily see that the probability of getting tree C is not 1/5 as in the previous case; it is actually 2/6 = 1/3 with a randomly built BST.
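This can be verified by brute force; a Python sketch that builds the BST for each of the 3! insertion orders and counts the distinct shapes:

```python
from itertools import permutations
from collections import Counter

def _insert(node, k):
    # Immutable BST insert; a tree is (key, left, right) or None.
    if node is None:
        return (k, None, None)
    key, left, right = node
    if k < key:
        return (key, _insert(left, k), right)
    return (key, left, _insert(right, k))

def bst_shape(order):
    """Insert keys in the given order; return the resulting tree."""
    root = None
    for k in order:
        root = _insert(root, k)
    return root

counts = Counter(bst_shape(p) for p in permutations([1, 2, 3]))
# 5 distinct trees; the balanced tree (root 2) arises from 2 of the
# 6 insertion orders, so its probability is 1/3, not 1/5.
assert len(counts) == 5
assert counts[(2, (1, None, None), (3, None, None))] == 2
```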
a. The actual costs of push, pop, and copy are at most 1, 1, and k, respectively. The fact that push and pop can sometimes cost 0 doesn't change the following argument. We assign amortized costs of 2, 2, and 0 to these operations: the 2 for a push is 1 for the actual push plus 1 saved for the upcoming copy, and the 2 for a pop is 1 for the actual pop plus 1 saved for the upcoming copy. This way, every k operations we accumulate an additional k of payment to be used for the copy operation (even when there are fewer than k items in the stack), so the copy operation itself can be executed at no amortized cost. Considering n operations, the total amortized cost becomes O(n).
b. This time we will use amortized costs of 3, 3, and 0. Now 1 of the 3 pays for the actual push and 2 is reserved for the copy; the same holds for the pop operation. Therefore, every k operations we accumulate 2k of payment for the copy, which now costs at most 2k, and the overall running time is still O(n).
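As a sanity check (not part of the proof), a simulation under the stated cost model confirms the bounds of 2n for part (a) and 3n for part (b):

```python
import random

def total_cost(ops, capacity, k):
    """Actual cost of a sequence of stack operations where the whole
    stack is copied for backup (cost = number of items copied) every
    k operations. Ignored pushes/pops cost 0, as in the problem."""
    stack, cost = [], 0
    for i, op in enumerate(ops, start=1):
        if op == "push" and len(stack) < capacity:
            stack.append(0)
            cost += 1
        elif op == "pop" and stack:
            stack.pop()
            cost += 1
        if i % k == 0:          # periodic backup copy
            cost += len(stack)
    return cost

random.seed(0)
ops = [random.choice(["push", "pop"]) for _ in range(1000)]
for k in (3, 5, 8):
    assert total_cost(ops, k, k) <= 2 * len(ops)       # part (a)
    assert total_cost(ops, 2 * k, k) <= 3 * len(ops)   # part (b)
```

For example, 10 pushes with capacity k = 5 cost 5 (pushes) + 5 + 5 (two backups of a full stack) = 15, within the 2n = 20 bound.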
|
OPCFW_CODE
|
The March/April Issue of IEEE Software, as usual, is chock full of interesting articles on challenges and advances in software engineering. The topics in this issue range from the always popular topics of DevOps and security to the related but separate topic of release engineering.
One special addition to this issue is a special thanks to all those who participated in the reviewing efforts in 2017. Of course the reviewers help make IEEE Software the magazine that it is, so thank you from us all!
As with each issue, this issue includes a special focus topic: Release Engineering. The following articles are included in the March/April issue of Software on release engineering:
- "The Challenges and Practices of Release Engineering" by Diomidis Spinellis
- "Release Engineering 3.0" by Bram Adams, Stephany Bellomo, Christian Bird, Boris Debić, Foutse Khomh, Kim Moir, and John O'Duinn
- "Continuous Experimentation: Challenges, Implementation Techniques, and Current Research" by Gerald Schermann, Jürgen Cito, and Phillip Leitner
- "Correct, Efficient, and Tailored: The Future of Build Systems" by Guillaume Maudoux and Kim Mens
- "Continuous Delivery: Building Trust in a Large-Scale, Complex Government Organization" by Rodrigo Siqueira, Diego Camarinha, Melissa Wen, Paulo Meirelles, and Fabio Kon
- "Over-the-Air Updates for Robotic Swarms" by Vivek Shankar Varadharajan, David St. Onge, Christian Gub, and Gionvanni Beltrame
The articles "The Challenges and Practices of Release Engineering" and "Release Engineering 3.0" set the tone for the focus articles in this issue. The two articles provide some background on release engineering and discuss the state of the art in the field. Each of the other articles take a deeper dive into more specific aspects of release engineering. For example, in the article "Over-the-Air Updates for Robotic Swarms", the authors present a toolset for sending code updates over-the-air to robot swarms.
One topic that always seems to find its way into each issue of IEEE Software is agile software development. The following articles appeared in this issue on agile development:
- "Practitioners' Agile-Methodology Use and Job Perceptions" by Wenying Sun and Cecil Schmidt
- "Making Sense of Agile Methods" by Bertrand Meyer
- "Agility, Risk, and Uncertainty, Part 1: Designing an Agile Architecture" by Michael Waterman
In "Practitioners' Agile-Methodology Use and Job Perceptions", the authors report on a survey conducted to better understand practitioner perceptions of agile methodology use. Similarly, "Making Sense of Agile Methods" provides insights into agile methodologies based on personal experiences of the author.
Wanna know more? Make sure you check out the March/April issue of IEEE Software today!
IEEE Software Blog
The blog was a little light for March and April (as many of us had deadlines we were meeting). But of course we always try to make sure there's some kind of knowledge sharing going on!
For those who are also a little behind on things and want a quick way to catch up on the previous issue (January/February), there's a summary posted on the blog.
In the post "Which design best practices should be taken care of?", the authors report on the results from a survey sent out to learn more about the importance of design best practices. The post also reports on some of what was found to be the more important design concerns, such as code clones and package cycles. For those interested, there's also a reference to the authors full research article on this work.
In the other April post, titled "Efficiently and Automatically Detecting Flaky Tests with DeFlaker", the authors present a new approach, called DeFlaker, that can be used to detect flaky tests (without having to re-run them!). The post also includes some details on the evaluation of DeFlaker and other relevant resources (such as a link to the project's GitHub page and full publication) for those interested in learning more about this tool.
The SE Radio broadcasts for this issue were also a little light, but of course no less interesting! This issue is a technical one, with all discussions focused on various technologies. Nicole Hubbard joined SE Radio host Edaena Salinas to talk about migrating VM infrastructures to Kubernetes -- for those of you (like me) who have no clue what that is, they talk about that too!
Nate Taggart spoke with Kishore Bhatia about going serverless and what exactly that means.
And last but certainly not least in this issue, Péter Budai sat down with Kim Carter to talk about End to End Encryption (E2EE) and when it can (and should) be used.
Also, for those looking for some extracurriculars to fill their free time, SE Radio is looking for a new volunteer host! For more information, see the SE Radio website: http://www.se-radio.net/2018/03/seeking-a-new-volunteer-host/
|
OPCFW_CODE
|
Three reasons learning web3 programming will make you a better web2 developer
Updated: Feb 7
1. Gas Optimization is a transferable skill
At first it may seem funny to invest so much energy into saving 256 bits of storage space. This is 2022; storage is supposed to be cheap.
Yes, this is true. But storage and bandwidth are not cheap at scale.
When I consult for startups, many of them are surprised by their bandwidth costs in the cloud. When probed, it's clear that cutting down on the size of the data is usually an afterthought.
Now of course, obsessing over saving bytes is not necessarily a good use of time in a traditional cloud application. However, the training one gets in saving space in a blockchain application makes space saving in a traditional web2 application mentally automatic.
Gas optimization forces you to be obsessed with the details. Representing your data compactly becomes second nature after some practice.
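To illustrate the habit (a Python sketch with an arbitrary, made-up field layout, not any real contract's), compact representation often means packing several small fields into one 256-bit word, the way Solidity packs struct members into a single storage slot:

```python
# Pack three fields into one 256-bit word, the way a packed Solidity
# struct fits into a single storage slot.
# Hypothetical layout: amount (128 bits) | timestamp (64) | flags (64)

def pack(amount, timestamp, flags):
    assert amount < 2**128 and timestamp < 2**64 and flags < 2**64
    return (amount << 128) | (timestamp << 64) | flags

def unpack(word):
    # Reverse the layout: shift and mask each field back out.
    return word >> 128, (word >> 64) & (2**64 - 1), word & (2**64 - 1)

word = pack(10**18, 1_700_000_000, 0b101)
assert unpack(word) == (10**18, 1_700_000_000, 0b101)
```

One word written instead of three is exactly the kind of saving that is invisible in a single request but adds up at scale.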
Saving 1kb of data might not seem like much, but saving 10% of egress will be a noticeable cost saving.
2. Thinking like an attacker
Security is almost an afterthought when it comes to training web2 developers. Hackers feel invisible, to the point of seeming non-existent.
However, web3 hacking is always visible due to the nature of the blockchain, and the fact that we get regular publications about it (rekt.news is a great resource). This means the average web3 developer has the hacker more on his or her mind when developing applications. Having a healthy amount of paranoia while programming makes us program more safely.
Programming smart contracts forces us to think things like “shouldn’t this be access protected?” or “What if the data we get back is flawed or tampered with?”
Again, the intent is not to slow down development by worrying about security. It’s to program our brains to notice security flaws automatically.
3. Grappling with the fundamentals again
Being forced to solve the same problem in a new way expands our thinking.
Many people experience this when using functional programming for the first time. Although functional programming and imperative programming ultimately accomplish the same thing, being forced to model our problem in an entirely new way enables us to look at the problem from a wider perspective. Even though many don't adopt functional programming completely, it's hard not to use map and filter after you learn about them.
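For example, a tiny Python illustration of that habit: instead of accumulating results in a loop, describe the transformation directly.

```python
orders = [{"total": 120, "paid": True},
          {"total": 40, "paid": False},
          {"total": 75, "paid": True}]

# Imperative habit: build a list inside a loop.
# Functional habit: describe the selection and the projection.
paid_totals = list(map(lambda o: o["total"],
                       filter(lambda o: o["paid"], orders)))
assert paid_totals == [120, 75]
```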
We take for granted that function execution is not atomic, that data can be hidden, and that data doesn't last forever by default. These assumptions hold in web2, but not in web3.
But what if the fundamentals are changed?
Being forced to use an alternative set of first principles helps us understand our unexplored assumptions about how to model computation and thus understand them better.
If this interests you, apply to our bootcamp! Even if you don’t plan on fully switching to blockchain development, your general programming skills will be enhanced.
|
OPCFW_CODE
|
I've downloaded and modified the L4T Driver Package (BSP) and Sample Root Filesystem. Now I want to move these folders to another machine. I tried to copy the entire folder to a USB drive, but it doesn't work and is clearly not a good method. Is there an existing tool or procedure to zip these folders back into the form they were in at the beginning?
Bundle L4T Driver Package to move to another machine
FYI, the file permissions and ownership must be preserved. Does your USB drive have a Linux filesystem? If it is a Windows filesystem, e.g., VFAT, then there is no possibility of this being preserved. You can see filesystems and types (with the thumb drive plugged in) via "df -H -T". You could replace the partition with an ext4 partition as one workaround if it is VFAT, one of the FAT systems, or NTFS.
Alternately, you could use a correct rsync command to package the content's metadata into the archive instead of a direct copy of files/directories. Then you'd copy a single archive file to the thumb drive, and unpack at the other end.
Regardless of direct copy or archive container copy you would have to be sure to use commands which exactly preserve permissions and numeric IDs.
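A sketch of such commands (paths are illustrative; a stand-in directory is created so the snippet runs anywhere, and on a real system you would run tar with sudo so root-owned rootfs files keep their ownership):

```shell
SRC=Linux_for_Tegra        # hypothetical path to your BSP tree
mkdir -p "$SRC/rootfs"     # stand-in tree for illustration only

# Create the archive: -p preserves permissions, --numeric-owner keeps
# raw UID/GID values instead of translating names between machines.
tar --numeric-owner -cpzf l4t_backup.tar.gz "$SRC"

# On the destination machine, unpack the same way:
mkdir -p dest
tar --numeric-owner -xpzf l4t_backup.tar.gz -C dest

# Direct-copy alternative with rsync (-a preserves permissions and
# ownership, --numeric-ids keeps the raw IDs):
#   sudo rsync -a --numeric-ids Linux_for_Tegra/ /mnt/usb/Linux_for_Tegra/
```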
Thanks @linuxdev. I will take that into consideration. I also have other questions.
- Should I package the LinuxforTegra folder with the rootfs folder inside? Or should I package these two folders separately?
- Should I ever manually modify the rootfs folder?
- What is the purpose of ./apply_binaries.sh?
I'd just package them together. Boot-related content needs to be compatible with the rootfs content (release versions). On the other hand, stock rootfs content is available for separate download. Just to clarify:
- The actual flash software is the “driver package” (named as such because it understands a recovery mode Jetson, which is in turn a custom USB device).
- If all you do is to command line flash, then you would download these:
- Driver package.
- Sample rootfs.
- Then you’d:
- Unpack the sample rootfs as root in the rootfs/ directory, thus preserving permissions and ownerships.
- Run "sudo ./apply_binaries.sh" once to add the NVIDIA content to the sample rootfs. This is when it becomes "L4T" and not just Ubuntu. The sample rootfs is 100% pure Ubuntu, and it is distributed and licensed as such. When the end user runs "apply_binaries.sh" as root (with sudo) to add this content, it is the end user accepting the additional content to what used to be a pure Ubuntu rootfs.
- Using JetPack/SDKM is just a front end. Using this will automatically download a matching sample rootfs and driver package, then automatically run apply_binaries.sh. When done manually, the end user has to do this. JetPack also has the ability to install optional software, which the driver package does not do, but that step only occurs after the flash completes and the Jetson fully reboots. Using JetPack means you don't need to manually unpack or install either the driver package or the sample rootfs. Regardless of whether it is done by the end user or JetPack/SDKM, apply_binaries.sh only needs to run once.
During a flash the "rootfs/" content is almost an exact match to the root partition image. The "almost" part comes because arguments for the particular module and carrier board type will change a few details. Those details are reflected in flash, probably adding or editing the particular kernel, device tree, and extlinux.conf configuration in "rootfs/boot/" prior to generating an image. Everything else is untouched and used verbatim in the final flash.
So long as it isn't one of the things being modified prior to generating a partition image, it is quite useful to modify content in "rootfs/". For example, user accounts can be added, custom aliases can be put into "/etc/skel/", actual end user home directories can be added, ssh keys can be added, network settings can be customized, and so on.
Incidentally, you can also loopback mount a raw clone on "rootfs/", and use that for generating a partition image. This would modify the clone's "/boot" content, which you might not want, but otherwise it will be the clone (e.g., if you installed and ran all updates, then future flashes this way will also have those updates). If you copy a clone to "Linux_for_Tegra/bootloader/system.img" and use the command line "-r" option to reuse the image, then you get a 100% verbatim exact match (no "/boot" content edit). You would still get the other content related to boot partitions installed.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.
|
OPCFW_CODE
|
Over our past two articles, we’ve taken a look at the first half of the four basic concepts that drive a successful reptile retailer. With this article, we move from selection and inventory to our third concept: presentation.
In our observation of pet stores throughout the nation, we’ve seen some phenomenal reptile displays and some displays that simply defy all sense of professionalism. Your ability to create, maintain and present attractive reptile displays has a direct effect on your ability to sell those reptiles.
A well-presented reptile section can easily draw would-be customers into your store. Even those that aren’t necessarily interested in reptiles can find themselves making their way through previously-unexplored aisles of your store if they see a display that’s pleasing to the eye. By putting in some extra time and effort on preparing your reptile displays, you ensure that your store has its best face forward.
Form and Function
Your displays need to provide three primary functions. First, reptile displays must be attractive. Basic glass cages with little decoration will not capture the eye of potential customers. Fill your cages with light, attractive substrate, hiding places and decorations that not only provide your animals a quality home, but also look great to your customers.
Based on your reptiles’ individual needs, you may be able to create themed enclosures. One such cage might be based on the American Southwest, while another might resemble a tropical rainforest. These varied, colorful displays catch the eyes of customers and immediately start them thinking about potential displays of their own.
Second, the location of these cages should factor greatly into your decision-making. Consider placing these displays to the center or even the front of your store. If you’re going to invest the time and effort to make your displays impressive, show them off. That set of eye-catching displays will encourage your customers to move deeper into your store, especially if the elements used to make that display are on sale nearby.
You should show off the products for sale in your store within the displays for your reptiles. If a hide, a water bowl or a type of substrate is on sale within your walls, your customers should be able to see that product in use. In fact, consider utilizing the very cages you already stock for sale when preparing reptile displays. Commercial caging from brands like ZooMed and ExoTerra provide a solid baseline from which you can create those attractive, striking displays.
Think of these in the same manner that you might think of displaying aquarium display elements. When your reptile display can prompt a customer to think, “That would be neat to have in my turtle’s cage,” it’s likely that you’ve just made a sale.
Third, you must keep the needs of your individual reptiles in mind. While this starts with the basic care requirements of a given animal, think of the animals' habits when placing the display itself. Turtles and tortoises, for instance, should generally be placed closer to ground level. More arboreal creatures, such as tree frogs and chameleons, should occupy higher shelves. Your biggest selling reptiles, such as bearded dragons, are typically best placed on middle shelves so that they can be easily seen.
Your staff should be involved in creating these displays from their very inception. By including your staff in display planning, you gain two vital benefits. You ensure an immediate buy-in from your staff as you grant them a degree of creative control over the displays. Also, you ensure that your staff is educated about the animals in your care and the items you have for sale.
Maintenance Is Key
Once you've set up your spectacular reptile displays, the difficulty then falls to maintenance. Daily upkeep should be part of your staff's protocol, but reevaluate what the word "daily" truly entails. The best, most successful pet stores never allow their displays to fall into disrepair or filth. If a cage is dirty, it gets cleaned immediately. If a water or food dish is dirty, it gets scrubbed out and refilled.
Animals in dirty cages are animals that don’t get sold. Keep those displays spotless and your sales will stay significantly higher. Again, your willingness to go above and beyond in maintaining your displays can be a make or break factor in selling animals and getting your customers to return.
Signs of Success
Finally, don’t forget about signage. Many retailers experience difficulties in selling elements like live food because they don’t advertise the fact that they have live crickets, mealworms or the like. Be sure that both your reptile displays and your subsidiary materials have clear, focused signage that directs customers toward those items that aren’t available in your local big-box store. When you can clearly dictate the message that your store carries items unavailable elsewhere, you ensure that business returns to your store time and again.
|
OPCFW_CODE
|
Because you can doesn't mean you should
Posted 11 January 2011 - 10:58 AM
Posted 11 January 2011 - 11:00 AM
Posted 11 January 2011 - 11:26 AM
Posted 11 January 2011 - 11:59 AM
Crack not only kills brain cells, it kills cars as well.
Posted 11 January 2011 - 01:25 PM
I'm scrolling side to side,up and down to see the whole picture.
At first I don't notice what is odd,
then I almost fell over.
Posted 11 January 2011 - 01:29 PM
Posted 11 January 2011 - 01:49 PM
Posted 11 January 2011 - 01:50 PM
Posted 11 January 2011 - 02:17 PM
maybe it's a Canadian thing?
Winters are too long, giving them a lot of garage time to come up with rides like these
Posted 11 January 2011 - 02:21 PM
Posted 11 January 2011 - 02:38 PM
I'm sure you've heard of the Ford Mustang.........this is the Ford Discust-ang.........
Besides the overuse of trinkety accessories, I like this car a lot. Pretty cool to see a wide-body Fox Mustang. Nissan GT-R badges on the door panels though
I gotta agree with Dave here, it's not too bad. I also would have left off the Datsun badges, plus used LX tail lights, not painted the headlights (I do like the use of the T-Bird headlights though!), reworked the lower portion of the front bumper so it looked better, got rid of the scoop and side vents, reworked and/or completely eliminated the stripes, and painted the grills flat black.
Posted 11 January 2011 - 02:40 PM
and another car from around here
Hey adam, I see that one all the time up here also. I mean it might not be done to your taste, but inside and out it's VERY nicely done, one of the best "customized" cars I've seen around here. In person the car actually looks a lot better.
Edited by Jared Roach, 11 January 2011 - 02:45 PM.
Posted 11 January 2011 - 03:23 PM
This one was posted on a Cherokee forum today. It looks like a lot of effort was put into it, but it's just sooooooo wrong at the same time
Saw this last winter on my way to work...and yes, there was a guy driving it!
|
OPCFW_CODE
|
[3/21] Presented at the FEniCS'18 conference Oxford University
on Designing a compiler to visualize FEM
Advisee presenting Diderot: A Domain-Specific Language for Visualizing FEniCS Functions
[3/12] Presented at the Dagstuhl seminar (loop optimization) in Germany
on Rewriting with an index-based Intermediate Representation
[3/5] Invited talk Washington and Lee
[3/2] Invited talk UMass
[2/28] Invited talk at Galois
[2/20] Invited talk at Gamma Tech
Paper Compiling Diderot: From Tensor Calculus to C submitted to TOMS
[2/12] Invited Talk at TCNJ
[2/9] Invited talk at Bucknell College
[2/5] Invited talk at Wellesley College
My academic "job talk" slides.
[1/29] Invited talk at National Renewable Energy Lab
Paper Rendering and Extracting Extremal Features in 3D Fields submitted to EuroVis
I am a postdoctoral researcher at the University of Chicago, in the systems group. I have the pleasure of working alongside the wonderful John Reppy and Gordon Kindlmann.
I earned my PhD in Computer science at the University of Chicago in Spring 2017.
I work on addressing computational needs with a new programming language, Diderot. I am interested in language design, compiler optimization, automated testing, and tools for scientific visualization.
I have been a research intern at the Imperial College of London, and a software intern with a Chicago start-up company.
I have worked as a lab teaching assistant for a range of courses at the University of Chicago.
I have been awarded the GAANN fellowship and have been supported as a research assistant.
Prior to entering graduate school, I earned my B.S. degree from Gettysburg College, where I graduated Cum Laude and with Honors in Physics.
I currently live in Chicago and enjoy the city. In my free time I like to bike along the lake, find new interesting restaurants, travel, paint, and swim.
The Diderot Project
My work is in the development of Diderot, a domain-specific language for scientific visualization and image analysis.
A good place to start reading about the Diderot language is with our early PLDI paper and the slides that describe the programming model.
Deep details about the design, implementation, development, and testing of the language are available in my dissertation
Applications of the language are demonstrated with these examples.
The official (not regularly updated) Diderot website hosts the software source code, written in SML.
For a quick summary of our more recent work, check out these slides and look for further details described under research.
charisee.chiw at gmail.com
|
OPCFW_CODE
|
edit: having a community dedicated to leftists only can be a bad idea in that it can ensure your beliefs are never questioned. I have thought of myself as a socialist and I have thought of myself as an anarcho-capitalist; I don't believe in either anymore. I think if radical views go unchecked they might cause problems. Although I am a capitalist now, being confronted by socialists has made me aware of capitalism's deep flaws. When I considered myself a communist (17-year-old me), opposing views really changed my mind. So that's the ideological diversity I am talking about.
I love the outlook of lemmy; I think the design is decent and simplistic. But one thing I can't seem to get over is the fact that almost everyone here seems to think the same politically. Why do you guys think this is? I know this is a community of leftist FOSS enthusiasts, but I hope everyone here is aware that it is driving many people away from adopting it.
A loosely moderated place to ask open-ended questions
If your post is
it’s welcome here!
It's an interesting point, but what's the alternative? Activity-based ranking like forums and imageboards might be appealing, although it is worth acknowledging that they naturally bias towards controversy, for better or worse. Probably worse for a 'centrist', which is a relative non-term, but I assume it means a pro-status-quo liberal who doesn't like radical ideas.
You can actually set that up on Lemmy’s front page (not comments) by using that New Comment sorting, but that’s not a default anywhere so its not relevant unless an instance chooses it as a default.
Anyway, groupthink is an interesting problem in the case of recuperation and ‘sanewashing’ of ideas. reddit’s /r/antiwork is a prime example: it was initially anarchists who wanted to abolish the concept of work (not necessarily labor) as we know it, but it became popular and reddit’s pro-capitalism pro-liberal groupthink got to the point where their founding(?) moderator’s views, congruent with the resources in the sidebar, caused massive outrage. Like you said, dissent and discussion is overwhelmed by funny memes and fulfilling anecdotes.
I agree that pointing out the problem is far easier than finding a good solution. I don’t think activity-based sorting is much better since, as you said, that just tends to promote the most outrageous content. Facebook and Twitter-like platforms suffer from that issue more than Reddit-like platforms do. In short, I don’t have a good solution and I acknowledge the benefits of the upvote/downvote system (such as outrageous and irrelevant content being filtered out by the community without the need for as much active moderation), but it is a poor tool for fostering civil, ideologically diverse communities.
|
OPCFW_CODE
|
- How do I open another mailbox in Outlook 365?
- How many shared mailboxes can I have in Office 365?
- Do Office 365 shared mailboxes have calendars?
- How do I open a shared folder?
- How do I view a shared folder in Outlook 365?
- How do I open public folders in Outlook 365?
- How do I give permission to a shared mailbox in Office 365?
- How do shared mailboxes work in Office 365?
- How many shared mailboxes can you have in Outlook?
- How do I manage a shared mailbox?
- How do I open a shared folder in Outlook?
- How do I add a shared folder in Outlook app?
- What does download shared folders do in Outlook?
- Can you send email from a shared mailbox in Office 365?
- Do shared mailboxes have owners?
How do I open another mailbox in Outlook 365?
In the Navigation bar on the top of the Outlook Web App screen, click on your name.
A drop-down list will appear.
Click Open another mailbox.
Type the email address of the other mailbox that you want to open, and click Open.
How many shared mailboxes can I have in Office 365?
Storage limits:

| Feature | Microsoft 365 Business Basic | Office 365 Enterprise E3 |
| --- | --- | --- |
| User mailboxes | 50 GB | 100 GB |
| Archive mailboxes | 50 GB | Unlimited |
| Shared mailboxes | 50 GB | 50/100 GB |
| Resource mailboxes | 50 GB | 50 GB |

4 more rows • Aug 31, 2020
Do Office 365 shared mailboxes have calendars?
Shared Office 365 mailboxes are mailboxes that can have more than one user. No separate username is needed for using them; instead, the user logs in using their own username. A shared mailbox also includes a shared calendar.
How do I open a shared folder?
Right click on the Computer icon on the desktop. From the drop down list, choose Map Network Drive. Pick a drive letter that you want to use to access the shared folder and then type in the UNC path to the folder. UNC path is just a special format for pointing to a folder on another computer.
How do I view a shared folder in Outlook 365?
Accessing another person’s folder(s) using OWA:
1. Login to OWA.
2. Click Mail to open your mail folders.
3. Right click on your name in the folder list.
4. Choose Add Shared Folder.
5. Type the name of the person whose folder you wish to open and click Add.
6. The folder will appear at the bottom of your folder list.
How do I open public folders in Outlook 365?
In order to access Public Folders in OWA, perform the following steps:
1. Right-click Favorites and click Add public folder.
2. Expand All Public Folders, select the folder and click Add.
How do I give permission to a shared mailbox in Office 365?
In the admin center, go to the Users > Active users page. Select the user you want, expand Mail Settings, and then select Edit next to Mailbox permissions. Next to Read and manage, select Edit. Select Add permissions, then choose the name of the user or users that you want to allow to read email from this mailbox.
How do shared mailboxes work in Office 365?
A shared mailbox in Office 365:
- Is free and does not require a license, but every user that accesses the shared mailbox must be assigned an Office 365 license.
- Cannot be accessed by users with an Exchange Online Kiosk license.
- Can be used to store emails sent to and received by the shared mailbox.
More items…
How many shared mailboxes can you have in Outlook?
A computer that has slower hardware, a large mailbox, and slow network connection may not be able to open more than five shared folders or mailboxes. However, a faster computer that has a smaller mailbox and a fast network connection may be able to open 10 or more shared folders or mailboxes.
How do I manage a shared mailbox?
4 Best Practices to Manage a Team Shared Mailbox:
1. Create a tagging system.
2. Set up distinct folders.
3. Use your filters.
4. Don’t try to do everything alone.
How do I open a shared folder in Outlook?
To open other folders follow these steps:
1. Go to File > Info > Account Settings > Account Settings.
2. Double-click your email address, click More Settings > Advanced tab, then click Add. …
3. Enter the user’s name, click OK, select the user if several are found, then click OK > Next > Finish.
How do I add a shared folder in Outlook app?
Add a shared mailbox to Outlook mobile:
1. Sign in to your primary account in Outlook for iOS or Android.
2. Tap the Add Account button in the left navigation pane, then tap Add a Shared Mailbox.
3. If you have multiple accounts in Outlook Mobile, select the account that has permissions to access the shared mailbox.
What does download shared folders do in Outlook?
If Outlook is configured to download shared folders, the contents of the shared folders are stored in your local Offline Outlook Data (.ost) file. If the shared folders contain many items or large attachments, your .ost file size may grow significantly.
Can you send email from a shared mailbox in Office 365?
Send mail from the shared mailbox:
1. Click From in the message, and change to the shared email address.
2. If you don’t see your shared email address, choose Other email address and then type in the shared email address.
3. Choose OK.
4. Finish typing your message and then choose Send.
Do shared mailboxes have owners?
Rights to the shared mailbox are inherited from the group. Group members are users of the mailbox. Owners of the group are able to add and delete users from the shared mailbox.
|
OPCFW_CODE
|
UNDERFIRE RADIO IS BACK!!!!
Posted 15 March 2006 - 10:13 PM
So after a long wait, we finally got something recorded. Thanks to the geniuses at 1and1, we don't have our domain anymore, so please change your bookmarks to this.
You can add us to your podcast with this.
Also, to download our files, we are hosting them at archive.org for the time being so please download Episode 1 at this address.
P.S. For those of you who don't remember us, or weren't around when we were, Underfire Radio is a production made by Bizurke and myself that is all about technology. It's not a hacking or phreaking show, just two guys chatting about technology in daily life. So check it out!
Posted 15 March 2006 - 10:18 PM
Posted 15 March 2006 - 10:21 PM
Posted 15 March 2006 - 10:23 PM
Posted 15 March 2006 - 10:24 PM
Posted 19 March 2006 - 03:22 AM
Posted 19 March 2006 - 09:14 AM
Edited by TelcoBob, 19 March 2006 - 10:39 AM.
Posted 19 March 2006 - 02:37 PM
Posted 20 March 2006 - 02:35 PM
Hurrah! UnderFire Radio makes me erect! (In a non-gay way. Totally non-gay.)
i·ro·ny Pronunciation Key (ī′rə-nē, ī′ər-nē)
n. pl. i·ro·nies
1. The use of words to express something different from and often opposite to their literal meaning.
Edited by Scheda, 20 March 2006 - 02:35 PM.
Posted 20 March 2006 - 03:06 PM
Etymology: Middle English diffinicioun, from Middle French definition, from Latin definition-, definitio, from definire
1 : an act of determining; specifically : the formal proclamation of a Roman Catholic dogma
2 a : a statement expressing the essential nature of something b : a statement of the meaning of a word or word group or a sign or symbol <dictionary definitions> c : a product of defining
3 : the action or process of defining
4 a : the action or the power of describing, explaining, or making definite and clear <the definition of a telescope> <her comic genius is beyond definition> b (1) : clarity of visual presentation : distinctness of outline or detail <improve the definition of an image> (2) : clarity especially of musical sound in reproduction c : sharp demarcation of outlines or limits <a jacket with distinct waist definition>
Posted 21 March 2006 - 10:34 PM
Posted 26 March 2006 - 06:33 PM
Posted 26 March 2006 - 10:47 PM
Posted 29 March 2006 - 07:58 PM
I was young and dumb.
Posted 30 March 2006 - 09:19 PM
Posted 07 April 2006 - 11:19 PM
Posted 08 April 2006 - 12:16 AM
BinRev is hosted by the great people at Lunarpages!
|
OPCFW_CODE
|
UC Davis PHEV
Electric vehicle budget calculator
With the UC Davis Plug-In Hybrid Electric Vehicle Research Center (PHEV), I helped build a web application for users to explore their daily commute expenses with electric and hybrid vehicles and compare their results with different vehicles.
About the project
The Electric Vehicle Explorer is an application built for a consumer interested in purchasing an electric vehicle. The application pulls data about all vehicles and allows the user to compare operating costs of 4 different vehicles at a time. A user can map out their commute and provide inputs on their charging options to understand whether a vehicle will fit into their lifestyle and what the costs would be. An unreleased “trip” feature would allow the user to build various multi point trips to explore their options outside of their daily commute. They might use this for things within their regular routine or to consider whether an electric vehicle would allow them to take a road trip.
About the team
This project was part of a part-time role I held with the UC Davis Plug-in Hybrid Electric Vehicle Research Center. My role was the designer and front-end developer, and I worked closely with a back-end developer and a product manager.
About the work
When I joined this project, work was already underway and a basic layout to the application had been set. We had no real process in place, and really we just tackled different features as we went. My back-end counterpart would work through the implementation of a feature, and then I would work through the design, all in code. While I’ve noted what I would change about this project later, it’s important to keep in mind the ad-hoc method with which we were building it given that none of us had really worked on this sort of project before. It’s clear now how much our process would have benefitted from a more defined UX process outside of the code, and more time spent focused on usability.
Going back to the drawing board
One of my first steps after onboarding was to pull my team away from our laptops and in front of a white board to rethink the layout and the information we provided. We settled on this layout, which now included a bar at the bottom of the page to convey how much charge a user had used towards their car’s maximum.
Building out the user inputs
A major challenge of this project was working through the different inputs a user could provide and the information we were giving back to them. Given the large number of inputs we were asking for, we kept the information we returned to what we considered “enough”. The screen above shows the full amount of information a user received after giving us information about their trip.
The settings menu allows users to change gas and electricity prices, though we set generic defaults based on live data we pulled. Throughout the project I tried to keep instructions as clear as possible, but there are many cases where steps could have been clearer.
The car manager allowed a user to change the cars they were comparing. The program starts with 4 cars already selected. Our original thought was that we wanted users to see some data as soon as possible, and then start to explore the settings and features. In retrospect, I think it would have been interesting to test more of a “flow” onboarding experience that walks the user through some of the initial input we need.
Breaking out the steps
We had attempted this originally by starting with a “Home” screen that provided no information and simply asked the user for their home location. The benefit of this was that we limited the information we asked for up front. But it also made it awkward to change the home location later, because the user had to navigate back to this screen via the top menu.
Creating an introduction and help tool
We also created an “introduction” modal with slides on how to use the tool. Various parts of the tool included (?) icons that would open up a specific slide in this introduction and explain how to use that area.
What would I do differently?
Given that this was one of my first UX design projects, I’ve got a laundry list of things I would do differently. Most importantly, I would have forced myself to step away from the code more and work on sketching designs on paper or through mockups. I think this would have made a significant improvement to the quality of the work in the end because it would have forced me to think through layout and usability before I was deep in HTML and CSS. This also would have allowed me to try out multiple versions of a design and really weigh the pros and cons.
If the project had more resources, I also would have done more formal user testing than the ad-hoc testing I had done on my own. I think this project would have made a great candidate for remote, unmoderated usability testing. Our biggest concern was the amount of settings and options we were offering to the user, and that was the main reason the “Trip builder” screen didn’t make it into the MVP. Unmoderated testing would have given us some low-cost, quick feedback on how well users could understand the information we were giving.
Overall, I think the main thing I would change would be to age myself 10 years and try this project again! There’s so much I’ve learned that I think could really make this an awesome and easy to use tool that can make a major impact on how we evaluate an electric vehicle purchase.
Explore other projects
Create a pattern library for an online software marketing platform to improve the quality of the product and reduce friction between designers and engineers.
Engineering training tools
Conduct interviews to gain a better understanding of how Google engineers use the internal resources available to them and redesign those tools to better fit their needs.
|
OPCFW_CODE
|
Tuples and strings
Python has two more list-like data types that are very important to understand. A tuple is a structure that is like a list, but is not mutable. You can create fresh tuples, but you cannot modify the contents of an existing tuple or add components to it. A tuple is typically written like a list, but with round brackets instead of square ones:
>>> a = (1, 2, 3)
In fact, it is the commas and not the parentheses that matter here. So, you may write
>>> a = 1, 2, 3
>>> a
(1, 2, 3)
and still get a tuple. The only tricky thing about tuples is making a tuple with a single component. We could try
>>> a = (1)
but it does not work, because in the expression (1) the parentheses are playing their standard grouping role (and, in fact, because parentheses do not create tuples). So, to create a tuple with a single component, we have to use a trailing comma:
>>> a = 1,
Tuples will be very important in cases where we are using structured objects as 'keys', that is, to index into another data structure, and where inconsistencies would occur if those keys could be changed.
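As a minimal sketch of that use (the variable names here are invented for illustration), a tuple can serve as a dictionary key where a mutable list cannot:

```python
# Tuples are immutable and hashable, so they can index into a
# dictionary; lists are mutable and therefore cannot be keys.
grid = {}
grid[(0, 0)] = 'origin'      # a tuple key works
grid[(2, 3)] = 'treasure'

print(grid[(2, 3)])          # looks up the value stored under (2, 3)

try:
    grid[[2, 3]] = 'oops'    # a list key raises TypeError
except TypeError:
    print('lists cannot be dictionary keys')
```

If the key could be mutated after insertion, the dictionary's internal bookkeeping would no longer match it, which is exactly the inconsistency immutability prevents.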
An important special kind of sequence is the string. A string may almost be thought of as a tuple of characters. The question of exactly what constitutes a character, and how characters are encoded, is complicated, because modern character sets include characters from nearly all the world's languages. We will stick to the characters we can type easily on our keyboards. In Python, you may write a string with either single or double quotes: 'abc' or "abc". You can index into it as you would a list:
>>> s = 'abc'
>>> s[0]
'a'
The only slightly strange thing is that s[0] is itself a string: Python has no separate data type for a single character, so indexing into a string yields a one-character string.
We will frequently use + to concatenate two existing strings to make a new one:
>>> to = 'Jody'
>>> fromP = 'Robin'
>>> letter = 'Dear ' + to + ",\n It's over.\n" + fromP
>>> print letter
As well as using + to concatenate strings, this code explains several other small but important points:
- You may put a single quote inside a string that is delimited by double-quote characters (and vice versa).
- If you want a new line in your string, you can write \n. Or, if you delimit your string with triple quotes, it can span multiple lines.
- The print statement can be used to print out results in your program.
- Python, like most other programming languages, has some reserved keywords that have special meaning and cannot be used as variable names. In this example we wanted to use from, but that has a special meaning in Python, so we used fromP instead.
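The quoting and newline rules above can be condensed into one short sketch (the variable names are invented for illustration):

```python
a = "It's fine"    # a single quote inside a double-quoted string
b = 'one\ntwo'     # \n embeds a newline in the string
c = """one
two"""             # a triple-quoted string spans lines literally

print(a)
print(b == c)      # both strings contain the same embedded newline
```

The comparison holds because the literal line break inside the triple-quoted string is the same character that \n denotes.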
|
OPCFW_CODE
|
Data Liquidity and Systems Interoperability
How to automate alignment and management of complex heterogenous data and systems?
Data integration and analytics is a bottleneck for solving our greatest challenges, from doing science to creating general artificial intelligence and everything in between. The demand for integrated data is indicated by the number of startups that focus on nothing more than collecting well-aligned datasets of interest and monetizing specialized queries. Well-aligned, quality datasets are the gold mine for endeavors involving inherently heterogeneous data, such as drug discovery, complex design, sociological research, and so on. The presence of a multitude of data formats and standards makes any simple question, such as "get me a list of all the world's dogs", an insurmountable quest for yet another startup focused on that specific domain. The existing solutions, such as linked, ontology-aware data formats, are insufficiently flexible and rich to be convenient for defining records with multi-vocabulary fields drawn from arbitrary ad-hoc vocabularies, and they lack support for definitions of value types, callable object interfaces, and modification permissions — the features that would enable objects to retain their properties even after decoupling from the data management systems that originated them.
Currently widely known solutions (such as Linked Data) are not entirely well suited to the problem, as they require large amounts of data to be serialized in the same format — which is never the case in an ever-diversifying world — and there is no standard way to embed schemas, permissions, and other context data into data items, which is necessary to make them reusable in queries.
By combining RDF-based SPARQL (for alignment) with OAuth2 (for permissioning) and a standard to securely encrypt data about query origin context (such as query origin identity keys, cookies, IP addresses, and definitions of the schema versions of the resources the data came from), it may be possible to approach the desired property: retaining the ability to reuse data items as objects in the context of arbitrary programming languages, without the need to write custom integrations. However, this seems not to have been done, and there may be better solutions to the problem.
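As a rough illustration of what "data items that carry their own context" could look like — every field name, URL, and the can_read helper below is a hypothetical sketch, not an existing standard — a record might travel together with its schema reference, permissions, and origin:

```python
# Hypothetical self-describing data item: the value is bundled with
# its schema reference, permission lists, and origin context, so a
# consumer can interpret and reuse it after it leaves the system
# that created it.  All names and the URL are invented.
record = {
    "value": {"name": "Rex", "species": "dog"},
    "schema": "https://example.org/schemas/animal/v2",
    "permissions": {"read": ["public"], "write": ["owner"]},
    "origin": {"system": "registry-a", "retrieved": "2021-01-01"},
}

def can_read(item, principal):
    """Consult the embedded permission list rather than a central server."""
    allowed = item["permissions"]["read"]
    return "public" in allowed or principal in allowed

print(can_read(record, "anonymous"))
```

The point of the sketch is only that the permission check needs nothing beyond the item itself — the context has been decoupled from the originating system.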
For example, given the diversity and complexity of systems on the web (protocols and formats), there may be other (better?) ways to approach the problem, based on the plug-and-play philosophy of devices using drivers: abstracting away web resource APIs so that fully featured, polymorphic, interactive data becomes a shared feature of all programming languages, treating websites and web systems (including decentralized ones) as operating-system devices directly available as variables to programming languages.
Regardless of the choice of implementation, data liquidity and systems interoperability seem to remain an important unsolved problem and a bottleneck to faster progress in a large number of domains of digital activity.
|
OPCFW_CODE
|
A new version of XBMC, built from CVS and compiled by T3CH.
This T3CH build is a barebones package (so, logically, fewer problems that could cause a freeze). Below is the changelog since the last version of November 14:
– 21-11-2005 added: you can now specify where to store playlists. they are now separated in video and music sections.
– 21-11-2005 fixed: delete in my video title view renamed to remove title to avoid confusion. also removed a file-level operation (rename file) from the title view context menu.
– 21-11-2005 fixed: somehow a hack for dvd menu’s was removed from the dvdplayer. Added it back again
– 21-11-2005 added: New multiimage control. A mini slideshow control.
– 19-11-2005 added: you can now use relative paths in combination with $HOME in xboxmediacenter.xml. hi to jjsmither. WARNING: REMEMBER THAT $HOME IS Q:\ BY DEFAULT, NOT THE ACTUAL XBMC DIRECTORY PATH.
– 18-11-2005 updated: Spanish language file (Thnx to jose_t)
– 18-11-2005 updated: Norwegian language file (Thnx to vnm)
– 18-11-2005 updated: Korean language file (Thnx to AkoXko)
– 18-11-2005 updated: Italian language file (Thnx to kotix)
– 18-11-2005 updated: German language file
– 18-11-2005 updated: French language file (Thnx to flymaster)
– 18-11-2005 updated: Finnish language file (Thnx to jutski)
– 18-11-2005 updated: Danish language file (Thnx to hugener)
– 18-11-2005 updated: Chinese (Traditional) language file (Thnx to omenpica)
– 18-11-2005 cleanup: Removed some code duplication of the the delete/rename code in Pictures/Video/Music.
– 17-11-2005 fixed: Location of the video preview window was incorrectly calibrated.
– 16-11-2005 changed: sf bug #1301380 System info – Wrong HDD Key info! Removed until the detection is rewrote! [GeminiServer]
– 16-11-2005 fixed: sf bug #1348614: ftp client: opening/copying folders with .”dot” should work again! [GeminiServer]
– 16-11-2005 added: sf patch #1348694 F and G partition support to FEH [GeminiServer]
– 15-11-2005 added: Skin Theme Support: [GeminiServer]
Skin themes simply load different Texture.xpr files from the currently selected skin. To create a new theme, just add a new file such as MyThemeRed.xpr to \Skin\SkinName\media\*.*, containing the theme's referenced pictures.
You can also use several themes; shared files [pictures] can live in the root of \media and will be used if they are not present in the selected theme.
Also new: you can define the default theme in skin.xml's textures entry by giving the plain XPR name, e.g. for “My Theme Red.xpr” the theme name is “My Theme Red”.
All themes are detected automatically and can be selected through Settings – Appearance – Skin Theme. If no theme is defined, the defaults will be used.
– 15-11-2005 added: sf patch #1350866 modified playselected() python function
– 15-11-2005 added: sf patch #1350867 new position() python function
|
OPCFW_CODE
|
What is a buffer overflow?
When a buffer overflow vulnerability is present on a computer, an attacker can fill a field — usually one holding a memory address — with more characters than its normal length. In some cases these excess characters can be run as “executable” code, so the attacker, no longer bound by security measures, can control the attacked computer. It is one of the most common means of attack; worms that exploit high-risk operating system vulnerabilities owe their speed and large-scale propagation to this technique. In theory, buffer overflow attacks can be used against any imperfect, defective program, including anti-virus software, firewalls, and other security products, as well as against programs such as banking software.
On Unix systems, gaining root privileges via a buffer overflow is quite a common hacking technique. In fact, it is the preferred mode of attack for a hacker who already has a basic local account on the system. It is also widely used in remote attacks: there are already many examples of obtaining a remote root shell by overflowing the stack of a daemon process.
Windows systems have the buffer overflow problem too. Moreover, with the popularity of Internet service programs on Windows platforms, a low-level flaw in a Windows program can become fatal to the system, because remote stack overflows happen there just the same. And since Windows users and administrators generally lack security awareness, a stack overflow on a Windows system, if maliciously exploited, can put the whole machine under a hacker's control and may cause an entire local area network to fall into the hacker's hands. A flaw known as the “illegal HTR request” was found in Microsoft's popular IIS Server 4.0. According to Microsoft, under certain circumstances the flaw can allow arbitrary code to be run on the server side. But in the words of Firas Bushnaq, CEO of eEye, the Internet security company that found the hole, this is only the tip of the iceberg. Bushnaq said that hackers could exploit it to take complete control of an IIS server — and many e-commerce sites are based on exactly this system.
Let us look at the principle of a buffer overflow. As everyone knows, the C language performs no array bounds checking, and many applications written in C assume that a buffer is large enough — certainly larger than the length of any string copied into it. But that is not always the case: when the program errs, or a malicious user deliberately supplies an over-long string, unexpected things happen. The characters beyond the buffer's capacity overwrite the space of neighboring variables, giving those variables unpredictable values. If the array happens to sit near the subroutine's return address, the overflowing string may overwrite that return address, so that when the subroutine finishes it returns to an unpredictable address and the program's execution flow goes wrong. The process may even fault, because the corrupted address lies outside the range of the process's address space. This error is committed often in programming.
A program that uses a buffer overflow in an attempt to damage a system or enter it illegally usually consists of the following parts:
- Prepare a string of machine code that brings up a shell; in the following we will call it shellcode.
- Allocate a buffer, and place the machine code at the lower end of the buffer.
- Estimate the position on the stack where the machine code may start, and write that address at the end of the buffer. This starting position is also a parameter we need to vary over repeated runs of the program.
- Feed the buffer as input to a program with a buffer overflow flaw, triggering the faulty code.
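The mechanism these steps exploit can be modeled in a few lines of Python. This is a toy simulation of a stack frame, not real machine state; the buffer size, the "RETADDR!" placeholder, and the 8-byte "SHELLCOD" payload are all invented for illustration:

```python
# Toy model of a stack frame: an 8-byte buffer sits directly below
# the saved return address.  Copying attacker input with no bounds
# check lets the extra bytes spill into the return-address slot.
BUF_SIZE = 8
stack = bytearray(b"\x00" * BUF_SIZE) + bytearray(b"RETADDR!")

def unsafe_copy(frame, data):
    """Like C's strcpy: copies byte by byte with no bounds check."""
    for i, byte in enumerate(data):
        frame[i] = byte          # writes past the buffer when data is long

# 16 bytes of input into an 8-byte buffer: the last 8 bytes land in
# the slot that held the saved return address.
unsafe_copy(stack, b"A" * BUF_SIZE + b"SHELLCOD")
print(stack[BUF_SIZE:])          # the saved "return address" is overwritten
```

In a real exploit the overwriting bytes would be the estimated stack address of the shellcode, so that the function "returns" into the attacker's code instead of its caller.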
Through the above analysis and examples, we can see the huge threat buffer overflows pose to system security. On Unix systems, exploiting such a mistake in a setuid program makes it easy to obtain system superuser privileges. When a service program provides its service on a port, a buffer overflow program can easily shut that service down, paralyzing it for a period of time or, in serious cases, crashing the system immediately — thus becoming a denial of service attack. These errors are not only the fault of application programmers; the system's own implementations contain them even more often. Today, buffer overflow errors are continually being found in Unix, Windows, routers, gateways, and other network devices, and they constitute the largest category of security threats to systems.
A buffer overflow is a vulnerability inherent in code. Beyond taking care to write correct code during development, users can apply the following general precautions:
- Shut down unneeded ports and services. Administrators should know what is installed on their systems and which services are running.
- Install software vendors' patches. Once a vulnerability is disclosed, large vendors provide patches promptly.
- Filter specific traffic at the firewall — though this cannot prevent overflow attacks by internal staff.
- Check your own key service programs to see whether they contain an exploitable vulnerability.
- Run software with the minimum permissions required.
|
OPCFW_CODE
|
Professionals and students of the World Wide Web (WWW) industry will benefit greatly from this book. For beginners, the author demystifies many technical terms in Web development well beyond the scope of Ruby on Rails. For advanced software developers, he presents the beauty of the Ruby language and unveils the strength of the Rails framework. There is also a wealth of online material, video lessons, links, tips, and tricks to help readers go beyond the Rails tutorial.
The book is remarkably written, with a good balance between presenting the concepts behind Web development and interacting with the reader. The author takes great care with the pronunciation and origin of words, and even greater care introducing the basics of programming. Both are very important for a hands-on approach to learning, which the author masterfully accomplishes in this tutorial.
The tutorial starts with a broad overview of concepts in chapter 1, explaining three major points of software production: the development environment, version control, and deployment. Each topic is supplemented with notable up-to-date open-source tools. In chapter 2, the author constructs a complete sample demonstration application, using many pointers to topics discussed later in the book, which might confuse or discourage the reader. But to those readers who hold tight, the software architecture in the Rails framework becomes clear over the remainder of the book. I especially appreciate the way the author reassures the reader in the face of the vast complexity of Ruby on Rails, admitting that he himself does not know everything that all the Rails classes can do.
The subsequent chapters present guidelines for learning Ruby on Rails with three foundational functions: test-driven development, implementation, and testing. In chapter 3, the author shows how the testing function can be automated. Chapter 4 is the only one dedicated to the Ruby language; it moves at a very fast pace, so readers mainly interested in Ruby are advised to seek other more specific technical books on the subject. Chapter 5 continues to explore the relationship between the Rails asset pipeline and the elements of cascading style sheets (CSS) and Hypertext Markup Language 5 (HTML5). It enters smoothly into style sheets and web design to the point where the mockup pages are actually visually bad compared to the final dynamic pages produced by the tutorial. The fundamental notions taught in chapters 4 and 5 pave the way for the last half of the book. Chapter 6 begins with directions on how to undo mistakes made during the tutorial, using the Rails sandbox console and the rollback command from Rake, a software task management tool related to Ruby. Security concerns are addressed in chapters 7 to 9, including user records and sign-up, sign-in, and sign-out pages. Particularly in chapter 9, the user is constantly reminded of possible attacks from malicious users and ways developers can prevent such attacks.
The book finishes with a section on a microblog functionality, wrapping up the wide context presented since the first chapter. Chapter 10 explains database modeling with associations between users and micropost tables. A more complex data model abstraction is given in chapter 11, neatly implementing the concept of users following other users. This concludes the tutorial.
In summary, the book provides substantial knowledge on Web development using Ruby on Rails, a significant framework to have in your developer’s toolbox. Almost every chapter ends with useful exercises, which, together with code listings and command snippets, provide professionals and students with the essentials of Rails for industry-level Web development.
|
OPCFW_CODE
|
This is my first attempt at log shipping. I have restored the backup of the primary database (production DB) to the development server in Standby/Read-Only mode.
I have checked the error logs and Job Activity Monitor and there are no errors, but the changes don't seem to be replicated to the secondary database.
The production server and development server are two different machines.
First I'm running the LSRestore job, then after 15 minutes the LSCopy job. I checked the shared network folder, which has the .trn (log) files, and that looks fine as well.
My question is: why can't I see the changes on the secondary server when I make changes to the primary DB?
Is there anything I am missing?
Please guide me.
Thanks in advance
The LSRestore job doesn't throw any errors even if there are no logs to apply.
The normal order of the jobs would be: the LSBackup job on the production server, then the LSCopy job, then the LSRestore job.
Take a look at the last 2 or 3 steps of the last jobrun of the LSRestore Job. You can find some information there. Another Problem could be, that there aren't any log backups you can apply on your secondary depending on the point in time you've taken the full backup of your primary database. If the backup is too old, means, you don't have a complete log-backup chain beginning with the endtime of your full backup, the LSRestore job won't be able to apply any logs, but you should find this information in the LSRestore Jobhistory.
- Edited by MWagner1985 Thursday, May 10, 2012 8:58 AM
Thanks for your prompt response.
I just ran the LSBackup job first and it created a log file in the shared folder, then the LSCopy job, and finally the LSRestore job; all of them ran successfully.
However, when I checked the job log of LSRestore, it said it could not find a log backup file that could be applied to the secondary database.
Does that mean anything?
From when is your full database backup, and what is the retention time of your transaction log backups (you can see that by right-clicking your database -> Tasks -> Ship Transaction Logs -> Backup Settings)?
As stated before, this error normally occurs when you have a broken backup chain, meaning your full backup is too old and you don't have all the transaction log backups you need.
Can you confirm that LSCopy is copying the transaction log backup files to your local secondary server?
Check your security settings to ensure that the jobs are running under an account which has access to the file share where the backups on the secondary are located, and make sure the LSN chain hasn't been disrupted by having backups occur outside the log-shipping plan.
If those are all correct, I know of one other situation, which happened to me, and it was because there were no new transactions to restore from the t-logs since the initial restore for log shipping. If you are able to restore manually but are getting the "no pages restored" message, then this is most likely the cause, especially as you have said you set up log shipping on a test server where there probably aren't any active transactions occurring often. Once I initiated a transaction on the database being log-shipped and waited for the t-log to be generated and copied over to the secondary server, I noticed that my restore jobs suddenly started working from that point onward. I don't know what the root cause is, but I suspect that if there is nothing further to restore within the t-logs since the last restore, they get skipped. Of course, if there aren't any new transactions, then all of them would be skipped and you would get the message "Could not find a log backup file that could be applied to secondary database" as a result.
Hope this helps!
Please check the LSNs of the production server DB and the secondary server DB; they won't be in sequence.
Next, verify whether the copy job is executing and whether the log files have been copied to the shared folder on the secondary server.
Regards, Ashish
Just an update, it happened to me again as well but this time in a production environment where I knew there was no activity on a particular database since the last restore. This time I ran UPDATE STATISTICS dbo.my_table on a random small table in the problem db on the primary server just to get a write to the log file, and presto! The database restored upon the t-log being generated and copied over to the secondary server, and showed as being fully synchronized from that point on.
Of course this could be ANY transaction that causes a write to the log. I just chose the most innocuous one I could think of.
I had started another thread on this weird behaviour here, but so far I haven't got an answer about it. At least now I have a workaround.
- Edited by Diane Sithoo Thursday, January 17, 2013 4:48 PM
|
OPCFW_CODE
|
USER GUIDE 9
3D Printing Glossary
The most commonly used terms related to 3D printing and 3D printers:
- ADDITIVE MANUFACTURING: The process by which digital design data is used to build a three-dimensional object in layers by depositing material.
- BRIM: Additional material printed concentrically around your object to help hold edges down and to prevent warping.
- BUILD PLATFORM: The surface where the object is 3D printed. Also called build plate or bed.
- BUILD VOLUME: Measured in length, width and height; this is the maximum size of an object that a 3D Printer is able to print.
- CARRIAGE: Moving assembly that travels along an axis.
- CLOG CLEANER: A strip of nylon filament used to clean the extruder nozzle.
- COMPUTER AIDED DESIGN [CAD]: Software used to design a three-dimensional object.
- DOWNUP DIAL: Located at the top of the Buildini’s Z-axis, the adjustment wheel manually moves the extruder up and down.
- DRIVE GEAR: Located inside the extruder, this gear is used to pull filament into the hot end.
- EXTRUDER: Part of a 3D printer that melts and expels plastic, depositing it in successive layers during 3D printing.
- G-CODE: Job file for 3D printing an object; contains a set of directions that tells the printer how to print the object.
- GUIDE TUBE: Hollow cylinder used to guide the filament accurately into the extruder.
- FILAMENT: Thread-like plastic used to 3D print objects.
- FIRMWARE: A class of software that controls the functions of various hardware.
- FUSED FILAMENT FABRICATION [FFF]: 3D printing process that melts plastic filament through a heated nozzle and deposits the material in layers.
- HOT END: Part of the extruder that heats filament to its melting point.
- INFILL: Internal support structure of FFF printed objects; the higher the percentage of infill, the denser the object.
- LAYER HEIGHT: Height of the horizontally printed layers of a 3D printed object, typically measured in millimeters or microns.
- LEVELING CARD: 0.1mm thick card included with your Buildini™ to set the distance between the build platform and the nozzle when leveling.
- NOZZLE: Deposits melted filament onto the build area.
- POLYLACTIC ACID [PLA]: Thermoplastic polyester made from plant starch, derived from crops such as soybeans, potatoes, or corn.
- RAFT: Removable and discarded latticework of filament underneath an object to help with warping and bed adhesion.
- SD CARD: Secure digital (SD) memory card, a data storage device used for digital information.
- SKIRT: Outline that helps clean the nozzle head and establish a smooth flow of filament. Unlike a brim, a skirt is not connected to the print.
- SLICER: Software that converts STL files or other 3D model formats into G-code files for 3D printing.
- SLICING: Process of converting a 3D model into G-code files containing fabrication instructions for a 3D printer.
- SPIN-N-SELECT KNOB: Adjustment dial used to navigate through the Buildini’s LCD screen menu.
- STL FILE: Digital file format for a 3D model, imported into a slicer to convert it to G-code.
- SUPPORT: Removable and discarded 3D printed material used to successfully fabricate overhangs, bridges, and negative space.
- X-AXIS: Mechanical assembly that creates the left to right motion of the build platform.
- Y-AXIS: Mechanical assembly that creates the front to back motion of the extruder relative to the build platform.
- Z-AXIS: Mechanical assembly that creates the up and down motion of the extruder relative to the build platform.
|
OPCFW_CODE
|
There are 11 repositories under the stl topic.
📚 C/C++ 技术面试基础知识总结,包括语言、程序库、数据结构、算法、系统、网络、链接装载库等知识及面试经验、招聘、内推等信息。This repository is a summary of the basic knowledge of recruiting job seekers and beginners in the direction of C/C++ technology, including language, program library, data structure, algorithm, system, network, link loading library, interview experience, recruitment, recommendation, etc.
The official Open-Asset-Importer-Library Repository. Loads 40+ 3D-file-formats into one unified and clean data structure.
EASTL stands for Electronic Arts Standard Template Library. It is an extensive and robust implementation that has an emphasis on high performance.
Open Source toolpath generator for 3D printers
Functional Programming Library for C++. Write concise and readable C++ code.
A python parametric CAD scripting framework based on OCCT
lightweight hypervisor SDK written in C++ with support for Windows, Linux and UEFI
30 Seconds of C++ (STL in C++). Read More about 30C++ here 👉
Android OpenGL 2.0 application to view 3D models. Published on Play Store
A solution to visualize and explore 3D models in your browser.
2D/3D geometry toolkit for Clojure/Clojurescript
3D printed DSLR tracking mount
Linux wrapper tool for use with the Steam client for custom launch options and 3rd party programs
stdgpu: Efficient STL-like Data Structures on the GPU
Data structure and algorithm library for go, designed to provide functions similar to C++ STL
AMI Medical Imaging (AMI) JS ToolKit
multi-platform bittorrent client
TypeScript-STL (Standard Template Library, migrated from C++)
Simple library to make working with STL files (and 3D objects in general) fast and easy.
3D CAD viewer and converter based on Qt + OpenCascade
C++ STL Cheat Sheets.
CadQuery GUI editor based on PyQT
Manipulate subtitles in GO (.srt, .ssa/.ass, .stl, .ttml, .vtt (webvtt), teletext, etc.)
The Belfry OpenScad Library - A library of tools, shapes, and helpers to make OpenScad easier to use.
Rapid YAML - a library to parse and emit YAML, and do it fast.
A simple CAD package using signed distance functions
This repository consists of data helpful for ACM ICPC programming contest, in general competitive programming.
A command line tool to transform a DICOM volume into a 3d surface mesh (obj, stl or ply). Several mesh processing routines can be enabled, such as mesh reduction, smoothing or cleaning. Works on Linux, OSX and Windows.
SGI STL source code analysis and note from 《STL源码剖析》 by 侯捷(包含电子书、源码注释及测试代码)
C++ STL in the Windows Kernel with C++ Exception Support
An STL and iostream implementation based on uClibc++ that supports my CS-11M class.
|
OPCFW_CODE
|
R: Union Spatial Polygons won't remove separation line completely
Story and data:
I'm doing an internship at a farm in Estonia and want to print out a site plan (sketch) of their fields. However, the shapefile of the fields that I get from the Estonian Ministry of Agriculture (https://kls.pria.ee/kaart -> drop-down menu: Põllumassiivid -> Taotlejanimi = "Pahkla Camphilli Küla Farmi OÜ" -> in the search results: Lae andmed alla -> ESRI SHP | I'm sorry, but I don't know how else to provide the data, since the download links there aren't stable) has the fields divided as they were in a certain year:
But the fragmentation is slightly different almost every year, so I want minimal fragmentation. Therefore I want to merge some of the polygons.
What I do:
library(rgdal)
library(rgeos)
# Read in data:
fields <- readOGR("PCK_fields.shp")
# Add ID-column for merging:
fields$põllunumbrid <- character(length(fields$xy_id))
fields$põllunumbrid[grepl("55155929528", as.character(fields$xy_id)) | grepl("55155926080", as.character(fields$xy_id))] <- "1"
fields$põllunumbrid[grepl("55155918524", as.character(fields$xy_id)) | grepl("55155908713", as.character(fields$xy_id)) |
grepl("55155916421", as.character(fields$xy_id))] <- "2"
fields$põllunumbrid[grepl("55055979669", as.character(fields$xy_id))] <- "3"
fields$põllunumbrid[grepl("55155904072", as.character(fields$xy_id))] <- "4"
fields$põllunumbrid[grepl("55055978792", as.character(fields$xy_id)) | grepl("55055958459", as.character(fields$xy_id))] <- "5"
fields$põllunumbrid[grepl("55055959045", as.character(fields$xy_id))] <- "6A"
fields$põllunumbrid[grepl("55056021718", as.character(fields$xy_id))] <- "6B"
fields$põllunumbrid[grepl("55056047115", as.character(fields$xy_id))] <- "7A"
fields$põllunumbrid[grepl("55056047987", as.character(fields$xy_id))] <- "7B"
fields$põllunumbrid[grepl("55055982185", as.character(fields$xy_id))] <- "8A"
fields$põllunumbrid[grepl("55055970417", as.character(fields$xy_id))] <- "8B"
fields$põllunumbrid[grepl("55155910853", as.character(fields$xy_id))] <- "9"
fields$põllunumbrid[grepl("55055956707", as.character(fields$xy_id))] <- "10"
fields$põllunumbrid[grepl("55055924834", as.character(fields$xy_id))] <- "11"
fields$põllunumbrid[grepl("54955995000", as.character(fields$xy_id))] <- "12"
fields$põllunumbrid[grepl("55055997994", as.character(fields$xy_id))] <- "Aed"
# Perform Union
fields1 <- gUnaryUnion(fields, id = fields$põllunumbrid)
# or
#library(maptools)
#fields1 <- unionSpatialPolygons(fields, fields$põllunumbrid)
What I get:
(maptools::unionSpatialPolygons() and QGIS' dissolve give exactly the same results)
Some dividing lines seem to be removed improperly. Interestingly, this depends on the coordinate system/projection. If I first reproject the polygons to WGS84 via
fields <- spTransform(fields, CRS("+init=epsg:4326"))
Then the result contains a different line in the problematic area that almost separates two of the former polygons:
What I want:
But I want these polygon borders to vanish completely!
What I tried as well:
Because the problems arise only where I merge 3 polygons, I tried first merging 2 of them and then merging in the third. However, this leaves the separating line between the first two and the third one completely untouched.
If I try (so far only with QGIS) to turn the polygons into lines, split them at their vertices, and take out those edges that divide the polygons, then there is an edge that needs further shortening. Doing that as well leaves me with the problem that converting the lines back to polygons won't produce any output. Going back to R and using the trick from Kamran Safi to force a conversion back to polygons gives me a lot of polygons with an area of 0, i.e. the plot looks right, but the polygons plotted are really only lines.
Question:
Does someone know a workaround that properly removes all of the unneeded polygon borders?
I guess it would raise my chances for an answer tremendously if it weren't so hard to get the data. So if anyone knows a good way to provide you with the small shapefile (52 KB unzipped), I would be happy to hear it.
Not quite an answer, but it worked for me and may help others in my situation as well: after converting the polygons to lines, splitting the lines up at their vertices (QGIS: Processing -> Tools -> Explode) and deleting the separation lines, I could use the QGIS Processing -> Tools -> Polygonize tool to recreate all but one polygon. That one I drew anew with the help of snapping. It's not 100% the original one, so I can't use the data for official purposes any more, but it will do for me.
I guess with all that QGIS stuff that came into it to describe what I did in my despair the question would be placed better in the GIS Stack-Exchange Community now.
Depending on the precision you need, you may want to make a small buffer around your shapes (like 1 meter or so; the smaller, the better). It may help with your specific problem but may also cause other problems...
@Bastien this is much better than my solution. I used a buffer of 1 mm, which more than fulfills my accuracy needs.
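Following up on Bastien's buffer suggestion, here is a minimal sketch of the buffer-then-dissolve workaround. The 0.001 width is an assumption: it corresponds to the 1 mm that worked above when the CRS units are metres, as they are for the Estonian national grid.

```r
library(rgeos)

# Grow each polygon by a hair so neighbouring borders genuinely overlap;
# width is in CRS units (metres here), so 0.001 is roughly 1 mm.
fields_buf <- gBuffer(fields, byid = TRUE, width = 0.001)

# byid = TRUE preserves the feature order, so the original id vector still lines up
fields1 <- gUnaryUnion(fields_buf, id = fields$põllunumbrid)
```

If exact boundaries matter, note that the result is slightly inflated by the buffer width, so this is only suitable where sub-millimetre accuracy is irrelevant.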
|
STACK_EXCHANGE
|
The documentation set for this product strives to use bias-free language. For the purposes of this documentation set, bias-free is defined as language that does not imply discrimination based on age, disability, gender, racial identity, ethnic identity, sexual orientation, socioeconomic status, and intersectionality. Exceptions may be present in the documentation due to language that is hardcoded in the user interfaces of the product software, language used based on RFP documentation, or language that is used by a referenced third-party product. Learn more about how Cisco is using Inclusive Language.
Release Notes for Cisco ASR 901 Series Aggregation Services Router for Cisco IOS Release 15.5(3)S
System Specifications and Memory Details
Determining the Software Version
New Hardware Features in Release 15.5(3)S
New Software Features in Release 15.5(3)S
Modified Software Features in Release 15.5(3)S
Obtaining Documentation and Submitting a Service Request
First Published Date: July 2015
These release notes are for the Cisco ASR 901 Series Aggregation Services Router for Cisco IOS Release 15.5(3)S and contain the following sections:
The Cisco ASR 901 Series Aggregation Services Router is a cell-site access platform specifically designed to aggregate and transport mixed-generation radio access network (RAN) traffic. The router is used at the cell site edge as a part of a 2G, 3G, or 4G RAN.
The Cisco ASR 901 router helps enable a variety of RAN solutions by extending IP connectivity to devices using Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Node Bs using High Speed Packet Access (HSPA) or Long Term Evolution (LTE), base transceiver stations (BTSs) using Enhanced Data Rates for GSM Evolution (EDGE), Code Division Multiple Access (CDMA), CDMA-2000, EVDO, or WiMAX, and other cell-site equipment.
It transparently and efficiently transports cell-site voice, data, and signaling traffic over IP using traditional T1 and E1 circuits, as well as alternative backhaul networks such as Carrier Ethernet and DSL, Ethernet in the First Mile (EFM), and WiMAX. It also supports standards-based Internet Engineering Task Force (IETF) Internet protocols over the RAN transport network, including those standardized at the Third-Generation Partnership Project (3GPP) for IP RAN transport. Custom designed for the cell site, the Cisco ASR 901 router features a small form factor, extended operating temperature, and cell-site DC input voltages.
Table 1 lists the Cisco ASR 901 1G Router model versions.
Table 2 lists the Cisco ASR 901 10G Router model versions.
Note Some of the Cisco ASR 901 models have port based licensing. For more details, see the Licensing chapter in Cisco ASR 901 Series Aggregation Services Router Software Configuration Guide.
Table 3 lists the supported system configurations and memory details for the Cisco ASR 901 router:
Cisco ASR 901 Series Aggregation Services Router TDM version
Cisco ASR 901 Series Aggregation Services Router, Ethernet version
Cisco ASR 901 Series Aggregation Services Router, IPsec enabled Ethernet version
To determine the image and version of Cisco IOS software running on your Cisco ASR 901 router, log in to the router and enter the show version command in the EXEC mode:
The following example shows output from Cisco ASR 901 router that supports normal IOS software.
The following example shows output from Cisco ASR 901 Series Aggregation Services Router, IPsec enabled Ethernet version.
The BCM Parity Errors feature detects and recovers from parity errors that may be generated on Cisco ASR 901 routers due to environmental conditions.
When detected, these parity errors generate logs of the following type on the device:
unit 0 <TableName> entry <entryId> parity error
This feature automatically recovers from parity errors on only a subset of the tables. Therefore, if parity errors in the above format are observed on some table, please contact Cisco TAC for assistance.
This feature uses the new IP multicast address 224.0.0.102 to send hello packets, instead of the multicast address 224.0.0.2 used by HSRP version 1. This new multicast address allows CGMP leave processing to be enabled at the same time as HSRP.
HSRP version 2 permits an expanded group number range, 0 to 4095, and consequently uses a new MAC address range, 0000.0C9F.F000 to 0000.0C9F.FFFF. The expanded group number range allows the group number to match the VLAN number on subinterfaces. This feature also allows the router to advertise and learn millisecond timer values. The HSRP version 2 packet includes a 6-byte identifier field that is used to uniquely identify the sender of the message (usually populated with the interface MAC address).
For detailed information about this feature, see HSRP Version 2 feature at the following URL:
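As an illustration of the behavior described above, a minimal configuration sketch follows; the interface name, group number, address, and timer values are placeholder assumptions, not taken from this document.

```
interface GigabitEthernet0/1
 standby version 2
 ! group numbers up to 4095 are valid only in HSRP version 2
 standby 1000 ip 10.0.0.1
 ! millisecond hello (250 ms) and hold (800 ms) timers
 standby 1000 timers msec 250 msec 800
```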
This feature provides the capability to filter packets at a fine granularity and allows the permission or denial of the packets based on the MAC source and destination addresses. MAC Access Control Lists (ACLs) are ACLs that filter traffic using information in the layer 2 header of each packet. This ability to filter packets in a modular and scalable way is important for both network security and network management.
For more information about this feature, see Layer 2 MAC ACLs feature guide at the following URL:
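To sketch the kind of layer 2 filtering described above, a minimal hypothetical MAC ACL follows; the ACL name, MAC address, and interface are illustrative assumptions using generic Cisco IOS syntax.

```
mac access-list extended BLOCK-STATION
 deny   host 0011.2233.4455 any
 permit any any
!
interface GigabitEthernet0/2
 mac access-group BLOCK-STATION in
```

The explicit `permit any any` matters: without it, the implicit deny at the end of the ACL would drop all other layer 2 traffic on the interface.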
This feature introduces support for IP Security (IPsec) traffic to travel through Network Address Translation (NAT) or Port Address Translation (PAT) points in the network by addressing many known incompatibilities between NAT and IPsec. This feature encapsulates the IPsec packets in a User Datagram Protocol (UDP) wrapper that allows the packets to travel across NAT devices.
For information about this feature, see NAT Traversal feature guide at the following URL:
Effective from Cisco IOS Release 3.16, the Cisco ASR 901 Router supports PTP debugging over GRE tunnel feature.
PTP Debugging over GRE Tunnel feature enables the transport of PTP debugging information and PTP packets originated by the device through a GRE tunnel.
For more information about this feature, see PTP Debugging over GRE Tunnel feature guide at the following URL:
Effective from Cisco IOS Release 3.16, the Cisco ASR 901 Router supports Smart Licensing feature.
Smart Licensing is a software-based, end-to-end licensing platform that consists of several tools and processes to authorize customers' usage and reporting of Cisco products.
For more information about this feature, see Smart Licensing feature guide at the following URL:
Table 4 and Table 5 show the SFP modules supported on the Cisco ASR 901 routers:
Table 4 SFPs Supported on the Cisco ASR 901 1G and 10G Routers for 1G Mode
Table 5 SFPs Supported on the Cisco ASR 901 10G Router for 10G Mode
Note For information on how to configure SFPs, see the Cisco ASR 901 Series Aggregation Services Router Software Configuration Guide.
The Cisco ASR 901 router supports the following MIBs:
Caveats describe unexpected behavior in Cisco IOS software releases. Severity 1 caveats are the most serious caveats, severity 2 caveats are less serious, and severity 3 caveats are the least serious of these three severity levels. Only select severity 3 caveats are listed.
This section contains the following topics:
The Caveats section only includes the bug ID and a short description of the bug. For details on the symptoms, conditions, and workaround for a particular bug you must use the Bug Search Tool.
Use the following link to access the tool: https://tools.cisco.com/bugsearch/search
You will be prompted to log into Cisco.com. After successful login, the Bug Search Tool page opens. Use the Help link in the Bug Search Tool to obtain detailed help.
This section provides information about the open caveats for the Cisco ASR 901 router running Cisco IOS Release 15.5(3)S.
Incorrect P-bit for CPU originated packets once EoMPLS VC on.
ASR901-Operational status for Fan and power module shown as critical.
This section provides information about the resolved caveats for the Cisco ASR 901 router running Cisco IOS Release 15.5(3)S.
" permit any any " configuration should override default "deny any any" FP entry.
Extra FP entry consumed in TCAM after removing all ACE entries from ACL.
Extra IFP created by ACL for 'default deny rule' when configuration explicitly.
TCAM exhaustion for V4 and V6 Multicast needs proper handling.
ASR901 IGMP: Reports received on an interface and interface is default, then 901 is generating error messages.
MLD Snooping: Wrong configuration command, report-suppression instead of listener-msg.
Observing error messages when defaulting router interface receiving IGMP.
Console logs are printed on 1st unauthenticated VTY session.
Exceed action is not working in policer having match cos as 2nd statement.
Missing or illegal IP address messages seen on reloading ASR901.
ASR901: AIS on TDM AC does not trigger L-bit to be set in SAToP CW.
Tracebacks seen at %SYS-2-BADSHARE: Bad refcount in datagram_done.
The following sections describe troubleshooting commands you can use with the router.
Collecting Data for Router Issues
To collect data for reporting router issues, issue the following command:
Collecting Data for ROMMON Issues
To collect data for ROMMON issues, issue the following command while in the EXEC mode:
Note If you contact Cisco support for assistance, we recommend that you provide any crashinfo files stored in flash memory. For more information about crashinfo files, see http://www.cisco.com/en/US/products/hw/routers/ps167/products_tech_note09186a00800a6743.shtml.
Documents related to the Cisco ASR 901 Series Aggregation Services Router include the following:
To access the related documentation on Cisco.com, go to:
For information on obtaining documentation, using the Cisco Bug Search Tool (BST), submitting a service request, and gathering additional information, see What’s New in Cisco Product Documentation at: http://www.cisco.com/c/en/us/td/docs/general/whatsnew/whatsnew.html.
Subscribe to What’s New in Cisco Product Documentation, which lists all new and revised Cisco technical documentation, as an RSS feed and deliver content directly to your desktop using a reader application. The RSS feeds are a free service.
|
OPCFW_CODE
|
This PC does not have a CD/DVD drive.
Posted 13 March 2017 - 01:37 PM
Posted 13 March 2017 - 01:43 PM
Posted 13 March 2017 - 05:47 PM
Posted 13 March 2017 - 06:06 PM
This may be due to a bad drive. I would test the drive using UBCD. Create a bootable USB flash drive using Rufus and the iso file. Boot the flash drive and select Parted Magic at the menu. At the desktop select Disk Health and do the short/long tests. If the drive passes then you can use Rufus with an iso file of the OS.
What is the OS that was on the computer? Windows 7 or Windows 8? If Windows 7 is there a COA sticker on the bottom of the computer or in the battery compartment showing a legible key? It's possible you will not need to purchase recovery disks from Acer if you have a legible key on the COA sticker.
If you do not have a legible key then you can create an ISO from a CD/DVD using ImgBurn. Make sure you use a read speed of no faster than 4x if possible. Also, when selecting Destination, make sure the file is saved as an ISO file. Use Rufus with the ISO file to create the bootable flash drive.
Posted 14 March 2017 - 06:00 PM
Posted 14 March 2017 - 06:12 PM
I am also running into issues. None of the mirrors work. This is not normal. In the meantime, you can download an ISO of your Windows 7 version using this tool.
I will continue to look for a site to download the UBCD iso file.
Edit: Here is a direct download link for UBCD from Techspot.
Edited by JohnC_21, 14 March 2017 - 06:14 PM.
Posted 15 March 2017 - 02:56 PM
Posted 15 March 2017 - 03:07 PM
That was the correct setting, run from RAM. But, I think the download somehow was corrupted.
xz: (stdin) compressed data is corrupt
I was really stupid. I was clicking the links on the download page for UBCD. It's been awhile since I downloaded it. Download the iso again by selecting one of the mirrors and clicking one of the little drive icons.
If you still have issues you can download the iso of Seatools for DOS and use it with Rufus to create the bootable flash drive. When using Rufus select MBR for a partition scheme and leave all boxes as checked.
Edit: This could be a memory problem, in which case you can download the ISO file of Memtest86+ and let it run for at least 6 passes.
Edited by JohnC_21, 15 March 2017 - 03:32 PM.
Posted 15 March 2017 - 06:01 PM
Edited by cheez, 15 March 2017 - 06:12 PM.
Posted 15 March 2017 - 06:10 PM
Here is a direct link to the UBCD from the mirror in the US.
Edit: Click the green download link here for UBCD
Here is the direct link for the Seatools for DOS file.
I am not sure why Rufus says the file is non-bootable. I downloaded from the above link and Imgburn shows the iso as bootable as you can see from the attached image.
Edit Edit: You are not running Rufus from the flash drive are you? Rufus should be on your desktop when started.
Edited by JohnC_21, 15 March 2017 - 06:15 PM.
Posted 15 March 2017 - 06:20 PM
I got the same message from Rufus regarding MemTest86.
Just tried the new link for SeaTools and got the same message. Is there any other free utility to burn an ISO to a USB?
Posted 15 March 2017 - 06:25 PM
See my edited post. Are you running Rufus from the flash drive or from your desktop?
Another utility is Unetbootin but I am not sure it will work for Windows iso files. Click the disk image button and browse to the iso file. Press okay. Make sure your flash drive is listed in the dropdown box.
Posted 15 March 2017 - 07:27 PM
Posted 15 March 2017 - 07:37 PM
It's possible the latest version of Rufus is not compatible with your computer. You can download an older version here.
If Unetbootin does not work then try v2.4 of Rufus. The below link is for the portable version.
If you still get a fail try a different flash drive.
Edit: I came across another tool should Unetbootin fail. Don't download the beta version. In the second dropdown box you can select UBCD under System Tools. Then browse to the iso of UBCD. Format it FAT32.
Edited by JohnC_21, 15 March 2017 - 08:03 PM.
Posted 16 March 2017 - 09:58 AM
JohnC_21, I am sorry; it will be a while before I can continue.
I am having major problems with my own computer. I am in the BSOD section trying to get some help. I will reply just as soon as I can, as I am using someone else's PC right now.
My PC is a ,
Running Windows 10, 64 bit
Edited by cheez, 16 March 2017 - 02:20 PM.
|
OPCFW_CODE
|
What's the difference between bundling and zipping a git repo?
When I perform a
git bundle create ../`basename $PWD`.all.gitbundle --all
in a git repository, the created bundle file has a size of about 4.8 MB. When I zip the entire repository folder, the resulting file is 26.2 MB.
Basically I am looking for a way to back up the entire repository without losing any information. But given the difference in archive sizes, I assume git bundle either doesn't back up everything or is more efficient than a simple zip.
Could someone please shed light on this?
Every clone is a full copy of the repository. Make a clone of it and you have a backup -- that by the way can get updated extremely easy.
@KingCrunch: a clone isn't strictly speaking a "copy", since the branch structure is different. If you want a real copy, you want to add the --mirror flag to your clone. This will make the clone's branch structure mirror the original's exactly.
Even a mirror is not an exact duplicate of your repository directory. You will still miss any custom settings you might have in your .git/config, your stash, any work you might have in progress, your staging area -- pretty much everything that's not recorded in the repository.
I don't find git-bundle a good idea for maintaining a backup of your repository. Either create a bare repository and push onto it the refs you wish to track in your backup, or use good old tarballs. The difference between the two is that pushing allows you to back up only selective branches. For example, you might wish to ignore scratch branches in your backups. Zipping your repository will bluntly back up absolutely everything -- including your stash, untracked files, object files and any temporary editor files.
I usually just zip the whole thing. You might run git-clean -fdxn and then git-clean -fdx to carefully wipe out everything that's not stored in your repository. If you really insist on size efficiency when you perform the backup (and you shouldn't; just let Git worry about this), then you can garbage-collect before your backup, and maybe even prune your reflog. But you know, I wouldn't. Storage is cheap these days, and by doing so you merely lose on the backup's value.
The bundle command will package up everything that would normally be pushed over the wire with a git push
http://progit.org/2010/03/10/bundles.html
This means that the bundle will not contain stale objects etc. that may be part of your repository. Also, you should not count the actual files in the working directory of your repo, but only the .git directory with objects and other metadata, as it is these that the bundle will contain, not the files in their original form.
For backup you can look at using git clone --mirror option or just archiving the repo as you have done. A bundle is not a viable backup option for a repo as you will lose config, reflog, stale objects etc.
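As a rough sketch of the difference (throwaway paths, for illustration only): a bundle captures what a push would send, while a mirror clone copies every ref exactly:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git -c user.email=a@b.c -c user.name=a commit -q --allow-empty -m "init"
# Bundle all refs -- the same data a push would send over the wire
git bundle create ../backup.bundle --all
git bundle verify ../backup.bundle   # confirm it can serve as a backup basis
# A mirror clone, by contrast, copies every ref of the source exactly
git clone -q --mirror . ../mirror.git
```

Neither captures config, reflog, stash, or untracked files, which is the caveat above.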
I think git uses zlib to compress.
zip isn't the greatest archiving format when it comes to size, though. zlib uses delta compression to further reduce size, which works like this (thanks, Wikipedia):
Delta encoding is a way of storing or transmitting data in the form of differences between sequential data rather than complete files
That might account for your tiny file size. I ran file on the created git bundle, and it said that the bundle is just raw data.
I think you're a bit misinformed. zlib compression uses delta encoding as part of how it works (that's basically how all compression works). Git itself stores the full, un-delta'd files as objects in its repo, and then relies on zlib to perform the delta compression (git is also smart enough to re-use deltas when doing incremental packing to speed up operations).
Whoops. Then I guess it just uses zlib.
|
STACK_EXCHANGE
|
[docs] mui.com/docs does not exist
Duplicates
[X] I have searched the existing issues
Latest version
[X] I have tested the latest version
Summary 💡
Now that each of MUI's products has its own documentation, we need some kind of unifying landing page to show users how to make the most of our docs to find what they need.
I'm envisioning a "Welcome" page at mui.com/docs similar to plausible.io/docs. This would be like an Overview of the Overview pages that we've been publishing for each of the products—one or two lines introducing the products, plus details about how to use the docs (search, play with codesandbox demos), how to get help, how to submit issues, etc.
Once we've established this new area of the docs, we can expand it over time to include any documentation that applies across all products. But I think it needs to start with a single landing page.
Examples 🌈
Motivation 🔦
When users click on Docs in the mui.com nav bar, it feels like it should lead to mui.com/docs rather than forcing them to choose one of our products. What if they don't know which one to select? They should be able to easily compare the products in one location without having to read multiple docs in different places.
Benchmark
Introduction docs page https://www.apollographql.com/docs/ Apollo Docs Home
Generic list https://www.twilio.com/docs
Generic list https://developers.google.com/
Generic list https://grafana.com/docs/
Generic list https://docs.konghq.com/
No centralized docs page https://www.hashicorp.com/products/terraform
@danilo-leal we've been talking about something like this for quite some time now, and I think that regardless of whether we have a true "shared" docs space, we at least need a landing page at mui.com/docs to unify all of the docs. What do you think? Also, if we move forward with this, should it look more like a marketing page, or more like a documentation page? I don't have a strong opinion but I'm leaning towards docs style, just because that's what I would expect to find on a page called mui.com/docs. cc @oliviertassinari @michaldudak @gerdadesign
Happy to see this one going on here! I personally really like this idea. The marketing pages and the /docs space serve different purposes for me, though. The former is about showcasing the core features or value propositions of any given product, relying on more visually appealing elements, whereas the latter is about more in-depth content.
The /docs seems like the most appropriate place to document general conventions that apply to all of our products. A few examples that come to mind:
Composition: I can easily see this as something that Joy, Material, and Base share as a philosophy for component composition. It currently sits only on the Material docs.
API design approach is a very similar case.
Vision: although it seems a bit outdated, I can also see us bringing over a few key handbook contents to this environment.
Understanding MUI packages: definitely something that doesn't make sense being only at the Material UI docs.
Localization: also a general approach of MUI the company.
I'm sure there are others, but this is just to illustrate. Also, something that I think we've mentioned at a meeting is the overlap of this potential general documentation space with the handbook. We should probably use the /docs space for relevant customer/user information and keep the handbook for information relevant to potential/current employees.
|
GITHUB_ARCHIVE
|
Bloggers are always looking to create fresh content or update existing content. Quite often, they struggle to maintain uniqueness, especially when revisiting the same concepts and aspects over time. So how can they write new content without affecting its quality or readability? With a paraphrasing tool, of course.
Writers can take assistance from online paraphrasing tools to produce quality content. Paraphrase tools are developed with advanced Artificial Intelligence (AI) and Machine Learning (ML) algorithms, allowing them to generate paraphrases.
These online tools work on several principles and offer various benefits for writing better content. But how does this technology paraphrase with human-level accuracy?
This article will discuss how AI and ML use paraphrasing to produce unique, readable content. We will also share components and generation techniques followed in AI and ML to paraphrase the content.
Elements of Paraphrasing | Paraphrase Tool
The AI-based paraphrase tool goes through a two-step process when paraphrasing text: paraphrase identification and paraphrase generation, both elaborated on below.
1. Paraphrase Identification
The idea behind paraphrase identification is to analyze the flow of the content to see if it makes proper sense. In this step, the system yields a score between 0 and 1, where 1 denotes that the sentence has the same meaning and 0 suggests that it deviates from the original.
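As a toy illustration of such a 0-to-1 score (real tools use trained models; the word-overlap measure below is a made-up stand-in just to show the idea):

```python
def paraphrase_score(a: str, b: str) -> float:
    """Crude paraphrase-identification score in [0, 1] via word overlap."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)

# Identical sentences score 1; unrelated sentences score 0
print(paraphrase_score("the cat sat on the mat", "the cat sat on the mat"))
print(paraphrase_score("the cat sat on the mat", "quarterly revenue grew"))
```

A production system would replace the overlap measure with learned sentence embeddings, but the output contract is the same: one number between 0 and 1.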
2. Paraphrase Generation
The next task is to paraphrase the content. In this step, the paraphrasing tool generates content with the help of Natural Language Processing. This step exchanges some specific words with their synonyms, but is still worthwhile.
Machine Learning and Artificial Intelligence algorithms then map terms to formulate unique and readable sentences.
Working with AI and ML in a Paraphrase Tool
Now we will discuss the complete functioning of a paraphrasing tool and how it combines AI and ML technology to rephrase content.
· Collection of Data
The first element of AI paraphrasing is data collection from various sources. The range of sources extends to almost every public platform and may include thousands of sentences.
The idea is to develop data sets containing plenty of information that provide different data types to assist the paraphrasing models.
· Data Preprocessing
Data sampling selection is about increasing the diversity of data by filtering the original data provided to the system.
These tools accurately paraphrase the content thanks to the variety of data and training fed to the system. As a result, the paraphrasing tool can generate content with correct meaning, varied vocabulary, and no grammatical mistakes.
Moreover, the data processing makes the paraphrasing tool capable of presenting various versions of written content. In simpler words, data sampling selection allows us to rephrase the same content multiple times. This step also relates to the fluency of the content provided to the system.
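A hypothetical sketch of one such filtering step, deduplicating on a normalized form so the data fed to the model stays diverse (the function and normalization rule are illustrative assumptions, not any tool's actual pipeline):

```python
def dedupe(sentences):
    """Keep only the first occurrence of each case/whitespace-normalized sentence."""
    seen, kept = set(), []
    for s in sentences:
        key = " ".join(s.lower().split())  # normalize case and spacing
        if key not in seen:
            seen.add(key)
            kept.append(s)
    return kept

samples = ["The cat sat.", "the  cat sat.", "A dog ran."]
print(dedupe(samples))  # the near-duplicate second sentence is dropped
```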
· Model Building
Model building is about training the system to generate paraphrased content. In this step, a text-to-text transformer is trained with the help of a given data set. There are also pre-trained models for this purpose, such as T5-based models for text transformation.
Models like T5 use the self-attention technique to transform the input sequence and generate the output. However, the result is similar in length to the input, so it is crucial to account for the average input sequence length.
What Are Pre-Trained Language Models | Paraphrase Tool
Pre-trained language models (also known as PLMs) are trained with large data sets of various languages that perform paraphrasing.
These models can also perform other language-related tasks and integrate with multiple online or device-based applications.
Technology has evolved to the point where automation can perform tasks much more accurately.
Artificial Intelligence and Machine Learning make a system act like a human. This way, we get precise results with the slightest chance of error.
This article discussed a similar AI approach and ML-based models for paraphrasing. With the help of extensive data sets and pre-trained models, we can quickly develop tools that can accurately paraphrase the content.
We hope this article gives you valuable insights into how modern-day technology works. Moreover, it also elaborated on the step-by-step functioning of a paraphrasing tool.
|
OPCFW_CODE
|
Genghis Khan’s Guide To Www.gmail.com Login Excellence
That will open a screen, pre-populated with the gmail.com login addresses of the senders of the emails you highlighted. I hand-moved the contents of hundreds of folders into a few big ones, so by the time I was done, I had about 20 folders rather than 439. You can choose from the slew of user-made presets or upload your personal photo. Mobile broadband simply doesn't fall under the jurisdiction of the 80-year-old telephone rules. However, while the user may take a little while to adjust, it is easy enough to get used to. It is a worry for plenty of Australian organisations, for example in education, government, legal, medical, accounting, etc. Google is clearly looking to reinvent how you use email, but to what end?
How does the operator respond when someone says, 'This is being done'? Clicking this bar returns the column containing Google Tasks. I saw a 1% CTR, but for these placements I actually expected a CTR more similar to search. Bananatag sends you an email notifying you the first time (but only the first time) that a tracked email is opened, and can send separate email alerts every time the recipient clicks the link inside the tracked email. This is posted on its website, which says that "When you perform sensitive online banking transactions, for instance money transfers, Citi will sometimes ask you additional questions to verify your identity."
First off, Google Calendar is moving from to calendar. If you regularly use your Gmail account to manage high-resolution photos or PDFs, then you might find more matching messages than you expect. Just below that entry for Other Contacts you can see New Group. For instance, sending a message to someone right before the work day starts will ensure that they see the email right at the top of their inbox when they check their email in the morning. The legislation currently before parliament will force Australian telecommunications companies to retain an as-yet-undefined set of customer data for a minimum of two years. Here's a step-by-step guide to help you emulate Gmail's archiving functionality in Microsoft Outlook.
, the organization applies these techniques to many services across the corporation's online empire. It is a tool written in Python and intended for OS X, Linux, and Windows. As already stated, you can now create your own filters to categorize messages, thus putting them in one of these tabs. Which one you opt for is up to you, but make sure the options are correctly configured, otherwise you may find messages vanishing that you would like to keep. There is also an anti-phishing tool that raises an alarm whenever a user enters their Google account credentials into an untrusted site. If you're continually archiving the same type of message, consider setting up a filter. While I can easily see where compromises were made to hit the price point, I can say that I (and many people) would be quite happy with devices in this price range as a daily driver.
|
OPCFW_CODE
|
Zend Studio for Eclipse has finally arrived! I’ve been waiting for this upgrade for years. Zend Studio, in its previous incarnation, was certainly no slouch of a product. I’ve been using it since version 4. Aside from various Java platform issues, their product was extremely stable and increased my productivity over using other less featured editors.
My experience with Eclipse began in college while taking advanced programming courses in Java. It was an amazing product back then, and has only grown better since. However, the marriage of Zend Studio to Eclipse has proven to be a bit of a disappointment. Perhaps it is because I'm too used to the way that Studio 5.5 works and need to shift my IDE paradigm back to Eclipse. Maybe it's the difference in how the system is configured by the preferences. Either way, it will take a bit of time to get used to, and that's just not something that I can afford at my job right now. Using Studio for Eclipse will have to be a weekend project to slowly wean myself off of 5.5 and learn how to manipulate Eclipse to assist, rather than hinder, my development.
One of the major pieces of integration that I’m interested in the new version of Studio is how well it handles remote files. My current development process requires that I use ssh/sftp to open and save files to a remote file system. In Zend Studio Neon (their beta version of the Eclipse integration, as well as in Eclipse in general), remote filesystem access was still a bit lacking and buggy. At first glance, it seems as if they have figured this out in Studio for Eclipse. However, I’m running into issues building the workspace through the remote filesystem. It’s slow and bogs down the Java process so I can’t develop while it’s building. I tried, heaven help me, to stop the build process, which locked up Studio, and now I can’t seem to open Zend Studio for Eclipse at all.
As for right now, I’m going to uninstall it and try re-installing over the weekend and playing around a bit more. They may call it Studio 6.0.0 — however, I think I need to treat this as a 1.0 release. Overall, I don’t think this release will be stable enough for me to use until a few more rounds of bug fixes, but I certainly like the direction that the platform is heading.
Both at home and at my job I utilize multiple computer systems, and now that I’m a Mac owner, it usually spans different operating systems. Here are a few utilities that I employ to assist in managing these computers.
- Synergy – (Cross Platform) This is quite possibly the most useful tool that I have found. It simply shares the keyboard and mouse of a system that you designate as the server with any number of client systems. This is not a KVM, as each system requires its own monitor. However, with Synergy and a fairly simple configuration, your mouse will flow from one system to another as if you're using a single computer. In addition, the clipboard is shared between systems. Setup can be a bit tricky on non-Windows machines, but binaries are available for Windows, OS X and Linux.
- UltraMon – (Windows) Ok, so you’ve got two monitors, and finally have the screen real estate to have many windows open in an order that you can create a comfortable work flow. Yet, all of your windows are stuffed onto your main screen task bar. Enter UltraMon. With this handy utility you can expand your task bar across multiple screens; tasks that are on screen 1 are on screen 1’s task bar, tasks on screen 2 on screen 2’s task bar. Aside from these great features, it also has other useful multiple monitor utilities. This is quite a great piece of software, and I highly recommend purchase of this product.
- Foxmarks – (Firefox Plugin, cross platform) Within the course of a day, I can use up to 4 distinct copies of Firefox: two on my laptop (OS X and Windows XP in a virtual machine), my office desktop (OS X) and my home desktop (Windows XP). Foxmarks helps to bridge that gap by providing an easy and transparent way to sync bookmarks between copies of Firefox.
- LineIn – (OS X) As documented by my post here on the trials and tribulations of moving sound between computers, LineIn is a great (and free) utility to enable line-in monitoring on a Mac. Aside from the freebies on the same page as LineIn, Rogue Amoeba also provides other OS X based audio software to check out.
These tools help make my daily interaction with computers a bit easier, and I hope they'll help others as well! If you know of other tools in the same vein, feel free to leave a comment or contact me.
Though it may be Dan‘s resolution to blog everyday, it’s certainly not mine, so I’ll let his post (mostly) speak for itself. I will add that we do share similar interests: We’re Buzz Out Loud listeners, work in the tech industry, were brought into the Mac world by the companies that we work for. It’s all quite interesting, although if there’s any place it would happen, it would be in San Francisco.
Read on, here!
|
OPCFW_CODE
|
Why are my associated entities null after saving?
I have an entity with a 1-to-1 relationship with another entity. They are appropriately connected in my entity model and the underlying tables are related by FK.
When I called .SaveChanges() my entity is saved. I can see that ID which has been generated and all the corresponding values.
I can also view the database manually and see that it has been saved correct.
However, the related entities are all null in my code. They show up null in the context menu and they resolve to null in code... If I call my get method using the new Id, I get the entity I expect, again with null related entities.
If I refresh the page (or re-construct the class) then everything is correct.
EDIT: More details.
In my specific case I have what amounts to a join-table, creating a dependency relationship between things.
Consider a table called "Things" with a column "ThingId"
Then I have another table called "ThingDependencies" with three columns "DependencyId", "FromThingId" and "ToThingId", both of which FK to Things.ThingId
I have asserted in code that this data is acyclical.
When I save a new ThingDependency it displays the persisted values appropriately... that is: it has its Id, and the From and To Id values.
...but the associated entities are null and I can't access them until I dispose the data context and re-instantiate it... then it works just fine.
Is there any chance your entity is "detached"? This can happen if your data context gets disposed before you are done working with a particular entity.
@DanM not in this case. I can call the explicit "get" for the entity and it still returns with the null values. If I explicitly dispose the context and then re-instantiate it then the properties are populated.
You probably know this already, but another thought. If you are using Code First, have you made Dependency, ToThing, and FromThing virtual? I believe this is used by the proxy version of your entity to on-demand fill any data in your entity that is not "raw".
When I save a new ThingDependency it displays the persisted values
appropriately... that is: it has its Id, and the From and To Id
values.
...but the associated entities are null and I can't access them until
I dispose the data context and re-instantiate it... then it works just
fine.
I can only refer to this part of your question which sounds like (but I may be totally wrong with my interpretation) that you are doing something like this:
int newThingDependencyId = 0;
using (var context = new MyContext())
{
ThingDependency newThingDependency = new ThingDependency();
newThingDependency.FromThingId = 1;
newThingDependency.ToThingId = 2;
context.ThingDependencies.Add(newThingDependency);
context.SaveChanges();
newThingDependencyId = newThingDependency.Id;
Thing fromThing1 = newThingDependency.FromThing;
Thing toThing1 = newThingDependency.ToThing;
}
using (var context = new MyContext())
{
ThingDependency newThingDependency = context.ThingDependencies
.Find(newThingDependencyId);
Thing fromThing2 = newThingDependency.FromThing;
Thing toThing2 = newThingDependency.ToThing;
}
And here you are wondering why fromThing1 and toThing1 are null but fromThing2 and toThing2 are not null, right?
If yes, then you have to replace ...
ThingDependency newThingDependency = new ThingDependency();
...by...
ThingDependency newThingDependency = context.ThingDependencies.Create();
Create will create a dynamic proxy that is able to load fromThing1 and toThing1 by lazy loading while an ordinary object created with the new operator is not. Find (or any LINQ query like Single, etc.) will also instantiate a proxy which is the reason why accessing the navigation properties in the second context works. All that under the assumption that your navigation properties are marked as virtual and you didn't disable lazy loading explicitly.
(If I'm totally wrong with this please for the love of all Things provide some code snippets in your question to show what you are doing exactly!)
Probably you have lazy loading turned off.
Try using Include("Entity") in your query:
var employees = db.Employees.Include(x => x.Address).ToList();
Regards,
Julio Spader
wessolucoes.com.br
Lazy loading on or off would not cause this problem. Just to check anyway I tried it both ways.
|
STACK_EXCHANGE
|
What's the difference/relationship between Arduino and AVR?
I'd always thought Arduino was a microcontroller platform but the actual microcontroller is an AVR chip made by Atmel, or something like that made by someone else, based on a RISC ISA, and Arduino is usually used to refer to the whole circuit board powered by this AVR chip. Is my understanding correct?
What's the difference/relationship between Arduino and AVR?
Arduino is a prototyping board, and the term "Arduino" is also used to refer to the IDE and library on the PC side, and the whole ecosystem.
AVR is the architecture (developed by Atmel) of the microcontroller chip used in all official 8-bit boards, and almost all clones.
Arduino Uno and Duemilanove ("2009"), the most used boards, use the ATmega328P chip.
Many times, Arduino is used to quickly test some idea, sensor, or circuit; then a stand-alone board is built around the ATmega chip, as it costs 1/10 of the Arduino board. A soldered circuit on a stripboard or on a custom PCB is also more reliable, and can be optimized in some aspects, like power usage, space occupied, high current/voltage, and so on.
The newest and most advanced Arduino boards use different chips with very different architectures: the Arduino Yún pairs a classic AVR with a MIPS-based chip running Linux, the Due uses an ARM (the same architecture used by many smartphones), and the Galileo uses an x86 (like a classic single-core CPU).
Arduino is a set of open-sourced hardware- and software specifications, originally conceived as a students' platform. I would add that the specifications, though openly available, are distributed under various public licenses.
There are "official" Arduino boards made by an Italian company of the same name, but as the board designs are open-sourced, there are lots of good (and a few "less good") variants from other sources.
AVR refers to the line of MCUs manufactured by Atmel and used in the original designs. Atmel was acquired in 2016 by Microchip Technology, who continue to manufacture the AVR devices.
Some of the more recent Arduino designs use more capable processors from other chip manufacturers, but many still use the Atmel/Microchip AVR processors. The smaller AVRs' relative simplicity (ATtiny & ATmega, f/ex) make them ideal for quickly designing/building boards and software for less demanding applications and for learning about programming and digital controls.
The Arduino is an AVR processor running special code that lets you use the Arduino environment.
AVR's can be used by themselves with some additional supporting components.
Arduino is a combination of an AVR chip and the supporting board.
A bare AVR is a single chip, and would require a breadboard and supporting components.
For the record, you can use the Arduino environment for many AVR chips without any special code. The only extra thing you need is an AVR programmer which could be a $40, official Atmel one, or a $5 USPASP programmer, or even another Arduino running the ArduinoISP sketch.
Arduino is really a common set of code that makes using the dev boards they sell accessible to a wide range of users.
Arduino is basically an IDE that uses the C/C++ language and a set of classes that are adaptable to a common set of hardware, predominantly Atmel and mostly AVR, although as people have already mentioned, the Arduino boards are becoming increasingly powerful. But it is amazing how much you can do with the ATmega328P.
You don't have to use the Arduino IDE to program your board and in fact, I tend to use Atmel Studio myself or Notepad++. You can get a plug-in for Atmel Studio 6.2 and above that allows you to create sketches and upload them to most Arduino boards.
The Arduino tools and ecosystem supports processors other than just Atmel AVR chips. For instance the Arduino Due uses an ARM Cortex-M3 processor.
Although the previous answers are technically correct, I think there is currently a shift in the meaning of "Arduino".
When the original Arduino boards were released they were head and shoulders better than the competition, especially when looking at the cost vs performance tradeoff.
But this is no longer the case. Boards based around the ESP8266 and ESP32 chips are cheaper and far more powerful than the Atmel chips. Also, the Arduino IDE has been extended to support these chips. And to further confuse the issue, this "Arduino" forum has almost 2000 questions tagged ESP8266 and around 850 tagged ESP32.
So expect changes. They are already happening. Will "Arduino" come to mean any chip supported by the IDE? Will this forum move to a more generic name like IOT? I really don't know.
|
STACK_EXCHANGE
|
Last week the PhoneGap team held their 2nd annual European conference in the vibrant city of Amsterdam. Appropriately named PhoneGap Day EU, this was a conference perfectly crafted for every level of hybrid mobile app developer who uses the Cordova/PhoneGap framework.
The organizers picked a wonderful venue for the conference as well: the Compagnietheater. Centrally located in an older section of Amsterdam, it was a great spot for a conference that showcases some of the best of what the open source movement has brought to hybrid mobile app development.
PhoneGap Day conferences are relatively unique in that there is only one track for attendees. Each session is only 20 minutes long (10 minutes for vendor sessions - and let me tell you, does that get interesting). This format is definitely a double-edged sword: if you get bored you only have to wait a few minutes for the next speaker, but more likely, the speakers will leave you wanting more!
Day one of the conference was filled with workshops geared towards every level of PhoneGap developer. Christophe Coenraets and Holly Schinsky led a series of workshops that took you from a beginner PhoneGap developer to architecting more advanced apps (and what you can do with Cordova plugins). Google and Microsoft also gave workshops on how to develop Chrome Packaged Apps and Windows Phone 8 apps using the Cordova framework - separately of course :).
I personally had the privilege of not only attending the conference, but speaking as well. My session was on app store rejection - tips and tricks on how to avoid rejection, especially on the iOS app store. For those of you who are curious, my slides and notes are available for download. This talk was loosely based on my Icenium blog post on the same subject.
(Thanks to Vincent Hoogsteder for the picture and thanks to Reddit for the slide idea!)
Every speaker did a great job, and no I'm not just saying that. I'd like to take a moment to recognize the speakers here and encourage you to follow them on Twitter and/or look into their topics more closely:
The list goes on and on: Michael Brooks on the PhoneGap CLI was great and showcased the power of the CLI, Fil Maj showed off a really interesting end-to-end testing platform called appium, and Ally Ogilvie dove into the impressive Ejecta framework for developing performant HTML5 games with Cordova.
You may be asking, what does all of this have to do with Icenium? Well, Icenium uses the Cordova framework at its core, so everything that has to do with PhoneGap directly applies to what we are all building with Icenium. In fact, with the latest release of Icenium, we are now using the latest version of Cordova and include the Icenium Extension for Visual Studio as another development option!
If you are considering attending a future PhoneGap Day conference, trust me when I say it is well worth the time. It doesn't matter if you are a complete beginner or a seasoned hybrid mobile app developer, you will get something out of this conference and meet some cool people as well.
I'd like to send a big thanks out to some of the organizers, including Colene, Brian, Martijn, and PPK. Without them this never would have been possible.
Rob Lauer is Senior Manager of Developer Relations at Progress and has a passion for mobile app development and the open web. You can find Rob rambling as @RobLauer on Twitter.
|
OPCFW_CODE
|
Former Microsoft employee slams Windows 11's Start Menu design
The Windows 11 Start Menu is once again drawing flak from users, this time from a former Microsoft employee. Jensen Harris, who worked at the company for 16 years, slammed the Redmond company for ruining the Windows 11 Start Menu's design.
He posted a series of tweets sharing his opinions about his experience with Windows 11. He called the Start Menu the flagship user experience, and said that he was shocked by its design in the latest OS.
Image courtesy: Jensen Harris
Former Microsoft employee criticizes Windows 11's Start Menu design
Microsoft recommends Edge as the default browser wherever possible, even going as far as making it a bit complicated to change the default browser. This has been widely criticized by users, the tech community, and even other browser makers. These ads extend to the Start Menu of the operating system. Harris compared the Edge recommendation in the right panel of the Start Menu's Search interface to the Internet Explorer toolbars from the 2000s.
He said that the Bing Wallpaper app ad at the top of the search results looks like a banner ad from the Geocities era. The ex-Microsoft employee seemed to be equally appalled by the inconsistent design principles in the UI, particularly mentioning the corners of the ads and buttons: one was rounded, another had a square edge, while a third had a squircle design. That's kind of ironic considering that the rounded-corner design was touted by Microsoft as one of Windows 11's design standards.
Harris also questioned the company's intentions about placing ads in the Start Menu, asking whether the amount of money that the wallpaper app makes is worth "cheapening" the user experience. The former Microsoft engineer also criticized the migration of the Start button to the center of the taskbar.
Harris was the Director of Program Management for the Windows User Experience. So one can imagine he has a lot of insight about the GUI. He recalled the time when Microsoft once prioritized the design of the Start Menu, explaining how his team had created a special ligature for the font used in Windows (Segoe UI). They had to work on aligning the S and t in Start together. But that is no longer the case. Harris highlighted the importance of the UI, while mentioning that many designers whom he worked with are still at Microsoft. The talent is there, but it doesn't seem to be utilized correctly.
I think he is spot on with his arguments; the Start Menu is, after all, one of the most used features in the OS. But it is barely recognizable if you're coming from an older version of Windows, which in turn ruins the user experience. There are of course other issues in Windows 11, such as the lack of an option to move the Taskbar to the top or the sides of the screen, the Taskbar right-click menu, etc.
Users have complained about the Start Menu in Windows 11 ever since the first preview version of the OS was released, but when a person who was formerly in charge of software design at Microsoft gives their opinion about the UI, it hits on a different level. It clearly shows that the company is not focusing on the user experience.
|
OPCFW_CODE
|
React: Iframe rendering new line character \n as a single space
I am working on an application using Node.js, Express, React & Redux that enables users to write code into a text editor. When the user hits 'run' to submit their code, the code is compiled using the JDoodle API.
As part of the JSON response returned from JDoodle I get the compiled output, so for instance:
System.out.println("Hello World!");
System.out.println("Hello Universe!");
would return:
Hello World!\nHello Universe!\n
The issue I am having is that I am rendering this output in an Iframe within a React component, and when I do so it is rendering the new line character \n as a single space. So for instance the above code would produce the following in the Iframe:
Hello World! Hello Universe!
I have tried giving the Iframe a className of iFrame and then in CSS doing the following:
.iFrame {
white-space: pre-wrap;
}
but no success! I also saw a few others mention escaping the backslash, i.e. replacing \n with \\n:
.replaceAll('\n', '\\n')
but I cannot seem to get this working, as every time I use the backslash character in quotes JavaScript thinks I'm escaping the previous character.
I also tried
.split('\\')
but having the same issue as above.
Any help would be much appreciated, thank you!
EDIT:
I had read Override body style for content in an iframe which this question was marked as a duplicate for but it does not address my issue in any way. I have now resolved my problem and if the question could be reopened I can hopefully post the solution which may in turn help others facing the same issue. Thank you!
"Iframe a className of iFrame" — Styles applied to an iframe do not influence the document loaded into the iframe. To style that document you have to do so explicitly.
"I have tried giving the Iframe a className of iFrame and then in CSS doing the following" - of course that does not work; you do not want to format the iframe element itself, you want to format parts of the document that gets loaded into the iframe - so that is where you have to apply any formatting.
The output in the response from the API comes back with new line characters in it, and when I log this output in the console the new line characters are converted to carriage returns. It is only in the iframe that the new line character is converted to a single space, so I'm unsure how I can format the output before loading it into the iframe if it's already in the format I want before being rendered. Hence why I feel it's a problem with how the new line character is being handled in the iframe.
@seshkebab Well, what was causing the problem?
|
STACK_EXCHANGE
|
Every year McDonald’s has a Monopoly-based campaign. I built the accompanying website in 2013.
Take a chance
This was a fun bit of engineering.
The website centred around the “take a chance” mechanic in Monopoly, where a player draws a card at random and some effect takes place. Depending on the card drawn (“second prize in a beauty contest”, “go to jail”, and so on) an image was generated featuring the user’s likeness (chosen from their Facebook photos). As part of choosing the image, the user lined up their photo according to a dotted line by panning, zooming, and rotating it. (While the design had just a dotted oval line for the face, I had the idea of including an extra dotted line for the character’s moustache, which went down really well.) The parameters of these transformations were sent to the server along with the image URL.
One type of generated image had the user’s face on Rich Uncle Pennybags’s body, behind bars, in jail. Another simulated the user’s face printed on some Monopoly money.
For all of these a task was queued and then one of multiple render nodes would pick it up and start work as soon as capacity allowed. The jobs involved getting the user's image from S3 and processing it according to the share image type. The processing involved cropping, resizing, and SRT transformations according to the user's parameters, applying effects, and compositing the image with various masks and background and foreground layers. Most of this was done with ImageMagick.
The most complicated image type had the user’s face embossed on a gold medallion worn around the neck of a 3D-rendered leather-clad character. Here’s how this was achieved.
- Most of the scene was pre-rendered as a background layer.
- I analyzed the Photoshop file which composited the original render with some effects such as lens flares and colour correction, and reproduced this as some overlays which could be applied with ImageMagick.
- I then took the 3D scene file into Blender and isolated the gold medallion, and readied it for custom normal maps. This gave a new input file for the render node.
- From a flat photo such as those we were getting from the users, there’s no real way to produce a normal map, but one can be faked, such as with a Gimp plugin. I took this plugin and wrote a script for headless Gimp (Gimp scripts are written in Scheme!) to produce a fake normal map for the user’s image.
- This normal map was then taken into Blender via a Python Blender script (scripting Blender was a far more pleasant experience than scripting Gimp!), it was applied to the medallion, and then that portion of the scene was rendered, all headless.
- The various layers were then composited, masked, and blended via an ImageMagick script.
Once the share asset was ready, it was stored on S3 and the user could download or share it.
The campaign was popular, and the render nodes happily churned out thousands of images with minimal waiting time for the user. There was one mishap with a render node one day, and it’s a funny story I’ll tell you about over a beer some time if you ask.
|
OPCFW_CODE
|
If fuel was automated then we would all be yelling and screaming about why our ships are running to bases when we gave them the order to attack a colony or a ship or pirates, or to build this and it goes there. Us humans all think differently. I believe this fuel issue really can't be dealt with properly. You think automation should work this way, I might think it should work that way, and the developer might think of something else. So if this is ever implemented someone is going to cry foul and say it is broken and it should be like that.
If I have ships in a fleet, I don't care if they run out of fuel; I might want them to do the job that I set out for them to do. If anything, this automation has to be able to be turned on or off, or have a set number of rules to follow that we as the human player pick, deciding which ships use which fuel plan.
So I guess what we need is a fuel plan for the ships to follow.
That's why the first thing I suggested was one or two different options to turn it on/off. But maybe making a toggle option on the ship itself (like with automation) is a better idea?
The idea behind all of this is that if you enable automation you probably want them to refuel more intelligently.
For fleets, fuel should never be a hassle. Right now we can control only 10 of those, so make it worthwhile by letting us fine-tune, for each fleet separately, how refueling is performed:
- Let us set a desired level of fuel, something like 20%, 40%, 60% (+ 80% maybe). Any ship dropping below that level will go refueling. Reason: when I keep a fleet ready, I want it to be ready and not low on fuel when a threat pops up, so I am inclined to set a desired fuel level of 60% or even 80% for fleets that are idle/in reserve. This fuel level only applies for ships that are idling. A ship either engaged in combat or heading into combat (could be a manually settable state on fleet management screen) instead handles fuel according to this:
- Another desired level of fuel for when combat is broken off (like 5%, 10%, 20%, ...). When below it, the ship runs, preferably to a fuel source. A ship out of fuel in combat is dead unless the engagement is already won. This would require the game to have a notion of when a ship is in combat and when not.
- Settings defining whether the above levels are mandatory, i.e. whether ships go refueling autonomously, just issue a warning, or ignore the conditions.
- Let us set a desired (note, not required) location where to refuel. It could default to the home colony but should not be restricted to that, settings like 'nearest refueling point/starbase', specific Mining Stations, a deployed Resupply Base should also be eligible. In case of one or more explicitly given colonies this should create the demand at the selected locations which gets the fuel actually transported there (hopefully). Even better: it could reserve fuel at the selected location(s) for the fleet and nobody else. Regarding the handling of desired vs. required: A ship should at the very least prefer fuel sources that are reachable without running out, even if not on the list.
- I still think this game needs fuel tankers under military control which should be assignable to fleets and then do nothing but keep the ships therein fueled and ready while those follow their other duties.
- Each ship that goes refueling should enter a state where the usual commands issued to a fleet as a whole do not apply. The fleet management screen should explicitly list such ships as are refueling and should allow us to override the state there. This could be handled similarly to ships in an active engagement where the combat levels of fueling apply.
- Some reasonable, better configurable, global defaults that can be set on a button click to apply when we want to make a fleet battle ready without too much hassle.
Of course, refueling settings shouldn't be the only thing that we can fine-tune for a fleet. With tankers (and constructors!) assignable to fleets, these'd become more complex organisms where ships could be explicitly assigned to escort duty. Then there'd be settings about whether constructors or tankers go into combat zones to do their job or not. A designation of an area for the baggage train plus escort and so on.
< Message edited by sbach2o -- 4/13/2010 2:19:47 PM >
|
OPCFW_CODE
|
Susy: Push and Pull
The following code places my sidebar-first one column off the left of the screen.
.has-sidebar-first {
.l-content {
@include span-columns(15 omega, 16); // Span 15 out of 16 columns.
@include push(1, 16); // Push element by adding 1 out of 16 columns of left margin.
}
.l-region--sidebar-first {
@include span-columns(1, 16); // Span 1 out of 16 columns.
@include pull(1, 16); // Pull element by adding 1 out of 16 columns of negative left margin.
}
}
The sidebar-first should take up the first column and content should take up the next 15.
I have had pull set 1 through to 16 but it is either out of place or disappears entirely.
Any suggestion?
Update 1: Here is the full scss layout (including the suggestion from Eric Meyer, the man himself!), which places the sidebar-first further off page to the left. It appears to be off-page by the same width as the l-content.
@import "susy";
// Susy Variables
// Set consistent vertical and horizontal spacing units.
$vert-spacing-unit: 20px;
$horz-spacing-unit: 1em;
// Define Susy grid variables mobile first.
$total-columns: 4;
$column-width: 4em;
$gutter-width: $horz-spacing-unit;
$grid-padding: 5px;
$container-style: magic;
$container-width: 1200px;
// Susy Media Layouts @see http://susy.oddbird.net/guides/reference/#ref-media-layouts
$tab: 44em 12; // At 44em use 12 columns.
$desk: 70em 16; // At 70em use 16 columns.
.l-header,
.l-main,
.l-footer {
@include container; // Define these elements as the grid containers.
margin-bottom: $vert-spacing-unit;
}
.l-region--highlighted,
.l-region--help,
.l-region--sidebar-first,
.l-region--sidebar-second {
margin-bottom: $vert-spacing-unit;
}
@include at-breakpoint($tab) { // At a given Susy Media Layout, use a given amount of columns.
.l-header,
.l-main,
.l-footer {
@include set-container-width; // Reset only the container width (elements have already been declared as containers).
}
.l-branding {
@include span-columns(4, 12); // Span 4 out of 12 columns.
}
.l-region--header{
@include span-columns(8 omega, 12); // Span the last (omega) 8 columns of 12.
}
.l-region--navigation {
clear: both;
}
.has-sidebar-first,
.has-sidebar-second,
.has-two-sidebars {
.l-content {
@include span-columns(7, 12); // Span 7 out of 12 columns.
@include push(1, 12); // Push element by adding 1 out of 12 columns of left margin.
}
.l-region--sidebar-first {
@include span-columns(1, 12); // Span 1 out of 12 columns.
}
.l-region--sidebar-second {
@include span-columns(4 omega, 12); // Span the last (omega) 4 columns of 12.
}
.l-region--sidebar-first {
@include pull(8, 12); // Pull element by adding 8 out of 12 columns of negative left margin.
}
.l-region--sidebar-second {
clear: right;
}
}
}
@include at-breakpoint($desk) {
.l-header,
.l-main,
.l-footer {
@include set-container-width; // Reset only the container width (elements have already been declared as containers).
}
.l-branding {
@include span-columns(6, 16); // Span 6 out of 16 columns.
}
.l-region--header{
@include span-columns(10 omega, 16); // Span the last (omega) 10 columns of 16.
}
.has-sidebar-first {
.l-content {
@include span-columns(15 omega, 16); // Span 15 out of 16 columns.
}
.l-region--sidebar-first {
@include span-columns(1, 16); // Span 1 out of 16 columns.
}
}
.has-sidebar-second {
.l-content {
@include span-columns(12, 16); // Span 12 out of 16 columns.
}
.l-region--sidebar-second {
@include span-columns(4 omega, 16); // Span the last (omega) 4 columns of 16.
clear: none;
}
}
.has-two-sidebars {
.l-content {
@include span-columns(10, 16); // Span 10 out of 16 columns.
@include push(1, 16); // Push element by adding 1 out of 16 columns of left margin.
}
.l-region--sidebar-first {
@include span-columns(1, 16); // Span 1 out of 16 columns.
}
.l-region--sidebar-second {
@include span-columns(5, 16); // Span 5 out of 16 columns.
}
.l-region--sidebar-first {
@include pull(11, 16); // Pull element by adding 11 out of 16 columns of negative left margin.
}
.l-region--sidebar-second {
@include omega; // This element spans the last (omega) column.
clear: none;
}
}
}
.has-two-sidebars is working as desired. I am only hoping to fix .has-sidebar-first within @include at-breakpoint($desk). If there is something inherently wrong with how it is set up then I will have to change the lot, but I am hoping to simply change the layout when viewed on a desktop where there is no sidebar-second.
Thanks
Update 2
Following the suggestion to add margin-left: 0; here is it added.
.has-sidebar-first {
.l-content {
@include span-columns(15 omega, 16); // Span 15 out of 16 columns.
}
.l-region--sidebar-first {
@include span-columns(1, 16); // Span 1 out of 16 columns.
margin-left: 0;
}
}
While this now aligns the sidebar-first to the correct column, it appears below the content, as per the picture:
The rest of the code is the same. The two sidebar option still displays correctly.
Any suggestions?
Solution:
As per Eric's suggestion I needed to clear any previously declared pushes and pulls. So the correct code is:
.has-sidebar-first {
.l-content {
@include span-columns(15 omega, 16); // Span 15 out of 16 columns.
margin-left: 0;
}
.l-region--sidebar-first {
@include span-columns(1, 16); // Span 1 out of 16 columns.
margin-left: 0;
}
}
Thanks
Get rid of both the push and the pull. Neither one is needed. Your omega item is floated right, and the other item is floated left, so both will fall perfectly into place without needing any push/pull help.
update:
You have a pull set on .l-region--sidebar-first at one of the smaller breakpoints, which is still being applied at the larger breakpoint. You just need to set margin-left to 0 at the $desk breakpoint.
thank you for the reply Eric, and all the time you have put into Susy. I tried removing the push and pull but it doesn't display correctly. I have updated the question, which now includes the full scss. Does anything pop to mind as a quick fix?
I have added the margin-left 0 and updated the question. Thanks
Yep, you still have a push on the content, pushing it over 1 of 12. You may have other things like that, which I'm missing. The point is: if you add things earlier in the cascade, you have to remove them later if you don't still need them.
I added another margin-left 0 this time to the content and all sorted. Thank you for help out and being so patient with me!
|
STACK_EXCHANGE
|
Lossless audio codec comparison revision 5 - hi-res part
This document compares the performance of lossless audio codecs on high resolution material, which are PCM audio files with more than 16 bit per sample or a samplerate higher than 48kHz.
The test corpus consists of four 96kHz, 24 bit per sample sources, three 192kHz, 24 bit per sample sources, one 192kHz, 16 bit per sample source and two 352.8kHz, 24 bit per sample sources.
To compare the performance of each codec, the following steps are followed for each combination of corpus file, codec and codec setting:
- A WAV file is placed on a large enough ramdisk
- An MD5sum is calculated for the WAV file excluding its header
- The WAV file is encoded by the chosen codec provided with the required settings. The amount of CPU time required to do this conversion is measured and the resulting filesize is recorded
- The encoded file is decoded by the chosen codec. The amount of CPU time required to do this conversion is measured
- An MD5sum is calculated for the resulting decoded file excluding its header
- The MD5sum of the provided WAV file and the decoded file are compared
The following codecs and settings are used:
|FLAC||-0, -1, -2, -3, -4, -5, -6, -7, -8|
|WavPack||-f, [-], -h, -hh, -x4f, -x4, -x4h, -x4hh|
|TAK||-p0, -p0e, -p0m, -p1, -p1e, -p1m, -p2, -p2e, -p2m, -p3, -p3e, -p3m, -p4, -p4e, -p4m|
|OptimFROG||presets 0 through 10 and max|
|Monkeys Audio||-c1000, -c2000, -c3000, -c4000, -c5000|
|MP4ALS||-a -o5, -a, -a -o20, -a -o40, -a -b, -a -b -o20, -a -b -o40, -a -b -o1023, -7|
For each codec, the latest Windows binary provided by the author of the codec is used, no specially tuned compiles are used. In case of ALAC, encoding is done with refalac64 as provided by QAAC. In case of WMA, encoding is done with WMAEncode.exe (which uses the encoder provided by Windows 10) and decoding with FFmpeg 5.0.
Not all codecs appear in all results, for example Shorten and La only support 16 bit per sample sources and WMA does not support samplerates above 96kHz.
Measurements are made on a Windows 10 machine with an AMD A4-5000 CPU with 4GB of RAM. This CPU has all x86 instruction set extensions up to and including AVX (i.e. it lacks AVX2). Measuring the CPU time used is done with timer64.exe, part of the 7-max/7-benchmark suite.
Timing is done per track. This measured time is divided by the content length, i.e. the execution time of the encoding or decoding process in seconds divided by the playback length of the track or chapter in seconds. The result of this division is called CPU-usage. The filesize of the encoded file is divided by the filesize of the original WAV file to calculate a compression ratio.
Results per source are obtained by averaging the compression and CPU-usage so each track or chapter contributes the same amount to the average, i.e. length of the track or chapter is not incorporated.
The average of all sources is obtained by averaging the results per source, again without any weighing. The total results are therefore not influenced by the length of the corpus content, each source contributes an equal amount to the average.
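The two metrics and the unweighted averaging described above can be sketched as follows (the track values are hypothetical, purely for illustration):

```javascript
// CPU-usage: encode/decode CPU time divided by playback length.
const cpuUsage = (cpuSeconds, trackSeconds) => cpuSeconds / trackSeconds;
// Compression ratio: encoded filesize divided by original WAV filesize.
const compression = (encodedBytes, wavBytes) => encodedBytes / wavBytes;

// Unweighted average: each track contributes equally, regardless of length.
const average = xs => xs.reduce((a, b) => a + b, 0) / xs.length;

// Two tracks of one hypothetical source:
const tracks = [
  { cpu: 12, len: 240, enc: 60_000_000, wav: 100_000_000 },
  { cpu: 5,  len: 100, enc: 33_000_000, wav: 60_000_000 },
];
const sourceCompression = average(tracks.map(t => compression(t.enc, t.wav)));
const sourceCpuUsage = average(tracks.map(t => cpuUsage(t.cpu, t.len)));
```

The per-source results are then averaged once more, again without weighting, to obtain the overall figures.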
The best results will be obtained in the bottom left corner of the graphs: this represents the best compression (smallest file size) and the lowest CPU-usage (fastest compression and/or decompression).
In the graph each codec is represented by a group of markers connected by a line. Each combination of settings as mentioned in the previous table corresponds to one marker, in the order listed in the table. The first combination of settings is (usually) the fastest and the last the slowest. Therefore, the marker closest to the upper left corner of the graph corresponds to the first listed combination of settings, and the marker closest to the lower right corner corresponds to the last listed combination of settings.
Looking at the average of hi-res sources up to and including 192kHz, TAK seems to be the clear winner. There is not a single combination of settings for TAK where it is beaten on both file size and either encoding or decoding speed by any other codec. Its decoding speed is second only to FLAC, but the difference is small while producing much smaller files.
When looking at all hi-res sources, TAK is no longer present (as it does not support sample rates above 192kHz). If encoding speed is deemed of lesser importance than decoding speed, WavPack performs very well when used with the -x4 setting.
TTA holds an interesting spot, with reasonably fast encoding and decoding and reasonable compression. WavPack -x4 compresses better and decodes faster but encodes slower. ALS compresses better but is slightly slower for both encoding and decoding.
OptimFROG with --preset max produces the smallest files in this test, albeit at the cost of very slow encoding and decoding. Except for the 96kHz files, the test CPU was not able to decode the OptimFROG --preset max files in realtime, thus making playback impossible.
Monkey's Audio shows erratic behaviour, with two of its faster presets (-c2000 and -c4000) outperforming the next slower presets (-c3000 and -c5000 respectively) by a non-negligible margin.
When looking at the individual results, the chiptune source (Junichi Masuda, Go Ichinose - Pokemon Gold) stands out as very different.
|Artist - Album||Format||Comment||Year|
|Carmen Cavallaro - Songs of Our Times, Song Hits of 1921||96kHz 24-bit||78rpm digitisation||1921|
|Cascades - Cascades||96kHz 24-bit||Digital download||2017|
|Coldplay - A Head Full of Dreams||192kHz 24-bit||BD-A rip||2016|
|Frode Fjellheim - Kyrie (Cantus & Frode Fjellheim)||352.8kHz 24-bit||Digital download (2L.no testbench)||2015|
|Johnny Cash - I Walk The Line||192kHz 24-bit||Digital download||2010|
|Junichi Masuda, Go Ichinose - Pokemon Gold||192kHz 16-bit||Generated||1999|
|Nine Inch Nails - The Slip||96kHz 24-bit||Digital download||2008|
|Sound Liaison - The Visual Sound (DXD Music Sampler)||352.8kHz 24-bit||Digital download||2019|
|Pink Floyd - The Dark Side of the Moon||96kHz 24-bit||BD-A rip||1973|
|Warren Zevon - Mutineer (perf. Jenna Mammina, Matt Rollings)||192kHz 24-bit||Digital download (Blue Coast Music sampler)||2011|
|
OPCFW_CODE
|
Bitcoin private keys are created by Bitcoin wallets and they represent ownership of funds. To spend funds on the Bitcoin Blockchain, a transaction signed with a private key must be presented by a Bitcoin wallet. Private keys are not created through any cooperative network process; instead, they are simply chosen at random from within an impossibly large range: two to the two hundred and fifty-sixth power. These keys must be kept secret to prevent fund loss, so wallets typically hide them to avoid accidental exposure.
Every time coins are received, users create a new private key, from which a public address to give out is derived via a one-way function. Private keys should not be copied between wallets except as whole-wallet backups; however, in some instances their direct use may be necessary. In this case, the best practice is to direct a wallet to use the private key in a new transaction that consumes all funds controlled by the private key, a method known as sweeping. The alternative method, adding the private key to an existing wallet, known as importing, is not recommended and can lead to the loss of funds.
Private keys are simply large numbers, which computers store in a binary form that is not convenient for direct use. The common convention for direct use of private keys is the WIF encoding standard, which uses an encoding that helps discourage typos, an algorithmic checksum which helps determine validity, and a small type-hinting signal to indicate the private key type. Bitcoin private keys in this format look like strings of letters and numbers, starting with a K or L in the compressed format or a 5 in the uncompressed version.
Security of Private Keys
It may be hard to grasp that a large random number may be secure. Why can't it simply be guessed and the funds stolen? The answer lies in the extremely large size of the random number: so large that guessing is impractical. It can be thought of as someone trying to guess a number picked out of a million by shouting guesses. It would take a long time, because a million is large. The same principle works for Bitcoin: the number is so impossibly large that the time required is impossibly long, even for the fastest possible guessing system.
The number of possibilities to brute force a private key is massive: two to the one hundred and sixtieth power for a standard spending collision, a number inconceivably high even assuming exponential growth of computing power. And the mining process itself acts as a shield against key collisions: since mining presents a reliable reward for collision finding, it's a more profitable alternative to brute forcing private keys.
Still, there are potential flaws and weaknesses in the scheme that reduce the security of a private key. One of the most common weaknesses is simply to deliberately or accidentally choose a lower random range for a private key. This sounds unlikely, but some Bitcoin enthusiasts have experimented with lowering their private key entropy to allow them to directly remember their private keys. This has led to coin loss: humans are ill-equipped to judge entropy, and as a result people have chosen guessable keys.
Another issue may simply be that software has been incorrectly coded, through malice or accident, to use a more guessable number. This has also led to coin loss: correctly choosing a random number is a linchpin of the Bitcoin security model, and there is no failsafe to protect against a weak number, other than some benevolent individuals who constantly scan for these mistakes to help restore funds that are left out in the open.
Going into the theoretical, another weakness exists in the fund security system: public key attacks. Unfortunately there is a weakness in the public and private key cryptography used where knowledge of the public key counterpart to a private key drastically reduces its resistance to brute force guessing, to two to the power of one hundred and twenty eight. Even though this number is billions of times smaller than a private key, it's still considered almost impossibly large. To protect against this, early on in Bitcoin's development the transaction system was altered to hide public keys up until the ten minute period where they enter the Blockchain. This prevents a brute force attack on a public key by only offering a short time window for attack, however reusing an address means funds sent to that address are vulnerable to a brute force attack. This is one of the multiple reasons that address reuse is strongly discouraged. Large amounts of Bitcoin funds are nevertheless secured behind old or reused address transactions that are exposed to public key attack.
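For a sense of the scale of the three numbers mentioned above, they can be written out exactly with JavaScript's arbitrary-precision BigInt (a quick arithmetic sketch, not wallet code):

```javascript
// Key-space sizes discussed above, as exact integers.
const privateKeySpace = 2n ** 256n;    // range a private key is drawn from
const addressSpace = 2n ** 160n;       // standard spending-collision space
const exposedKeyStrength = 2n ** 128n; // effective strength once the public key is known

console.log(privateKeySpace.toString().length);    // 78 decimal digits
console.log(addressSpace.toString().length);       // 49 decimal digits
console.log(exposedKeyStrength.toString().length); // 39 decimal digits
```

Even the "drastically reduced" figure of two to the one hundred and twenty-eighth power is a 39-digit number, which is why it is still considered almost impossibly large.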
It's considered very unlikely that signature brute forcing would present a real threat to private key security, and were it to happen, the longstanding plan would be to adjust the cryptography in a preemptive move before any attack might become feasible. But much more worrying than an attack on the public keys by standard brute forcing, is the potential for a quantum computing brute force attack.
The possibility of a quantum computing attack makes it quite important to hide public keys by avoiding address reuse, as an attack within a lifetime starts to become feasible when considering quantum computing advances. For a threat to become apparent, quantum computers would require improvements making them hundreds or thousands of times more powerful, but given technical improvement over many years this is quite possible.
In addition to avoiding address reuse, efforts towards which have been underway for many years, there is a possibility of adjusting the signature cryptography to be quantum resistant. As a final prevention method the network might eventually consider the funds stored by a vulnerable signing method to be invalid and lost, and prevent their movement with any signature.
|
OPCFW_CODE
|
package seedu.address.logic.comparators;
import java.util.Comparator;
import seedu.address.logic.parser.exceptions.ParseException;
import seedu.address.model.patient.Patient;
/**
* Contains comparators for each patient attribute
*/
public class PatientComparator {
/**
* Comparator to sort via Name
*/
private static Comparator<Patient> compPatientName = new Comparator<Patient>() {
@Override
public int compare(Patient p1, Patient p2) {
return p1.getName().toString().compareTo(p2.getName().toString());
}
};
/**
* Comparator to sort via Phone number
*/
private static Comparator<Patient> compPatientPhone = new Comparator<Patient>() {
@Override
public int compare(Patient p1, Patient p2) {
return p1.getPhone().toString().compareTo(p2.getPhone().toString());
}
};
/**
* Comparator to sort via Email.
*/
private static Comparator<Patient> compPatientEmail = new Comparator<Patient>() {
@Override
public int compare(Patient p1, Patient p2) {
return p1.getEmail().toString().compareTo(p2.getEmail().toString());
}
};
/**
* Comparator to sort via Address.
*/
private static Comparator<Patient> compPatientAddress = new Comparator<Patient>() {
@Override
public int compare(Patient p1, Patient p2) {
return p1.getAddress().toString().compareTo(p2.getAddress().toString());
}
};
/**
* Comparator to sort via Nric.
*/
private static Comparator<Patient> compPatientNric = new Comparator<Patient>() {
@Override
public int compare(Patient p1, Patient p2) {
return p1.getNric().toString().compareTo(p2.getNric().toString());
}
};
/**
* Comparator to sort via Date of Birth.
*/
private static Comparator<Patient> compPatientDob = new Comparator<Patient>() {
@Override
public int compare(Patient p1, Patient p2) {
return p2.getDateOfBirth().compareTo(p1.getDateOfBirth());
}
};
/**
* Comparator to sort via Sex.
*/
private static Comparator<Patient> compPatientSex = new Comparator<Patient>() {
@Override
public int compare(Patient p1, Patient p2) {
return p1.getSex().getSex().compareTo(p2.getSex().getSex());
}
};
/**
* Comparator to sort via Drug Allergy.
*/
private static Comparator<Patient> compPatientDrug = new Comparator<Patient>() {
@Override
public int compare(Patient p1, Patient p2) {
return p1.getDrugAllergy().toString().compareTo(p2.getDrugAllergy().toString());
}
};
/**
* Comparator to sort via Description.
*/
private static Comparator<Patient> compPatientDesc = new Comparator<Patient>() {
@Override
public int compare(Patient p1, Patient p2) {
return p1.getPatientDesc().toString().compareTo(p2.getPatientDesc().toString());
}
};
/**
* Comparator to sort via NextOfKin's Name.
*/
private static Comparator<Patient> compPatientKinName = new Comparator<Patient>() {
@Override
public int compare(Patient p1, Patient p2) {
return p1.getNextOfKin().getName().toString().compareTo(p2.getNextOfKin().getName().toString());
}
};
/**
* Comparator to sort via NextOfKin's Relation.
*/
private static Comparator<Patient> compPatientKinRelation = new Comparator<Patient>() {
@Override
public int compare(Patient p1, Patient p2) {
return p1.getNextOfKin().getKinRelation().toString()
.compareTo(p2.getNextOfKin().getKinRelation().toString());
}
};
/**
* Comparator to sort via NextOfKin's Phone.
*/
private static Comparator<Patient> compPatientKinPhone = new Comparator<Patient>() {
@Override
public int compare(Patient p1, Patient p2) {
return p1.getNextOfKin().getPhone().toString().compareTo(p2.getNextOfKin().getPhone().toString());
}
};
/**
* Comparator to sort via NextOfKin's Address.
*/
private static Comparator<Patient> compPatientKinAddress = new Comparator<Patient>() {
@Override
public int compare(Patient p1, Patient p2) {
return p1.getNextOfKin().getAddress().toString().compareTo(p2.getNextOfKin().getAddress().toString());
}
};
public static Comparator<Patient> getPatientComparator(String parameterType) throws ParseException {
Comparator<Patient> paComp;
switch (parameterType.trim()) {
case "name":
paComp = compPatientName;
break;
case "phone":
paComp = compPatientPhone;
break;
case "email":
paComp = compPatientEmail;
break;
case "address":
paComp = compPatientAddress;
break;
case "nric":
paComp = compPatientNric;
break;
case "dob":
paComp = compPatientDob;
break;
case "sex":
paComp = compPatientSex;
break;
case "drug":
paComp = compPatientDrug;
break;
case "desc":
paComp = compPatientDesc;
break;
case "kinN":
paComp = compPatientKinName;
break;
case "kinR":
paComp = compPatientKinRelation;
break;
case "kinP":
paComp = compPatientKinPhone;
break;
case "kinA":
paComp = compPatientKinAddress;
break;
default:
throw new ParseException("Unknown sort parameter: " + parameterType);
}
return paComp;
}
}
|
STACK_EDU
|
Everything you could possibly want to know about Internet Email – Part 1
In this series of posts, I'm going to share what I know about email. Over the last 20 years, I've learned to set up, diagnose, and do just about everything with email that you can imagine at any scale. In this series, I'll be talking about email over the Internet, which uses protocols like SMTP and IMAP. I won't be talking about the inner workings of commercial email software like Exchange.
Understanding how email is sent
As mentioned above, email works with a few protocols, and the one used to send email is SMTP. The idea here is the sender's [...]
Figuring Out US Time Zones
Twice a year, many people in IT get mixed up by time zones in the US. Understandably so, as it is a little tricky to wrap your head around. Here are some tips for sorting it all out. Time zones in the US change twice a year, in the fall and spring, with an offset of 1 hour in either direction. The exceptions to #1 are the states of Arizona and Hawai'i - they never change their clocks like the rest of the US. In the spring, around March, the US switches to Daylight Saving Time, so we [...]
Consider Benevolent Dictatorships
I've started, run, and sold a few successful companies. Each one was different from the last one because I was constantly learning. But early on, I took on a management method that I used throughout my career - I ran each as a Benevolent Dictatorship. Now before you go looking it up on Wikipedia, it's a term I used with my own definition. A Benevolent Dictatorship, in my terms, means I was very interested in other opinions on problems as they often brought me to see new ideas. Opinions of others added a perspective that fleshed out details I didn't [...]
Why Use Zabbix Templates?
There's nothing as tedious as setting up the same type of server over and over, adding the same Items and Triggers. In any environment, we tend to use the same types of servers because we've standardized on them. It could be in your house ("My family is all on Apple products") or at work ("We're a Linux and Windows shop"). So if you're setting up 20 laptops, it's the same Items and Triggers on each one. To handle this headache, Zabbix uses Templates to make the configuration easier. At the Host level, you can add Items and Triggers one by [...]
Why Use Zabbix Global Macros?
Once your Zabbix configuration starts to monitor a few hundred Items, you're going to be relying heavily on Templates. Along with using Templates to replicate the monitoring of more than one server, you probably need to start using Global Macros. The concept of a Template is "write once, use many", and Global Macros are very similar. An example might be a license key for an external service or the credentials for an internal service (there are better ways of storing secrets), perhaps a SQL database that is used just for Zabbix. Using these credentials and values scattered throughout [...]
Why is Rebooting Linux Pointless?
Spoiler alert: It's not pointless. I've taken a lot of flak over the years for saying that it's OK to reboot Linux. While you can read on the 'net how it's pointless to do so, a statement like that has too many nuances to be considered fact. A better statement would be to say that, in many cases, Linux doesn't need to be rebooted (it's not MS Windows, after all). Linux does an amazing job of recovering from runaway RAM issues and even out-of-disk issues. I've even seen CPU usage bring a Linux server to a standstill, only to have [...]
|
OPCFW_CODE
|
How is a parameter change handled during recursion in Java?
Here is a recursive method:
private static int minimumTotal(List<List<Integer>> triangle, int index) {
if (triangle.isEmpty()) {
return 0;
}
List<Integer> row = triangle.remove(0);
int sum1 = row.get(index) + minimumTotal(triangle, index);
int sum2 = row.get(index) + minimumTotal(triangle, index + 1);
return Math.min(sum1, sum2);
}
I want sum1 and sum2 to be calculated on the same triangle object. However, what happens is the following: after sum1 is calculated, one row of triangle has been removed (and then another within the recursion, and another ...). Now, when sum2 is calculated, it gets a triangle that is empty!
This confuses me regarding how Java handles recursion. Why is the object triangle getting modified? I was under the assumption that it should be a "local" data at every recursion level.
How can we rewrite the code to get the desired behavior?
As an example, let's say the triangle object has two rows (given by two lists of integers). sum1 is supposed to get something from the first row and then recursively call the method on the triangle that has only one row remaining. Similarly, sum2 is also supposed to get something from the first row and then recursively call the method on the triangle that has only one row remaining. However, what I see is the following: after sum1 is computed, triangle is empty. Thus, sum2 is assigned a wrong value!
There is only one List<List<Integer>> triangle instance. The reference to that instance is passed to each recursive call, but any change done to this instance is reflected in all levels of the recursion.
It looks like you want to process the next row of triangle in each recursive call, and you achieve it by removing the first row (so that the next recursive call has a different first row). If that's the case, you don't have to remove the first row. Just pass an index to the row, so that each recursive call knows which row it should work on.
private static int minimumTotal(List<List<Integer>> triangle, int rowIndex, int colIndex) {
if (rowIndex >= triangle.size()) {
return 0;
}
List<Integer> row = triangle.get(rowIndex);
int sum1 = row.get(colIndex) + minimumTotal(triangle, rowIndex + 1, colIndex);
int sum2 = row.get(colIndex) + minimumTotal(triangle, rowIndex + 1, colIndex + 1);
return Math.min(sum1, sum2);
}
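For reference, a minimal driver for this index-based version might look like the following (the class name and triangle values here are made up for illustration, not from the original question):

```java
import java.util.List;

public class TriangleDemo {
    // Same logic as the index-based answer above, made public so it can be called directly.
    public static int minimumTotal(List<List<Integer>> triangle, int rowIndex, int colIndex) {
        if (rowIndex >= triangle.size()) {
            return 0;
        }
        List<Integer> row = triangle.get(rowIndex);
        int sum1 = row.get(colIndex) + minimumTotal(triangle, rowIndex + 1, colIndex);
        int sum2 = row.get(colIndex) + minimumTotal(triangle, rowIndex + 1, colIndex + 1);
        return Math.min(sum1, sum2);
    }

    public static void main(String[] args) {
        // The triangle is never mutated, so an immutable List.of(...) is safe here.
        List<List<Integer>> triangle = List.of(List.of(2), List.of(3, 4), List.of(6, 5, 7));
        System.out.println(minimumTotal(triangle, 0, 0)); // minimum path 2 + 3 + 5 = 10
    }
}
```

Because nothing is removed, the same triangle can be queried again after the call.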
Thanks! I found this solution most easy to implement and intuitive.
In Java, everything except the primitive types (int, boolean, char, etc..) is a reference type. It means that what you actually hold in the variable is just a reference to the actual object. If you pass it to a method, the reference gets copied, but it still points to the same object in memory. That means that every call of your recursive method points to the same triangle object and since the first recursive call removes all the items from the List, the second call will always be given just the empty List.
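That reference-copy behavior is easy to see in isolation; here is a minimal sketch (class and method names made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class AliasDemo {
    // The parameter is a copy of the reference, not a copy of the list:
    // mutating it here mutates the caller's list too.
    public static void removeFirst(List<Integer> list) {
        list.remove(0);
    }

    public static List<Integer> demo() {
        List<Integer> numbers = new ArrayList<>(List.of(1, 2, 3));
        removeFirst(numbers);
        return numbers; // the caller sees the removal: [2, 3]
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints [2, 3]
    }
}
```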
The straightforward way would be to create two duplicates of the List before each recursive call and pass references to those duplicates. This would, however, be a very time-consuming and inefficient way.
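A hypothetical sketch of that copy-based variant, shown only to illustrate why it is wasteful — each level allocates fresh copies so the caller's triangle survives untouched:

```java
import java.util.ArrayList;
import java.util.List;

public class CopyingDemo {
    // Each call works on its own shallow copy of the outer list, so
    // remove(0) never affects the caller's triangle. Correct, but slow:
    // every level copies the remaining rows before recursing.
    public static int minimumTotal(List<List<Integer>> triangle, int index) {
        if (triangle.isEmpty()) {
            return 0;
        }
        List<List<Integer>> copy = new ArrayList<>(triangle);
        List<Integer> row = copy.remove(0);
        int sum1 = row.get(index) + minimumTotal(new ArrayList<>(copy), index);
        int sum2 = row.get(index) + minimumTotal(new ArrayList<>(copy), index + 1);
        return Math.min(sum1, sum2);
    }

    public static void main(String[] args) {
        List<List<Integer>> triangle = new ArrayList<>(
                List.of(List.of(2), List.of(3, 4), List.of(6, 5, 7)));
        System.out.println(minimumTotal(triangle, 0)); // 10
        System.out.println(triangle.size()); // 3 -- the original is untouched
    }
}
```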
I would suggest adding a third parameter to the recursive call saying where to start with the calculation. The method wouldn't change the original triangle in any way.
private static int minimumTotal(List<List<Integer>> triangle, int index, int from) {
if (from >= triangle.size()) {
return 0;
}
List<Integer> row = triangle.get(from);
int sum1 = row.get(index) + minimumTotal(triangle, index, from + 1);
int sum2 = row.get(index) + minimumTotal(triangle, index + 1, from + 1);
return Math.min(sum1, sum2);
}
Your initial call would look like this:
minimumTotal(triangle, index, 0);
This method will even be faster than your original one, because remove(0) is an O(N) operation.
Almost correct, except you have to change the if (triangle.isEmpty()) condition at the start to if (from >= triangle.size()). Otherwise, your recursion will end with IndexOutOfBoundsException.
You're changing triangle parameter internal state here:
List<Integer> row = triangle.remove(0);
Since you're calling minimumTotal over and over, on every recursive call triangle losses a row. This is not related to how Java (or any other programming language) handles recursion, but about what you do in the recursive call.
If you only want/need to retrieve the element at position 0, then use triangle.get(0). If you need to be removed but only after doing the necessary operations on it, then call triangle.remove only after performing the desired actions. In case you don't remove any inner list of your triangle, then use triangle.get(index) and pass index as parameter.
Always using triangle.get(0) would lead to infinite recursion, since triangle being empty is what stops the recursion. If you don't modify triangle, you should introduce an additional parameter to the recursive method, in order for it to terminate.
@Eran if you call triangle.remove(0) (as stated in my answer previously to your comment) then the recursive call will end.
If you don't change triangle or index prior to the first recursive call - minimumTotal(triangle, index) - that call gets the exact same parameters as the previous level of the recursion, so the recursion can never end, regardless of whether or not you are removing the first row after that call.
|
STACK_EXCHANGE
|
XGIMI MOGO Pro+ Support
@fantasytu First of all thanks for your work and your plugin
Is your feature request related to a problem? Please describe:
I successfully installed your plugin and was able to add the device in home app.
I received my XGIMI MOGO Pro+ today, and it is set up with the latest firmware and running.
I assigned a permanent IP to the projector, but after restarting Homebridge I can't control the XGIMI.
Describe the solution you'd like:
I was also not able to get the package attribute
http://<IP_ADDRESS>:16741/data/data/com.xgimi.vcontrol/app_appDatas/list
Server not found.
However, the button in the Home app does show when the device is turned off/on when I turn it off/on manually on the device.
But with Home App I can't control anything. Not on/off and not selecting a source/app.
The projector is not doing anything.
I hope you can help and of course, please let me know if there is anything I can do to try or test.
Thanks in advance.
{
"platform": "XGimiTeleVisionPlatform",
"devices": [
{
"name": "Android TV",
"host": "<IP_ADDRESS>",
"inputs": [
{
"name": "iQiyi",
"type": "APPLICATION",
"package": "com.gitvjimi.video"
},
{
"name": "YouTube",
"type": "APPLICATION",
"package": "com.liskovsoft.videomanager"
}
],
"manufacturer": "XGimi",
"model": "Mogo Pro+ P_FHD_2020",
"serialNumber": "DSXXXXXXXXXX",
"firmwareRevision": "4.9.113"
}
]
}
[3.12.2021, 14:27:47] [XGimiTeleVisionPlatform] Sending UDP Message : {"controlCmd" : {"delayTime":0,"mode":6,"time":0,"type":4}, "action" : 20000}
[3.12.2021, 14:27:47] [XGimiTeleVisionPlatform] Turning On Tv
[3.12.2021, 14:27:50] [XGimiTeleVisionPlatform] TV is not responding...
[3.12.2021, 14:29:02] [XGimiTeleVisionPlatform] Sending UDP Message : {"controlCmd" : {"data" : com.liskovsoft.videomanager, "type" : 1, "mode" : 7, "time" : 0}, "action" : 20000}
[3.12.2021, 14:29:02] [XGimiTeleVisionPlatform] set Input Resource: YouTube
[3.12.2021, 14:29:15] [XGimiTeleVisionPlatform] Sending UDP Message : {"controlCmd" : {"data" : com.gitvjimi.video, "type" : 1, "mode" : 7, "time" : 0}, "action" : 20000}
[3.12.2021, 14:29:15] [XGimiTeleVisionPlatform] set Input Resource: iQiyi
[3.12.2021, 14:29:21] [XGimiTeleVisionPlatform] Sending UDP Message : KEYPRESSES:35
[3.12.2021, 14:29:21] [XGimiTeleVisionPlatform] set Input Resource: HomeScreen
[3.12.2021, 14:29:30] [XGimiTeleVisionPlatform] Sending UDP Message : {"controlCmd" : {"delayTime":0,"mode":6,"time":0,"type":2}, "action" : 20000}
[3.12.2021, 14:29:30] [XGimiTeleVisionPlatform] Turning Off Tv
[3.12.2021, 14:29:49] [XGimiTeleVisionPlatform] Sending UDP Message : {"controlCmd" : {"delayTime":0,"mode":6,"time":0,"type":2}, "action" : 20000}
[3.12.2021, 14:29:49] [XGimiTeleVisionPlatform] Turning Off Tv
@DJay-X hey mate! did you get a solution? have the same problem with my projector.
Hey. Unfortunately, no. The plugin never worked for me.
Sad news... seems like we need ChatGPT to make it work. Maybe something changed in HomeKit with the iOS updates.
|
GITHUB_ARCHIVE
|
Support setting React PropTypes using TypeScript types
I'm not sure how practical this would be, but since TypeScript is now supporting React's JSX syntax, it could perhaps provide more syntactic sugar for a common React pattern? Namely, that of setting PropTypes, requirements for the structure of property arguments for React components. Sounds like TypeScript's syntax should be able to remove a lot of the boilerplate there.
PropTypes is used for type checking in development mode. TypeScript already does type checking at compile time; I don't think you need PropTypes if you use TypeScript.
Hmm, that's true, I guess :)
Although, then again (sorry for the change-of-mind), props aren't passed as an argument to the component, right? Thus, there isn't really a place in the component to define what its given properties should look like...
//cc: @RyanCavanaugh
Your props should be defined in an interface which is passed as a type parameter to React.Component<P, S> when you define a component. I don't see any real use for React's runtime type checking of props when using TypeScript.
Ah, that sounds good - sorry, I only have experience with React's non-ES6 syntax since I'm using mixins, so if you can define the interface for your props using the class syntax, that should cover it :)
no worries :)
as some additional advice, you'll want your props to inherit React.Props to automatically get things like this.props.children:
interface IFooProps extends React.Props<Foo> {
// ...
}
interface IFooState {
// ...
}
class Foo extends React.Component<IFooProps, IFooState> {
// ...
}
I might be missing something, but without proptypes support, how do I write the equivalent of the following?
var MyComponent = React.createClass({
propTypes: {
children: React.PropTypes.element.isRequired
},
render: function() {
return (
<div>
{this.props.children} // This must be exactly one element or it will throw.
</div>
);
}
});
interface IMyComponentProps extends React.Props<MyComponent> {
}
interface IMyComponentState {
}
class MyComponent extends React.Component<IMyComponentProps, IMyComponentState> {
render() {
return (
{this.props.children}
);
}
}
React.render(
Body,
body
);
makes the children optional, but
interface IMyComponentProps extends React.Props<MyComponent> {
children: React.ReactChild | {} | any[] | boolean;
}
yields the compilation error `error TS2324: Property 'children' is missing in type 'IMyComponentProps'.`
There's currently no type system enforcement of child elements.
We looked into this but a) there doesn't seem to be a lot of code that has meaningful constraints on its children and b) we would have had to encode a ton of very React-specific semantics into the type system, which we weren't comfortable doing at this point.
What about the case when building a React library to be used by others? Others may not be using TypeScript. It would be nice to have a way to generate PropTypes when creating the distributed lib so other developers will be able to get PropType validation during development when using the library. React then strips these away when building for production.
As @TheSharpieOne said, this is stopping us from porting main libraries to TypeScript since we have multiple consumers, which in turn stops us from migrating the consumers to TypeScript (since it feels rather error-prone to manually add both and keep them in sync).
what @TheSharpieOne and @TobiasBales said. We kinda need support for it.
I came up with an idea that satisfies both TypeScript and React runtime prop validation, but it has flaws.
Say we have a component named Icon; it has two props, name and size.
const iconPropTypes = {
name: pt.string.isRequired as any as string,
size: pt.number as any as number
}
type TProps = typeof iconPropTypes
class Icon extends Component<TProps, {}> {
...component content
}
Icon['propTypes'] = iconPropTypes
The problem is obvious: in TypeScript, size is required when it's not supposed to be, and I have not found a way to work around this.
The effective difference between React PropTypes and TypeScript types is that, while TypeScript type annotations can describe what JSON schema the server should deliver, React PropTypes detect what is actually delivered. If the wrong data is delivered to the client, React PropTypes will give us good error messages, while the transpiled TypeScript will die without explanation.
While the two systems clearly overlap, this feature can be justified in serving to automatically check server data at runtime.
Also, note that React PropTypes are fairly limited ("this property must be an object") while TypeScript could generate a more complete custom validator ("this property must be an object with an integer id and a string name")
React PropTypes can describe objects in more detail, e.g. PropTypes.shape
@TobiasBales - thanks for that; you're right. PropTypes.shape is darn close, conceptually, to a straightforward bit of runtime typechecking.
I wonder whether it is possible to write a PropTypes object for each type using the TypeScript compiler API and type checker. Notice how the example lets you find type and member info, build a complex object describing the type, and output structured content. You could do something like:
use the compiler API to get type and member info
use it to generate a validation routine per type, and the PropTypes object for each component,
call it from your main TypeScript files.
Not as easy as full language support, and it's just an idea, but possibly something you could get working with a bit of hacking about.
Hey all, would prop-types-ts get you close to what you want? Instead of creating proptypes out of TypeScript types, it makes TypeScript types from proptypes.
Some libraries are written in TS, compiled to JavaScript, and published on the npm registry, like https://github.com/ant-design/ant-design-mobile. Is there a way to use their propTypes in JavaScript?
If typescript had a feature for development to ensure data from server is in the correct schema...
@omril1 what ?
This might help someone: if you want to make a prop required and force TypeScript to throw an error about it, you can accomplish it by using an interface for the generic Props of the React.Component (I know this is already explained in the thread, but no one pointed out the optional operator that is explained later in this comment).
interface Props {
name: string
}
export class Person extends React.Component<Props, {}>
If you later try <Person/> without providing the name attribute, it will throw a compile error; the counterpart of this would be to use the optional operator ?
interface Props {
name?: string
}
With the above interface TypeScript won't complain whenever we do <Person/>, because the name is optional according to the use of the optional operator in the interface attribute definition.
For anybody still interested in the original proposal, I've put together a proof-of-concept webpack loader here https://github.com/grncdr/ts-react-loader#what-it-does that will inject static propTypes and contextTypes into React components.
Maybe re-open this issue @Vinnl ? My use case is when using ts and non-ts files at the same time (e.g. when I incrementally rewrite my React components to TypeScript). My non-ts files don't warn me when I pass the wrong props, and I have no runtime checks either.
|
GITHUB_ARCHIVE
|
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
namespace EmployeeRegister
{
public partial class FrmEmployeeList : Form
{
// Constructor, these have no return type
public FrmEmployeeList()
{
InitializeComponent();
// Initialise the combo box, the combo box is now in the class
CboEmployeeType.DataSource = ClsEmployeeDetails.EmployeeType;
CboEmployeeType.SelectedIndex = 0;
// Combo box for the sorting by DOB or Name
CboEmployeeDetailsSortChoice.DataSource = _SortStrings;
CboEmployeeDetailsSortChoice.SelectedIndex = 0;
}
// DOB comparer. The ':' means the interface is implemented, not inherited, because it is an interface; classes can inherit from just one base class but can implement a number of interfaces
class ClsDOBComparer : IComparer<ClsEmployeeDetails>
{
// Telling the comparer what we are going to be comparing
public int Compare(ClsEmployeeDetails prEmployeeDetailsX, ClsEmployeeDetails prEmployeeDetailsY)
{
// Creating a local variable that will sort via DOB first then name
int lcDOB = prEmployeeDetailsX.DOB.Date.CompareTo(prEmployeeDetailsY.DOB.Date);
// If lcDOB is not zero (the DOBs differ)
if (lcDOB != 0)
{
// Sort by DOB
return lcDOB;
}
else
{
// Comparing one employee name with another name to see who is first and returning the result
return prEmployeeDetailsX.Name.CompareTo(prEmployeeDetailsY.Name);
}
}
}
// Name comparer
class ClsNameComparer : IComparer<ClsEmployeeDetails>
{
// Telling the comparer what we are going to be comparing
public int Compare(ClsEmployeeDetails prEmployeeDetailsX, ClsEmployeeDetails prEmployeeDetailsY)
{
// Creating a local variable that will sort via name then DOB
int lcName = prEmployeeDetailsX.Name.CompareTo(prEmployeeDetailsY.Name);
// If lcName is not zero (the names differ)
if (lcName != 0)
{
// Sort by name
return lcName;
}
else
{
// Comparing one employee DOB with another DOB to see who is first and returning the result
return prEmployeeDetailsX.DOB.Date.CompareTo(prEmployeeDetailsY.DOB.Date);
}
}
}
// Array of comparers
private IComparer<ClsEmployeeDetails>[] _Comparer = { new ClsNameComparer(), new ClsDOBComparer() };
// Corresponding array of display strings
private readonly string[] _SortStrings = { "Name", "DOB" };
// Refreshes the contents of the listbox
private void UpdateDisplay()
{
// Declaring a new employee list variable and assigning it to the value column of the employee list dictionary
List<ClsEmployeeDetails> lcEmployeeList = ClsEmployeeList.EmployeeList.Values.ToList();
// Calling the sort method of this list, passing the appropriate comparer from the comparer array
lcEmployeeList.Sort(_Comparer[CboEmployeeDetailsSortChoice.SelectedIndex]);
// Assigning the sorted employee list to the listbox data source
LstEmployees.DataSource = lcEmployeeList;
}
// Create employee button
private void BtnCreateEmployee_Click(object sender, EventArgs e)
{
// Call create employee method
CreateEmployee();
}
// Modify employee button
private void BtnModifyEmployee_Click(object sender, EventArgs e)
{
// Locating the selected employee in the list box and editing it
ClsEmployeeDetails lcEmployeeDetails = (ClsEmployeeDetails)LstEmployees.SelectedItem;
// If no employees exist
if (lcEmployeeDetails == null)
{
// Call the message box method and display it
NoEmployeeToEdit();
}
else
{
// Else edit the employee details
EditEmployeeDetails();
}
}
// Close employee list form button
private void BtnClose_Click(object sender, EventArgs e)
{
// Closing the employee list form only
this.Close();
}
// When user double clicks on an employee in the list box
private void LstEmployees_MouseDoubleClick(object sender, MouseEventArgs e)
{
// Call edit employee details method
EditEmployeeDetails();
}
// Delete employee button
private void BtnDeleteEmployee_Click(object sender, EventArgs e)
{
// Locating the selected employee in the list box and deleting it
ClsEmployeeDetails lcEmployeeDetails = (ClsEmployeeDetails)LstEmployees.SelectedItem;
// If the list is empty display message box saying there are no employees to delete
if (lcEmployeeDetails == null)
{
DialogResult NoEmployeeToDelete = MessageBox.Show("There are no employees to delete", "No Employees",
MessageBoxButtons.OK, MessageBoxIcon.Error);
if (NoEmployeeToDelete == DialogResult.OK)
{
return;
}
}
else
{
// Display message box asking if user wants to delete an employee first
DialogResult DeletingEmployeeMessage = MessageBox.Show("You are about to delete an employee, are you sure you want to do this?", "Deleting Employee",
MessageBoxButtons.YesNo, MessageBoxIcon.Warning);
// If user selects yes
if (DeletingEmployeeMessage == DialogResult.Yes)
{
// Call delete employee method
DeleteEmployee();
}
else
{
// Else close the message box and don't delete because the user clicked no
return;
}
}
}
// Create employee method
private void CreateEmployee()
{
// We create a new employee by calling the factory method NewEmployee in ClsEmployeeDetails and store it in the local variable lcEmployeeDetails
ClsEmployeeDetails lcEmployeeDetails = ClsEmployeeDetails.NewEmployee(CboEmployeeType.SelectedIndex);
// If the user didn't cancel
if (lcEmployeeDetails != null && lcEmployeeDetails.ViewEdit())
{
// Add the new employee details to the list box, as we are adding to a dictionary we need to put the key in
ClsEmployeeList.EmployeeList.Add(lcEmployeeDetails.ID, lcEmployeeDetails);
// Then we update the display to show the new employee in the list box
UpdateDisplay();
}
}
// Message box that will pop up if user clicks modify employee if one does not exist
private void NoEmployeeToEdit()
{
// Display message box with message
DialogResult NoEmployeeMessage = MessageBox.Show("There is no employees to edit, would you like to create one?", "No employees",
MessageBoxButtons.YesNo, MessageBoxIcon.Question);
// If user clicks yes
if (NoEmployeeMessage == DialogResult.Yes)
{
// Call create employee method
CreateEmployee();
}
else
{
// Close Message box
return;
}
}
// Edit employee details, now accepts an employee parameter
private void EditEmployeeDetails()
{
// Locating the selected employee in the list box and editing it
ClsEmployeeDetails lcEmployeeDetails = (ClsEmployeeDetails)LstEmployees.SelectedItem;
// If employee is not empty we edit it
if (lcEmployeeDetails != null && lcEmployeeDetails.ViewEdit())
{
// Call update display method
UpdateDisplay();
}
}
// Delete employee details
private void DeleteEmployee()
{
// Find the selected employee in the list
ClsEmployeeDetails lcEmployeeDetails = (ClsEmployeeDetails)LstEmployees.SelectedItem;
// Remove the selected employee
ClsEmployeeList.EmployeeList.Remove(lcEmployeeDetails.ID);
// Update the list box to show that the employee has been removed
UpdateDisplay();
}
// Find employee button
private void BtnFindEmployee_Click(object sender, EventArgs e)
{
// Assigning a local variable
ClsEmployeeDetails lcEmployeeDetails;
// Find the ID that the user entered
if (ClsEmployeeList.EmployeeList.TryGetValue(TxtFindEmployeeDetails.Text,out lcEmployeeDetails))
{
// Once found the employee select it
LstEmployees.SelectedItem = lcEmployeeDetails;
}
else
{
// Else display message box saying that we cant find an employee
MessageBox.Show("Unable to find an employee, please try again", "Can't find an employee");
}
}
// When form loads
private void FrmEmployeeList_Load(object sender, EventArgs e)
{
// Call update display
UpdateDisplay();
}
// When the user selects a different option in the combo box
private void CboEmployeeDetailsSortChoice_SelectedIndexChanged(object sender, EventArgs e)
{
// Call update display method
UpdateDisplay();
}
}
}
|
STACK_EDU
|
Trying to find the closest available spot in a 2D array
I am trying to find the closest available seat. However, I did try something but I could not find it; so far, after some formatting, I came back to my start position. Any suggestions?
public void nextFree(String seats) {
int colA = 0;
int rowA = Integer.parseInt(seats.substring(1, 2));
int newColB = 0;
int newRowB = 0;
if (seats.equalsIgnoreCase("a")) {
colA = 1;
} else if (seats.equalsIgnoreCase("b")) {
colA = 2;
} else if (seats.equalsIgnoreCase("c")) {
colA = 3;
} else {
colA = 4;
}
for (int i = 0; i < this.table.length; i++) {
for (int k = 0; k < this.table.length; k++) {
if (table[i][k] == "XX") {
newRowB = i;
newColB = k;
}
}
System.out.print("The seat " + colA + rowA + " is not available! The next available is " + newColB + newRowB);
}
}
We're going to need a little more context than what you have given, what does table[i][k] == "XX" signify? Is that a free seat or the end of your table? How do you know a seat is free in your table? What are you looking for?
In the beginning the whole array is filled with "--", and if a seat is booked it changes to "XX". I have 4 rows and 9 columns, and every row is symbolized with a letter from A to D. What I am looking for is, if the user inputs the seat C5 for example, to find the closest one around it.
At the moment, you are setting your seat to be the same as the first one you find that has a "XX" in it, which means it's occupied. You should be looking for "--" and you need to do further work to see how close the seat is to the original taken one
How can I find the distance of the free seat from the taken one?
Check out this answer here: http://stackoverflow.com/questions/19894294/distance-between-elements-in-2d-array
Or here: http://stackoverflow.com/questions/8224470/calculating-manhattan-distance
Hmm, it is helpful, thanks, but I think I cannot implement the same code in my programme. Can I send you the whole code to help me further, if of course you want and can?
public void nextFree(String seats) {
int colA = 0;
int rowA = Integer.parseInt(seats.substring(1, 2));
int newColB = 0;
int newRowB = 0;
int distance = -1;
//Would recommend changing this to a switch statement, might make it clearer
//Comparing just the first character (the column letter); the full seat string like "C5" would never equal "a"
if (seats.substring(0, 1).equalsIgnoreCase("a")) {
colA = 1;
} else if (seats.substring(0, 1).equalsIgnoreCase("b")) {
colA = 2;
} else if (seats.substring(0, 1).equalsIgnoreCase("c")) {
colA = 3;
} else {
colA = 4;
}
for (int i = 0; i < table.length; i++) {
for (int k = 0; k < table[i].length; k++){
//Calculating current distance away from our chair
//This assumes i is the column and k is the row, you may need to swap those values around
int curDistance = Math.abs(i-colA) + Math.abs(k-rowA);
//If the seat is free and the distance hasn't been set or the current distance is less than the previous distance
if (table[i][k].equals("--") && (distance == -1 || distance > curDistance)) {
//Found a closer seat
newRowB = k;
newColB = i;
distance = curDistance;
}
}
}
//Moved this out the for loop, it shouldn't have been inside
System.out.print("The seat " + colA + rowA + " is not available! The next availale is " + newColB + newRowB);
}
Using the Manhattan distance as detailed here
This code does not assume you care about rows: if a seat behind the current seat is available, it could be considered closer than one in the same row.
The Manhattan distance gives the following results when compared to the master seat:
Row       | Seat | Distance
Master    | 1 1  | 0
Next to   | 1 2  | 1
Behind    | 2 1  | 1
In front  | 0 1  | 1
Diagonal  | 2 2  | 2
Proof that it's working for me:
Thank you very much, I can do it from here! Just one more question: with this, am I able to find the closest seat if the closest one is above?
It will search for the closest seat in a grid, so it could be to the side, above or below it.
Turns out I got my calculation wrong, just updating the code. It should be:
Math.abs(x1-x2)+Math.abs(y1-y2); not Math.abs(x1-y1)+Math.abs(x2-y2); as I originally put it
Also don't forget, this equation assumes you don't mind the seat in front, behind or next to as being closest. If you want them in the same row, you'll need to add extra checks to your if function
That solved it, however I want it to check all the seats no matter where they are, for example in every corner of the array, because if I put A1 it will go outside the array.
How is it going outside the array? What section are you looking at?
For example if I select the corners, e.g. A1 or A9 and D1 and D9.
The current code will not leave your current array, if you are using this bit:
for (int i = 0; i < table.length; i++) {
    for (int k = 0; k < table[i].length; k++) {
Then you should stay inside your array, if you are leaving the array then something is wrong with the code that you have written I'm afraid
So this code can find every nearest free seat in my array? Everywhere?
Yep, as far as I'm aware
Alright, maybe I did something wrong, I will try to fix it!
There was a problem, see the output!
Enter number of seat: 2
Enter seat number : d4
Enter seat number : d5
You have booked 2 seats. Total Price is: 145.8. The seat D4 is not available! The next available is A1
All the other seats are free, so it should not output the message, and the suggested seat is wrong again!
If it's still not working, you've introduced a bug somewhere, create a new question and someone should be able to help you. I've attached screenshots of the code working for me
Math.abs(x1-x2)+Math.abs(y1-y2); not Math.abs(x1-y1)+Math.abs(x2-y2); What do I have to change in my code, with my variables, in order for this to work?
I updated the code with the correct maths equation:
//This assumes i is the column and k is the row, you may need to swap those values around
int curDistance = Math.abs(i-colA) + Math.abs(k-rowA);
I have uploaded in the following link 2 images of how your code works and it has some issues. link
Is i the column or the row? You're doing two different things. You either need to have:
int curDistance = Math.abs(i-colA) + Math.abs(k-rowA); and then newRowB = k; newColB = i;
OR
int curDistance = Math.abs(k-colA) + Math.abs(i-rowA); and then newRowB = i; newColB = k;
At the moment, you're mixing both values up
It worked, thank you for your help. I really appreciate the time you have spent for me.
Your main problem is here:
for (int i = 0; i < this.table.length; i++) {
for (int k = 0; k < this.table.length; k++)
See how the two loops both walk the same distance? That cannot be right surely. Try this:
for (int i = 0; i < table.length; i++) {
for (int k = 0; k < table[i].length; k++)
This now loops down each row (table.length) and then across each column in that row (table[i].length).
Also table[i][k] == "XX" should probably be table[i][k].equals("XX").
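A quick illustration of the difference (a generic sketch, not tied to the seating code): `==` compares object references, while `.equals()` compares the characters, so two distinct String objects with the same text are `equals` but not `==`.

```java
public class EqualsDemo {
    public static void main(String[] args) {
        String a = "XX";
        String b = new String("XX"); // a distinct object with the same characters
        System.out.println(a == b);      // false: different references
        System.out.println(a.equals(b)); // true: same characters
    }
}
```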
Thank you, but with these loops will I be able to find the closest one?
@vkaleri - It certainly should be possible. Keep working on the problem until you get it right. Using a debugger might be insightful for you.
You try to determine rowA from a two-character string.
Then you try to determine colA from a one-character string. You should probably rather use String.startsWith() here.
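A minimal sketch of the parsing these two comments suggest, assuming seat labels look like "C5" (a letter A to D followed by a row number); the class and method names here are just illustrative:

```java
public class SeatParser {
    // Parse a seat label like "C5" into {column, row}, with A=1 .. D=4.
    public static int[] parse(String seat) {
        // Column from the first character, case-insensitively
        int col = Character.toUpperCase(seat.charAt(0)) - 'A' + 1;
        // Row from the rest of the string, so two-digit rows also work
        int row = Integer.parseInt(seat.substring(1));
        return new int[] { col, row };
    }

    public static void main(String[] args) {
        int[] p = parse("c5");
        System.out.println(p[0] + "," + p[1]); // prints 3,5
    }
}
```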
|
STACK_EXCHANGE
|
Bypass your Windows password and use your phone instead

Chances are, you've probably heard of two-factor verification, where your phone can be used as an extra verification step beyond your password. Use it! But you might not realize that you can skip your Windows password and use your phone as the primary login method for certain Microsoft apps and services (in theory, at least).

Remember, two-factor security is based on any two of three factors: what you know, what you have, and what you are. Typically, two-factor verification works by asking for your password (what you know), then texting a code to your phone or using an app (what you have). Microsoft's Authenticator app for Android and iPhone is the approved way to receive that code. The combination of the password you know and the code Microsoft sends to your device secures the transaction.

If you prefer, you can turn your phone into the primary verification device. You can even go a step further: if for whatever reason the Authenticator app can't reach the server to approve the transaction, the app can automatically generate an eight-digit code that changes every 30 seconds. To access it, tap the downward-facing caret, which displays the code.

However, the Authenticator app provides a second way to authenticate that's not immediately apparent.

If you tap that caret again, a second menu opens, offering the ability to "set up phone sign-in." Here's how it works. Some pages that ask for your Microsoft password have a little text link to "Use an app instead." Authenticator then becomes the first factor in the page's verification; remember, your phone (with the Authenticator app installed) is the "what you have" component that's unique to you. You'll still need something in place of your Windows password, however: either your phone's unlock PIN (what you know) or a touch of your finger on its biometric sensor (who you are).

Note that this works only for personal Microsoft accounts. Though the Authenticator app is available for Windows phones, this feature won't work with them. (If that makes you angry, you can always complain in the reviews section on the Authenticator page.) Microsoft also promises that more and more websites will use this method to authenticate, but for right now we haven't been able to find any.

Which method is superior? If you have a complex, unique Windows password (you do, right?), entering it and using your phone's Authenticator app as the extra factor is probably better. But it's up to you.

Unfortunately (or not), Microsoft still hasn't enabled this method for unlocking your PC, though you can set up Windows to lock your PC when it senses you (and your phone) have moved away. Go to Settings > Accounts > Sign-in options, then scroll down to click the checkbox for Dynamic lock. Now, if you walk far enough away (with your phone), your PC will automatically lock to protect your privacy. Over time, it's likely that Microsoft will try to tie the phone even more strongly to the PC.

With more than 50 percent of Americans now running a smartphone (and millions more users overseas), it's not hard to see why. This story was updated on October 23 to reflect that the new feature is in the Windows 10 Fall Creators Update, and to add new details.
|
OPCFW_CODE
|
Button: Add minor amendments to button config for click events & docs
Description
Small follow up for https://github.com/FlowFuse/node-red-dashboard/pull/1252 which adds an option for whether or not the click event is emitted, and ensures it's clear that the "topic" defined is always sent.
It also fixes a bug where the pointerup event was taking the pointerdown payloadType.
These changes then enable us to easily monitor how long a button is held down for, such as:
https://github.com/user-attachments/assets/e416a023-af9d-4ba9-9542-185a9cd10a6c
IMHO; this is confusing. Based on the operations we are now offering, I feel the arrangement of events would probably suit being something like:
or
or
Also, I see you have merged the original PR but as far as I can see, the OP did not address the lack of keyboard operation as raised in the comments: https://github.com/FlowFuse/node-red-dashboard/pull/1252#issuecomment-2361239656
Keyboard operations are not supported. i.e. tab onto the button so it is highlighted, press the space bar - the "down" event is not triggered. Releasing the space bar does not fire the "up" event either (the "click" event does trigger)
Also, I see you have merged the original PR but as far as I can see, the OP did not address the lack of keyboard operation as raised in the comments:
Not a blocker for why this shouldn't have been merged though. Raise a new issue please.
IMHO; this is confusing
Curious to know what is confusing about it? Given the layout we have, what are you expecting it to send vs. what it actually sends?
By having the "topic" option just as a standalone row, I'm not convinced it's clear to the user what is being done with that topic.
I did toy with the idea of a topic per button event, but we already have msg._event.type to differentiate the type, so didn't think it necessary.
IMHO; this is confusing
Curious to know what is confusing about it? Given the layout we have, what are you expecting it to send vs. what it actually sends?
By having the "topic" option just as a standalone row, I'm not convinced it's clear to the user what is being done with that topic.
I did toy with the idea of a topic per button event, but we already have msg._event.type to differentiate the type, so didn't think it necessary.
Looking closer, I think I missed a subtlety - the payload field for the "click" event is hidden because the click [x] is unticked.
I assumed there was a payload field below the topic field (cut short in the screenshot) because it was required for the "click through" support, but on second thought, the "click through" doesn't use the entered payload, right?
but on second thought, the "click through" doesn't use the entered payload right?
This is a good point... I think it does, as it's an "Emulate Click"? I'll need to check
but on second thought, the "click through" doesn't use the entered payload right?
This is a good point... I think it does, as it's an "Emulate Click"? I'll need to check
Any conclusion Joe?
but on second thought, the "click through" doesn't use the entered payload right?
This is a good point... I think it does, as it's an "Emulate Click"? I'll need to check
Any conclusion Joe?
I've checked and we are correct (unfortunately). As of now (before this PR), the payload value set for "click" on the node edit form IS what is used for "emulate click". That means hiding it is not ideal I'm afraid.
Okay, I've moved the "Emulate Button Click" option to be nested in the optional button click section, so that you can't get to the emulate option without also enabling the click event in the first place, which feels sensible.
Can confirm: when button emulation is enabled but enableClick is false, the emulated event will not happen.
|
GITHUB_ARCHIVE
|
Is there a way to convert a .py file to a .txt/.pdf file, such that it does not save that txt/pdf file into storage?
I'm creating a demo for a chatbot that takes a github repository as an input, such that you can ask it questions about the code in the repo. I was working on the UI and I want the user to be able to view the .py files next to the conversation with the chatbot. I couldn't find a better way other than to convert the code into a text or pdf file and just embed the pdf file on the webpage. But I don't want it to save a new pdf file every time, so that if a user switches the file they want to view, the previous "pdf" gets removed from memory. I am hosting the application using streamlit.
I tried to use pandoc but it gave an error that said it does not accept .py files for conversion but it accepts .ipynb.
Not really a technical issue, just stumped on how to go about it.
Does this answer your question? Display Python code on a webpage using highlight.js and jquery
No, this shows how to do it in JS; streamlit is a Python package for locally hosting websites, so the file I'm trying to do the conversion of the other files in is also in Python.
You can use the "raw" interface on github to show a file as plain text, without the github GUI. Example: https://raw.githubusercontent.com/oven-sh/bun/main/src/bun.js/resolve_message.classes.ts
get the text of the code as described by @JohnGordon, and put it inside a code block: import requests; st.code(requests.get("https://raw.githubusercontent.com/oven-sh/bun/main/src/bun.js/resolve_message.classes.ts").text)
also any time you have something expecting a file, but you don't want to write to the local storage, you can use io.BytesIO. Just write the data into it as if you were writing a normal file, then rewind it with seek(0) and pass the buffer to (mostly) anything expecting a file object. Should behave the same, but never gets written to disk.
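A minimal sketch of that pattern (plain Python standard library, nothing streamlit-specific assumed):

```python
import io

# Write into an in-memory buffer exactly as if it were a file on disk
buf = io.BytesIO()
buf.write(b"print('hello world')\n")

# Rewind to the start so the next reader sees the whole "file"
buf.seek(0)

# Pass the buffer to anything that expects a file object
contents = buf.read().decode()
print(contents)
```

The buffer lives only in memory, so nothing has to be cleaned up from disk when the user switches files.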
Oh, this might help a lot! Currently I download the entire input repo from github and then load the files using the local directories. Is there a way to view the local files raw, or would I have to change how I load the repository entirely? @JohnGordon's suggestion does sound like a better idea in the long run.
@RaghavKaul io buffers act as buffers, not file systems. Possibly you can use an in-memory file system, depending on what you need; there is one implemented for PyFilesystem2 ... you can see the source for how that was accomplished if you are interested... you might be able to walk the github repo using the github API and create the repo in memory using PyFilesystem2.
I was being stupid, the simplest answer was right there in front of me. As of now, for the local files, I simply opened the .py using open("file", "r") and used the read() function in place of the body in st.code(body) and it worked! It just never struck me that .py files could be read as text; it was just a frustrated hail mary that made me try it. Guess my problem's solved for now. Although I will be using your ideas for the final version of the app. Thank you!
@RaghavKaul Smart! Maybe post it as an answer so other people can learn from it :-)
Solved! There was no need to use file converters to have raw text.
For doing this with local files, simply
import streamlit as st
file1 = open("file.py", "r")
st.code(file1.read(), language = "python")
will work like a charm. Realistically, you can pass file1.read() wherever it expects raw text.
For pulling files from github, @JohnGordon and @Aaron's method works best.
import requests
import streamlit as st
st.code(requests.get("https://raw.githubusercontent.com/oven-sh/bun/main/src/bun.js/resolve_message.classes.ts").text, language = "python")
This will grab raw text of the file directly from github.
|
STACK_EXCHANGE
|
Everyone’s always looking for holy grail weed in Canada.
However, the holiest of the holy cannabis strains is ripe for the picking in Canada. Meet the Holy Grail strain — a picture-perfect hybrid that’s loaded with enough indica and sativa qualities that will send you to seventh heaven.
Continue reading below to discover everything you need to know about the Holy Grail strain, along with where to buy the best Holy Grail weed in Canada.
You might be interested in Holy Grail Kush Strain (Hybrid)
Holy Grail’s Background
With a name like Holy Grail — its genetics have to be out of this world.
Bred by crossing Kosher Kush and OG #18, it’s clear that Holy Grail lives up to its name — and then some. With OG and Kush dominant traits, it’s no wonder why Holy Grail is one of the most renowned cannabis strains in Canada — let alone the world.
Holy Grail’s Bag Appeal
As you gaze at Holy Grail’s buds, you’ll quickly fall head over heels for this remarkable hybrid.
The medium-sized buds are perfectly dense and drenched in THC-packed trichomes. Additionally, the color of the buds is light green, with bright orange pistils shooting out from every nook and cranny.
Ultimately, the Holy Grail strain is the bud you break out when it’s your time to outshine the rest.
You might be interested in Three Queens Strain – AAAA (Hybrid Indica)
The Flavor and Aroma of Holy Grail Weed
The flavor and aroma of Holy Grail weed are absolutely blissful.
As you open a jar full of Holy Grail, your nose will instantly recognize the pure dankness that’s filled with hashish, cacao, and ample layers of lemon-zest.
Once you indulge, your tastebuds will jump for joy once they experience the overwhelming flavor of juicy citrus and deep-seated kush.
The Effects of the Holy Grail Strain
The effects of Holy Grail weed are in a league of their own.
Routinely tested at 21% THC and above, it’s no wonder why recreational and medical marijuana consumers seek out the Holy Grail strain in Canada.
Overall, the Holy Grail strain’s effects are:
As for medical effects, Holy Grail weed is known to:
- Reduce anxiety
- Reduce depression
- Reduce pain
- Stimulate the appetite
Where to Buy the Holy Grail Strain in Canada
If you’re ready to experience a weed strain that got the highest score at the High Times Cannabis Cup — look no further than the Holy Grail strain.
Luckily, Platinum Herbal Care has you covered. From unbeatable prices to stunning bag appeal — PHC offers the best Holy Grail weed in Canada.
Don’t wait until it’s too late; the Holy Grail strain sells out fast in Canada — especially at these prices!
Frequently Asked Questions About Holy Grail Strain
What is the Holy Grail strain of cannabis?
The Holy Grail strain is a hybrid cannabis strain made by crossing Kosher Kush and OG #18. It has an impressive balance of indica and sativa qualities, making it a favourite among many recreational and medicinal users alike. This strain has a strong aroma featuring earthy, woody, and citrusy notes that many find to be extremely pleasing. On top of that, it offers a unique combination of effects including euphoria, relaxation, increased creativity, and complete body relief — all in one amazing package!
What are the effects of consuming Holy Grail cannabis?
Most people report experiencing a strong cerebral high after consuming Holy Grail cannabis that can be quite uplifting and creative. At the same time, its indica properties provide full-bodied relaxation with an overall calming effect on the mind and body. Many users have also reported feeling energized after smoking this strain which helps with getting tasks done throughout their day more efficiently than before.
What are the medical benefits associated with consuming Holy Grail cannabis?
Aside from its ability to put one in an upbeat mood as well as providing complete body relaxation, Holy Grail cannabis is also known for having powerful medical benefits as well. Those suffering from chronic pain or inflammation often find relief when using this strain since it contains analgesic qualities that help to reduce aches and pains throughout the body quickly without any additional medications needed. Other medical conditions like depression or anxiety are said to be managed significantly better with the use of this strain due to its mood-elevating properties.
Does consuming too much Holy Grail weed offer any risks?
As with any other type of cannabis consumption, overconsuming this particular strain may lead to some undesired side effects such as dizziness or paranoia if taken in higher doses than recommended for your individual tolerance levels — so it’s important always to use caution when medicating with any form of marijuana product including the Holy Grail strain itself! Additionally, those with pre-existing mental health conditions should exercise extra caution when using this particular variety as it may further exacerbate certain symptoms if used improperly or excessively during sessions.
Where can I buy high quality Holy Grail weed in Canada?
Many Canadian dispensaries carry high quality varieties of the Holy Grail strain nowadays so you shouldn’t have too much trouble tracking some down for purchase online or at local outlets near you! Additionally, there are several online retailers where you can purchase premium versions from reputable growers across Canada– meaning you can rest assured knowing you’re getting only the best quality product available on today’s market!
|
OPCFW_CODE
|
New to Ubuntu/Linux - Which distro should I install?
I'm switching over from Windows 7 to Ubuntu (basically because I want to try something new). I'm a basic user; I really only use my PC for iTunes, Web Browsing, some Steam gaming, and multimedia sharing to my Samsung TV. I'm very organized and want the best looking and fluent system. What should I install? Just the regular Ubuntu or maybe MATE? I don't know the differences in any of these.... please help!
Your question is a matter of opinion so not a good fit for AskUbuntu. Feel free to try them all and make a choice on what -you- like. Canonical (owns Ubuntu) does not restrict you in any way in using Ubuntu.
Not only is this an opinion-based question, but regarding iTunes and streaming to your Samsung TV, you'll have to ask the companies that develop that software to support Linux. Good luck with that.
I'd say set up dual boot: have both Windows and Ubuntu. Now, depending on your computer, how much RAM or graphics power it has, you might go either with Ubuntu or Lubuntu. Lubuntu and Xubuntu have the plus of a more familiar interface for a Windows user.
possible duplicate of How do I find out which version and derivative of Ubuntu is right for my hardware in terms of minimal system requirements?
I would recommend either Linux Mint or Ubuntu. About a year ago, I was a Windows user and tried several versions of Linux. I found Linux Mint to be the easiest one, since it comes with most of the codecs and applications by default. Also, its user interface is easy to learn for Windows users. Once you fall in love with Linux (it may take some time, but you definitely will) you can try other distributions and go with your preferred one. (My current OS is Ubuntu 14.04, because of my personal taste.)
A friendly piece of advice: don't start with a Linux version designed for special purposes, like distros for hacking or lightweight ones for old hardware.
Note that AskUbuntu doesn't support Linux Mint. So if that's important, the OP would do better to stick with Ubuntu/Ubuntu MATE.
Thank you for your reminder. Could you please suggest me, what to do? Should I remove this answer?
No, as far as I'm concerned your answer is fine. You could always include a sentence on the end to the effect that he won't be able to ask Linux Mint questions on Ask Ubuntu, and that there is Unix & Linux on Stack Exchange, as well as the Mint forums.
They are both good choices and both fully functional. Ubuntu MATE is arguably lighter for older hardware but beyond that your question is too subjective to answer. It's a matter of taste.
My recommendation is to try them both out in live environments so you can see with a minimum of fuss which you prefer.
This does not provide an answer to the question. To critique or request clarification from an author, leave a comment below their post.
I beg to differ. My answer is constructive and answers the OP's question fully to the extent it can be answered. Saying his question is subjective is not critiquing it. It is pointing out the limitations of my response.
That is just the standard response from the review queue. The specific reason I flagged this was that you answered the question with a question, and a question is not an answer ("Why not try them both out live and see which one you prefer?"). Leave a comment saying why you disagree, but it is out of my hands now it has gone to the mods.
That's a suggestion not a question. I will remove the question mark so the OP doesn't get confused.
|
STACK_EXCHANGE
|
About ORACLE SQL Query
I have three tables and I merged them like below. I want to print the name with an average of nine points for the title, and reduce the year by ten.
TITLE | YEAR | POINT | NAME
A     | 1999 | 9     | K
A     | 1999 | 9     | L
C     | 1997 | 7     | M
For this, I wrote the following query but the query fails. What query should I write?
SELECT k.title, k.year, AVG(point), m.name
FROM Table1 k
JOIN Table2 l ON (k.title=l.title)
JOIN Table3 m ON (l.year=m.year)
GROUP BY title;
You said it fails, why does it fail? Does it cause error or not return your desired results or other?
It returned error.
Your group by columns are inconsistent with the select columns. Beyond that, I can't help because you have no desired results and I can't follow the logic.
If you connected the tables correctly, then it's an issue with the GROUP BY clause. If you are using aggregate functions such as AVG, SUM or COUNT in the SELECT clause, all other columns included in the SELECT clause must be added to the GROUP BY clause. This should work:
SELECT k.TITLE, k.YEAR, AVG(POINT), m.NAME
FROM Table1 k JOIN Table2 l ON (k.TITLE=l.TITLE) JOIN Table3 m ON (l.YEAR=m.YEAR)
GROUP BY (k.TITLE, k.YEAR,m.NAME);
Edit:
If you want to subtract years in the SELECT clause, you can use the ADD_MONTHS function like this:
SELECT k.TITLE, ADD_MONTHS( TRUNC(k.YEAR), -12*10 ), AVG(POINT), m.NAME
FROM Table1 k JOIN Table2 l ON (k.TITLE=l.TITLE) JOIN Table3 m ON (l.YEAR=m.YEAR)
GROUP BY (k.TITLE, ADD_MONTHS( TRUNC(k.YEAR), -12*10 ),m.NAME);
You can also add a WHERE clause to your query and specify some conditions; the WHERE clause should be added after the FROM clause and before the GROUP BY clause.
You can basically replace any DATE data type column with ADD_MONTHS( TRUNC(yourcolumn), -12*10 ), replacing (yourcolumn) with the column you want to subtract years from.
best of luck to you.
Thanks for your query. How can I write a query that subtracts ten years from dates?
This will subtract 10 years; you can change the number 10 to any number you want: SELECT SYSDATE, ADD_MONTHS( TRUNC(SYSDATE), -12*10 ) FROM DUAL
How can I add this to the query above?
I added the subtraction to the answer.
|
STACK_EXCHANGE
|
As a lot of people asked for my opinion about Facebook and the metaverse, and the impact on VR/AR/XR, I decided to write a short article about it.
The initial metaverse presentation video we are talking about
First things first: why now?
The boundaries between reality and XR will largely disappear sooner or later; that is a fact. Facebook is not the first company that is betting big on this; for example, Apple is betting big on creating "normal looking" AR glasses (available somewhere in 2023).
This is not the first time people expected these technologies to be more widely adopted by the general public, but up until now none of these waves caused enough of a stir, so why would it be different this time? Here is my bet:
The current generation of kids has grown up with tablets, and half of their lives already pass online, albeit via Snapchat, Fortnite, Roblox, Minecraft, Instagram and others. As these people turn into adults, they will become more perceptive of and familiar with the possibilities of AR and VR over the next decade.
About the video - hi Mark!
The odd thing is that the metaverse marketing campaign tried to be as accessible as possible to the general public, so they modeled everything as closely to the real world as possible, which resulted in a watered down version of what to expect.
Trying to model the metaverse after the real world is stupid, and not only because of a hypothesis known as "the uncanny valley".
From wikipedia: The uncanny valley hypothesis predicts that an entity appearing *almost* human will risk eliciting cold, eerie feelings in viewers.
You need to find out which aspects of the real world are useful in a particular use case and create a new virtual representation that offers a clear advantage over the real world model for that specific use case.
This is just like in software development or art: all models are wrong, but some are useful. You need to find a model where your alternate representation provides an added value.
I didn't spot anything particularly useful in the metaverse video... Sure, there were some novelties, but nothing that said "this is why we need the metaverse".
I have a few ideas about certain metaverse models that might make a huge difference, but even for somebody who's been doing this professionally for years, it is hard to imagine what will come in the next decade.
How will VR/AR/XR evolve over the next decade
I sure hope the future will not be as dystopian as in this excellent video
I am glad that Facebook launched the metaverse, as it will help general XR adoption by the public. At the beginning, it will look more like a toy with little added value, like the live face filters that you currently see everywhere: a novelty, but the added value is limited.
Some interesting use cases are already emerging, like virtual clothing, hairstyles, makeup, and our very own product configurators, but in my opinion this is only a minuscule fraction of what to expect.
Our product configurators allow consumers to upload a picture and easily visualize the product on it, after they have configured it to their liking.
However, use cases will evolve enormously once we get accustomed to digital twins, and the ways to interact with them will be considered public knowledge. Only then will we see digital twins emerging everywhere.
From wikipedia: A digital twin is a virtual representation that serves as the real-time digital counterpart of a physical object or process.
Right now we're still in the "assembly phase" of digital twins: too much low level stuff needs to be done before you can start adding value, and adding this value takes a lot of effort... In order to take off, the ability to build these digital twins needs to become more accessible to the general public.
Simple annotations are happening right now, I imagine making these interactive and having a common way to manipulate them will come next... Annotations could be attached to both virtual and real world objects...
For now these annotations are highly coupled to an almost exact replica of the real world. The minute when these annotations become intuitive to the public, we can start building more abstract annotation models, and then these abstractions will have to become known to the public.
Eventually, as the general public moves up the abstraction ladder, more interesting use cases will emerge, as making your own digital twins gets more accessible with better interaction... Once everyone can do it and has a valuable use for it, adoption will start booming.
It is not that hard to imagine that the initial Facebook/metaverse presentation will look outdated pretty fast, but I also assume that what they made was the best possible thing they could make right now; just wait and see what they will do in the next decade.
The age-old adage still stands: "people vastly overestimate what they can do in the short term, but hugely underestimate what they can do in the long term"; compound progress is real, so I have high hopes for the future!
|
OPCFW_CODE
|
Can I fix VMware Fusion to correctly map the tilde / back tick on a British keyboard?
I'm stuck with an odd keyboard layout in Fusion Player v 12.0.1 on macOS 10.15.7 (Catalina) and cannot seem to find a way around it.
The physical keyboard is a Bluetooth Apple keyboard with UK layout, with tilde/back tick between the left Shift and Z keys. This is set to 'British' in the system preferences and works fine (i.e. I can type `~§± and the correct symbols show in applications).
I'm running a new VM with Debian 10 in it but cannot find a way to get these keys to map correctly, even though copying from the Mac and pasting into the VM works fine.
Is there a solution to the crossed over keys?
Since you said the latest version, I've linked to that. Please edit this if I guessed the wrong version for your macOS, or if you are using Fusion Pro instead of Fusion Player.
Install VM Tools from the Fusion Menu (ignore notice about using open-vm-tools)
Select keyboard & mouse from the VM settings
Highlight Profile - default on the list of profiles
From tools (cog below list of profiles) - select duplicate and name 'Debian Profile'
From tools - set profile as default
Double click profile and select Keyboard and Mouse
Make sure Enable Key Mappings is checked
Make sure Language Specific Key Mappings is not checked
Use the '+' to add a new mapping of From: § To: `
Use the '+' key to add a new mapping of From: ` To: §
e.g. for the first mapping:
That should then take effect once you have a session running.
On the Debian side, you may have to configure the keyboard from the command line if you have not already set it up or are not running multiple languages.
First check if cat /etc/default/keyboard gives you:
# KEYBOARD CONFIGURATION FILE
# Consult the keyboard(5) manual page.
XKBMODEL="apple"
XKBLAYOUT="gb"
XKBVARIANT="mac"
XKBOPTIONS="lv3:ralt_switch"
BACKSPACE="guess"
If so, ignore the rest and pat yourself on the back / get a cuppa :-)
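If you want to script that check (e.g. as part of a VM setup script), a minimal sketch is below. It assumes Debian's /etc/default/keyboard path and the exact quoting shown above; adjust if your file differs.

```shell
# Succeeds when the UK-Macintosh layout is already configured
check_kbd() {
    f="${1:-/etc/default/keyboard}"
    grep -q 'XKBLAYOUT="gb"' "$f" 2>/dev/null \
        && grep -q 'XKBVARIANT="mac"' "$f" 2>/dev/null
}

if check_kbd; then
    echo "Layout already set - skip dpkg-reconfigure"
else
    echo "Run: sudo dpkg-reconfigure keyboard-configuration"
fi
```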
If not install the command line configurator:
sudo apt install keyboard-configuration
Then execute it:
sudo dpkg-reconfigure keyboard-configuration
Select:
Keyboard Model: Apple
Keyboard Layout: English (UK) : English (UK, Macintosh)
AltGr: Right Alt (AltGr)
Compose Key: ???? - I do not use one - select as needed
Control+Alt+Backspace to terminate: ???? - your choice
Then REBOOT
The other way is to add the Keyboard Layout Handler into a panel BUT this sticks a big blob of text on your menu bar! If you are happy with this (or are switching languages) then the UK keyboard can be set as:
Note the checkboxes for do not reset options and keep system layouts are clear.
Superbly well done explanation of a tricky sequence of steps
Thank you kindly; I've been messing around with kinto and other solutions to get this working and finally found your writeup.
|
STACK_EXCHANGE
|
From a new viewpoint, primary considerations related to chromatic adaptation are examined. In the long history of chromatic adaptation research, almost all models have been constructed based on psychometric testing. In this paper, a hypothetical principle of chromatic adaptation is defined as the minimized state of the metameric black component of an image, interpreted as an enhancement of the efficiency of perception. There has been no previous discussion of chromatic adaptation from the viewpoint of metameric black control.
Conventional image compression techniques based on transform coding to the frequency domain suffer from image-quality problems when images containing sharp edges, such as character patterns, are compressed. To solve this problem, we have developed the Adaptive Resolution Vector Quantization (AR-VQ) method and a systematic codebook design method applicable to all kinds of images, without using learning sequences, for 4x4 and 2x2 pixel blocks. Using these methods, we achieve far superior compression performance to JPEG and JPEG 2000. For the compression of XGA (1024x768 pixel) images including text, for instance, there is an overwhelming performance difference of 5 to 40 dB in compressed image quality.
Vector quantization (VQ) is one of the image compression technologies. VQ has the advantage that mosquito noise does not arise, but it has a problem compressing the low-frequency domain. To address this, generalized harmonic analysis (GHA) is used to code the low-frequency component, since GHA can analyze a signal in the low-frequency domain with good performance. This paper therefore proposes a compression scheme combining GHA with VQ to improve image quality, and evaluates the proposed scheme. The scheme codes the low-frequency component by GHA and applies VQ to the regions where the GHA error exceeds a threshold.
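As a sketch of the baseline technique (plain VQ with nearest-codeword assignment, not the GHA-augmented scheme or the AR-VQ codebook design described in these abstracts), the encode/decode steps look like this in NumPy; the tiny codebook is a hypothetical stand-in for a properly designed one:

```python
import numpy as np

def vq_encode(blocks, codebook):
    # squared Euclidean distance from every block (row) to every codeword
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    # each block is represented by the index of its nearest codeword
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    # reconstruction simply looks the codewords up again
    return codebook[indices]

# toy 2-entry codebook standing in for a designed one
codebook = np.array([[0.0, 0.0], [10.0, 10.0]])
blocks = np.array([[1.0, 1.0], [9.0, 9.0]])
indices = vq_encode(blocks, codebook)
reconstructed = vq_decode(indices, codebook)
```

The compression comes from transmitting only the small integer indices; quality then depends entirely on how well the codebook covers the block statistics, which is what the codebook-design methods above address.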
For computer-aided diagnosis, the 3-dimensional brain area must be extracted accurately from head MRI images. The region-growing method used in our previous work has the following problems: i) a small leakage in the ROI (region of interest) expands into other regions, and ii) an isolated region on a slice image cannot be extracted even though that region is connected 3-dimensionally. We therefore propose an efficient extraction method that prevents leakage by edge detection and extracts isolated regions by two-way region growing with restricted processing, considering the correlation coefficient between slice images. As a result, our proposed method improves extraction accuracy compared with our previous method.
SSTV is an image transmission technique that keeps the bandwidth within 3.0 kHz by transforming image data into an audio signal. In this paper, the SSTV technique is applied to the transmission of three-dimensional images, using holograms, which record three-dimensional space as fringe patterns. As a result, it is found that excellent three-dimensional images are reconstructed from the hologram data transmitted by the SSTV system.
This paper describes the development of an interactive circus system. The system is based on dynamics and is developed for edutainment. An acrobatic motion simulator and an editing tool are implemented. The system enables users to learn basic physical dynamics easily and also enables creative character animation work. The system provides a recording function for motion editing; users can save the simulation data and review the animation while operating real-time camera work.
The realistic modeling and rendering of fabrics is a challenging task in computer graphics. Applications of cloth modeling and rendering are found in computer-aided design, multimedia, and fashion. This paper discusses the importance of fabric buckling appearance and presents an image-based method to generate fabric buckling using Gabor wavelets. Since modeling the geometric details is prohibitive in terms of memory requirements and rendering time, Gabor wavelets prove to be a very useful texture-analysis tool, combining the benefits of Fourier analysis and locality. In this paper, Gabor filters are configured as multi-resolution filter banks for texture analysis, and these filter banks are synthesized to extract fabric surfaces. The proposed approach is validated by discussion and illustration of experimental results.
We present an efficient algorithm for complex 3D object reconstruction from a set of unorganized points in 3D. The algorithm is based on radial basis functions (RBFs), which provide an efficient, compact model for surface description. However, for large data sets, estimating the RBF parameters is memory- and time-consuming. In this paper we propose a new approach to estimating the RBF parameters for large data sets. Unlike previous approaches, our algorithm avoids adding the inside and outside constraints required by the distance function. In a first step, the algorithm uses only the boundary constraints and solves for the derivative of the implicit function, so the set of RBF centers, and thus the problem size, is reduced to one third. This yields significant compression and computational advantages. In a second step, the problem is further optimized by an efficient reduction of the boundary constraints. We demonstrate through experiments that this approach improves the fitting and evaluation times and the required storage.
This work generalizes the concept of blobs defined in scale-space theory to contain color information. We also propose an image categorization algorithm based on these color blobs. Our approach relies on color quantization using basic color categories and the successive classification of the resulting regions using neural networks. We show that this preattentive information can be used to categorize even low-resolution pictures taken with mobile devices. We therefore propose a framework for a mobile environment where the classification and additional contextual information, such as GPS location, are used to assist the user in tasks such as navigation or self-configuration of the device.
|
OPCFW_CODE
|
# A safety buzzer that triggers an alarm and sends an alert SMS with your
# location to family/friends
from twilio.rest import Client  # Twilio API is used for sending the SMS
import tkinter as tk  # for the GUI
from pygame import mixer  # to play the alarm
import csv  # handles the .csv contact file
import requests
import json  # handles the .json credentials file

# this .json file contains credentials for ipapi and Twilio
with open('Access_keys.json') as f:
    key = json.load(f)

# return the location message built from the ipapi response
def ip_ret():
    ipapi_access_key = key['ipapi']
    url = f"http://api.ipapi.com/check?access_key={ipapi_access_key}&output=json/"
    geo_j = requests.get(url).json()
    inf = f"\nIP address: {geo_j['ip']}" \
          f"\nPIN code: {geo_j['zip']}" \
          f"\nCity: {geo_j['city']}" \
          f"\nState: {geo_j['region_name']}" \
          f"\nCountry: {geo_j['country_name']}" \
          f"\nGeoname id: {geo_j['location']['geoname_id']}" \
          f"\nLatitude: {geo_j['latitude']}" \
          f"\nLongitude: {geo_j['longitude']}" \
          f"\n\nGoogle map link: https://www.google.com/maps/search/?api=1&query={geo_j['latitude']}%2C{geo_j['longitude']}&authuser=1"
    return inf

# send an SMS to every saved number
def send_sms():
    tw_sid = key['twilio_sid']
    tw_auth = key['twilio_auth_id']
    client = Client(tw_sid, tw_auth)
    location = ip_ret()  # look up the location once, not per contact
    with open("contact.csv", "r") as f:
        for row in csv.reader(f):
            num = "".join(row)
            client.messages.create(
                to='+91' + num,
                from_="+14355710253",
                body=f"Help please, I am in trouble. Here is my location:\n{location}")

# play the buzzer sound and send the SMS
def buzzer_fn():
    mixer.init()
    mixer.music.load("electric-buzz.wav")
    mixer.music.set_volume(1.0)
    mixer.music.play(loops=10)
    send_sms()

# build the tkinter window for the GUI
buzz = tk.Tk()
buzz.geometry("600x500")
buzz.title("Emergency Buzzer")

# heading label
fnt = ("Arial", 20, "bold")
label = tk.Label(buzz, text="EMERGENCY BUZZER!!!", fg='Red', font=fnt)
label.pack()

# buzzer button
bi = tk.PhotoImage(file="buzzer_img.png")
buzzer = tk.Button(buzz, image=bi, borderwidth=0, command=buzzer_fn)
buzzer.pack()

# helpline numbers label
fnt = ("Arial", 20, "italic")
label = tk.Label(buzz,
                 text="Police: 100  Ambulance: 102\nWomen helpline: 1091  Fire Brigade: 101",
                 fg='Red', font=fnt, pady=10)
label.pack()

buzz.mainloop()
|
STACK_EDU
|
#include "moleculegrabber.h"
#include <qdebug.h>
#include <vector>
#include <climits>
#include <cmath>

MoleculeGrabber::MoleculeGrabber()
{
}

cv::Mat MoleculeGrabber::grabMolecule(cv::Mat src){
    std::vector<std::vector<cv::Point> > contours;
    cv::Mat threshOut;
    cv::Mat img, srcBW, im;
    // Scale the image down to fit
    if(src.rows > 1000)
        cv::resize(src, im, cv::Size(), 0.1, 0.1);
    else
        im = src;
    cv::cvtColor(im, srcBW, cv::COLOR_RGB2GRAY);
    // remove noise and gridlines
    cv::bilateralFilter(srcBW, img, 50, 75, 5);
    cv::adaptiveThreshold(img, threshOut, 255, cv::ADAPTIVE_THRESH_GAUSSIAN_C,
                          cv::THRESH_BINARY_INV, 125, 0);
    // dilate to combine fragments
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(25, 25));
    cv::dilate(threshOut, threshOut, kernel);
    // find contours
    cv::findContours(threshOut, contours, CV_RETR_LIST, cv::CHAIN_APPROX_NONE, cv::Point(0, 0));
    std::vector<cv::Rect> boundRects(contours.size());
    cv::Point imgCenter = cv::Point(im.cols / 2, im.rows / 2);
    double minDist = INT_MAX;
    int nearInd = -1;
    // Find the contour that is closest to the center
    for(int i = 0; i < (int)contours.size(); i++){
        boundRects[i] = cv::boundingRect(cv::Mat(contours[i]));
        cv::Point center = (boundRects[i].tl() + boundRects[i].br()) * 0.5;
        cv::Point diff = center - imgCenter;
        double dist = sqrt((double)(diff.x * diff.x + diff.y * diff.y));
        if(dist < minDist){
            // consider only contours that are large enough
            if(boundRects[i].width * boundRects[i].height > 1000){
                minDist = dist;
                nearInd = i;
            }
        }
    }
    if(nearInd < 0)  // no sufficiently large contour found; return the scaled image
        return im;
    cv::Mat mol(im, boundRects[nearInd]);
    // cv::namedWindow( "image", cv::WINDOW_AUTOSIZE );
    // cv::imshow( "image", mol );
    // cv::waitKey(0);
    if(src.rows > 1000)
        cv::resize(mol, mol, cv::Size(), 2, 2);
    cv::namedWindow("img", cv::WINDOW_AUTOSIZE);
    cv::imshow("img", mol);
    return mol;
}
|
STACK_EDU
|
Create a Global Keyword List and Assign It to a Project
A global keyword list is a set list of keywords that you can assign to any project. (In Project Center, keywords are words you can add to project items to use for filtering and searching. When you add keywords, they appear in the Keywords column of the corresponding Project Center activity centers and dialog boxes. You can then filter the list of items using the keywords to quickly find the items you are looking for, as well as do a project search for items containing the keywords.) Users can then select the keywords from the Choose Keywords dialog box to assign them to project items. Perform the following two procedures to create a new global keyword list and assign it to a project:
You must be a Project Center administrator to perform this procedure.
If you create a new custom keyword list for a particular activity, you must link each type to at least one keyword.
1. If you are not already there, open the Project Center Administration activity center (shown below) by clicking Project Center Administration from the Tasks panel of the My Project Center activity center, or from the Activities list.
2. Click the Keywords tab, as shown below.
3. From the Keyword Lists section, click Add to open the Add Project Keyword List dialog box, as shown here:
4. Enter a name for the list in the Project Keyword List field. In this example, we will create a new action list for RFIs.
5. Select the item type you want to use the list for from the Keyword List Type drop-down list, then click OK to add the list to the Keyword Lists section.
6. Double-click the new list from the Keywords Lists section to open the Project Keyword List dialog box, as shown here, to define the keywords in the list.
7. Click Add to open the Add Project Keyword dialog box, as shown here. Enter the name (keyword), a description, and select its type. (The Type determines which type of action the keyword will appear as a choice for.) Click OK after entering each keyword. Repeat this step until you have entered all of the keywords you want to add to the list. Click OK when finished.
8. Click Save Changes to save the changes to the Project Center database. The keyword list is now available to be assigned to the corresponding item type for any project.
1. If you are not already there, open the Project Center Administration activity center by clicking Project Center Administration from the Tasks panel of the My Project Center activity center, or from the Activities list.
2. From the Projects tab, select the project you want to apply the keywords list to.
4. In this example, since we created a new list for an RFI action, click the Activity Center Setup tab and select RFIs from the Activity Center list.
If you created a list for action items, you would select Action Items from the Activity Center list; the same applies to timelines, transmittals, and so on.
5. In the Define List Values section, double-click the name of the item field you want to apply the new keyword list to, which opens the Assign Project List dialog box, as shown below. For this example it would be the UAC RFI Action List.
6. Select the name of the new list you created in the procedure above (RFI Action List 3 in this example) from the Project List drop-down to assign it to the selected item and field for this project. Refer to the following example:
7. Click OK twice to apply the changes. The new values will be available for this project for all users for the item and field selected.
|
OPCFW_CODE
|
The Tube is your number one source for a wide range of videos relating to underground electronic music culture.
🎥 I Remember, a Boom community crowdsourced short film: In 2020 due to the Covid-19 pandemic, Boom Festival had to be rescheduled.
An open call to Boomers to share their experiences and create a collective memory project to keep the Boom spirit alive.
🕉This video is dedicated to all the people that have nurtured the
Ellen Allien DJ set @ UNLOCKED party MOB disco theatre Palermo by LUCA DEA.
Follow LUCA DEA:
This video was originally posted by Luca Dea
Day 2 – Saturday, 6 October 2018:
HERNAN CATTANEO extended set
Forja Centro De Eventos
Respect above all
Respect for everyone
This video was originally posted by BUENAS NOCHES PRODUCCIONES
Watch @Miss Monique performing Live @ Radio Intense.
Follow @Miss Monique :
Radio Intense is a streaming platform dedicated to promoting artists, venues & electronic dance music. With regular live streams of DJ sets including a wide range of new electronic music (Techno, House,
Follow Radio Intense:
Audio version at:
Enjoy the best music, on Radio Intense and watch full dj sets, live acts from our broadcasting studio, festival, nightclub. Dj mixes in techno music, house, progressive house, deep house, indie and other genres. Mix of livestream, records,
Unity Live brings awareness to important causes through music. Stand together and learn more about racial injustice. We can all do better in the future. Learn and donate: bit.ly/2Y4m808
This video was originally posted by Afterlife
Pan-Pot at virtual Gashouder for Awakenings Festival 2020 | Online weekender
Find out more about the Awakenings experience here: https://awak.enin.gs/flive2020
This video was originally posted by Awakenings
This video was originally posted by Lucianocadenza
Please donate to UNICEF’s COVID-19 Relief fund here: https://www.justgiving.com/fundraising/top100djs2020
Subscribe to DJ Mag TV: http://bit.ly/Oduqwo
Top 100 DJs: http://www.djmag.com/top100djs
This video was originally posted by DJ Mag
As Selected continues to grow, so does their vision. Each and every fan inspires Selected every day, to find the best music, produce the best content and continually push boundaries to re-imagine what Selected is.
Coming from Berlin, the U-Bahn has always been a staple of the culture and our lives. For as long as
|
OPCFW_CODE
|